Dataset schema:

| Column | Type | Observed range / values |
|:--------------|:--------------|:------------------------|
| modelId | string | lengths 5–122 |
| author | string | lengths 2–42 |
| last_modified | unknown | |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string | 245 classes |
| tags | sequence | lengths 1–4.05k |
| pipeline_tag | string | 48 classes |
| createdAt | unknown | |
| card | string | lengths 1–901k |
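The rows below follow this schema. As a minimal sketch of working with such a dump (assuming it is published as a Hugging Face dataset; the dataset ID below is a placeholder, not taken from this document), the `datasets` library can load and filter it:

```python
from datasets import load_dataset

# Hypothetical dataset ID -- substitute the repo that actually hosts this dump.
ds = load_dataset("my-org/model-cards-dump", split="train")

# Example: keep only GGUF quant repos with at least 10,000 downloads.
gguf_rows = ds.filter(
    lambda row: "gguf" in (row["tags"] or []) and row["downloads"] >= 10_000
)

for row in gguf_rows.select(range(min(5, len(gguf_rows)))):
    print(row["modelId"], row["downloads"], row["likes"])
```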
mradermacher/TransLLaMA3-8B-GGUF
mradermacher
"2024-06-26T10:26:46Z"
11,002
0
transformers
[ "transformers", "gguf", "en", "base_model:TransLLaMA/TransLLaMA3-8B", "license:mit", "endpoints_compatible", "region:us" ]
null
"2024-06-26T09:56:19Z"
--- base_model: TransLLaMA/TransLLaMA3-8B language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/TransLLaMA/TransLLaMA3-8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA3-8B-GGUF/resolve/main/TransLLaMA3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
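The Usage section above defers to TheBloke's READMEs for GGUF basics. As a concrete illustration, here is a minimal sketch of fetching and running one of the quants listed in the table; the choice of the `llama-cpp-python` runtime, the context size, and the prompt are assumptions, not something the card prescribes:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the single-file quants listed above (Q4_K_M is marked "fast, recommended").
model_path = hf_hub_download(
    repo_id="mradermacher/TransLLaMA3-8B-GGUF",
    filename="TransLLaMA3-8B.Q4_K_M.gguf",
)

# Load the GGUF file; n_ctx is an illustrative context size.
llm = Llama(model_path=model_path, n_ctx=2048)

out = llm("Translate to German: The weather is nice today.", max_tokens=64)
print(out["choices"][0]["text"])
```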
mradermacher/Ramses-I-GGUF
mradermacher
"2024-06-23T10:06:30Z"
11,000
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:CoprolaliacPress/Ramses-I", "endpoints_compatible", "region:us" ]
null
"2024-06-23T07:32:09Z"
--- base_model: CoprolaliacPress/Ramses-I language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/CoprolaliacPress/Ramses-I <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Ramses-I-GGUF/resolve/main/Ramses-I.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/L3-Elpis-8B-GGUF
mradermacher
"2024-06-22T17:59:41Z"
10,994
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:P0x0/L3-Elpis-8B", "endpoints_compatible", "region:us" ]
null
"2024-06-22T16:52:10Z"
--- base_model: P0x0/L3-Elpis-8B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/P0x0/L3-Elpis-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-Elpis-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-Elpis-8B-GGUF/resolve/main/L3-Elpis-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
1-800-BAD-CODE/xlm-roberta_punctuation_fullstop_truecase
1-800-BAD-CODE
"2023-07-15T20:42:28Z"
10,992
42
generic
[ "generic", "onnx", "nemo", "text2text-generation", "punctuation", "sentence-boundary-detection", "truecasing", "true-casing", "af", "am", "ar", "bg", "bn", "de", "el", "en", "es", "et", "fa", "fi", "fr", "gu", "hi", "hr", "hu", "id", "is", "it", "ja", "kk", "kn", "ko", "ky", "lt", "lv", "mk", "ml", "mr", "nl", "or", "pa", "pl", "ps", "pt", "ro", "ru", "rw", "so", "sr", "sw", "ta", "te", "tr", "uk", "zh", "license:apache-2.0", "region:us" ]
text2text-generation
"2023-05-07T22:33:05Z"
--- license: apache-2.0 library_name: generic tags: - text2text-generation - punctuation - sentence-boundary-detection - truecasing - true-casing language: - af - am - ar - bg - bn - de - el - en - es - et - fa - fi - fr - gu - hi - hr - hu - id - is - it - ja - kk - kn - ko - ky - lt - lv - mk - ml - mr - nl - or - pa - pl - ps - pt - ro - ru - rw - so - sr - sw - ta - te - tr - uk - zh widget: - text: "hola amigo cómo estás es un día lluvioso hoy" - text: "please rsvp for the party asap preferably before 8 pm tonight" - text: "este modelo fue entrenado en un gpu a100 en realidad no se que dice esta frase lo traduje con nmt" - text: "此模型向文本添加标点符号它支持47种语言并在a100gpu上接受过训练它可以在每种语言上运行而无需每种语言的特殊路径" - text: "यह मॉडल 47 भाषाओं में विराम चिह्न जोड़ता है यह भाषा विशिष्ट पथ के बिना काम करता है यह प्रत्येक भाषा के लिए विशेष पथों के बिना प्रत्येक भाषा पर कार्य कर सकता है" --- # Model Overview This is an `xlm-roberta` model fine-tuned to restore punctuation, true-case (capitalize), and detect sentence boundaries (full stops) in 47 languages. # Usage If you just want to play with the model, the widget on this page will suffice. To use the model offline, the following snippets show how to use it both with a wrapper (that I wrote, available from `PyPI`) and manually (using the ONNX and SentencePiece models in this repo). ## Usage via `punctuators` package <details> <summary>Click to see usage with wrappers</summary> The easiest way to use this model is to install [punctuators](https://github.com/1-800-BAD-CODE/punctuators): ```bash $ pip install punctuators ``` But this is just an ONNX and SentencePiece model, so you may run it as you wish. The input to the `punctuators` API is a list (batch) of strings. Each string will be punctuated, true-cased, and segmented on predicted full stops. The output will therefore be a list of lists of strings: one list of segmented sentences per input text. To disable full stops, use `m.infer(texts, apply_sbd=False)`. The output will then be a list of strings: one punctuated, true-cased string per input text. 
<details open> <summary>Example Usage</summary> ```python from typing import List from punctuators.models import PunctCapSegModelONNX m: PunctCapSegModelONNX = PunctCapSegModelONNX.from_pretrained( "1-800-BAD-CODE/xlm-roberta_punctuation_fullstop_truecase" ) input_texts: List[str] = [ "hola mundo cómo estás estamos bajo el sol y hace mucho calor santa coloma abre los huertos urbanos a las escuelas de la ciudad", "hello friend how's it going it's snowing outside right now in connecticut a large storm is moving in", "未來疫苗將有望覆蓋3歲以上全年齡段美國與北約軍隊已全部撤離還有鐵路公路在內的各項基建的來源都將枯竭", "በባለፈው ሳምንት ኢትዮጵያ ከሶማሊያ 3 ሺህ ወታደሮቿንም እንዳስወጣች የሶማሊያው ዳልሳን ሬድዮ ዘግቦ ነበር ጸጥታ ሃይሉና ህዝቡ ተቀናጅቶ በመስራቱ በመዲናዋ ላይ የታቀደው የጥፋት ሴራ ከሽፏል", "こんにちは友人" "調子はどう" "今日は雨の日でしたね" "乾いた状態を保つために一日中室内で過ごしました", "hallo freund wie geht's es war heute ein regnerischer tag nicht wahr ich verbrachte den tag drinnen um trocken zu bleiben", "हैलो दोस्त ये कैसा चल रहा है आज बारिश का दिन था न मैंने सूखा रहने के लिए दिन घर के अंदर बिताया", "كيف تجري الامور كان يومًا ممطرًا اليوم أليس كذلك قضيت اليوم في الداخل لأظل جافًا", ] results: List[List[str]] = m.infer( texts=input_texts, apply_sbd=True, ) for input_text, output_texts in zip(input_texts, results): print(f"Input: {input_text}") print(f"Outputs:") for text in output_texts: print(f"\t{text}") print() ``` </details> <details open> <summary>Expected output</summary> ```text Input: hola mundo cómo estás estamos bajo el sol y hace mucho calor santa coloma abre los huertos urbanos a las escuelas de la ciudad Outputs: Hola mundo, ¿cómo estás? Estamos bajo el sol y hace mucho calor. Santa Coloma abre los huertos urbanos a las escuelas de la ciudad. Input: hello friend how's it going it's snowing outside right now in connecticut a large storm is moving in Outputs: Hello friend, how's it going? It's snowing outside right now. In Connecticut, a large storm is moving in. Input: 未來疫苗將有望覆蓋3歲以上全年齡段美國與北約軍隊已全部撤離還有鐵路公路在內的各項基建的來源都將枯竭 Outputs: 未來,疫苗將有望覆蓋3歲以上全年齡段。 美國與北約軍隊已全部撤離。 還有,鐵路,公路在內的各項基建的來源都將枯竭。 Input: በባለፈው ሳምንት ኢትዮጵያ ከሶማሊያ 3 ሺህ ወታደሮቿንም እንዳስወጣች የሶማሊያው ዳልሳን ሬድዮ ዘግቦ ነበር ጸጥታ ሃይሉና ህዝቡ ተቀናጅቶ በመስራቱ በመዲናዋ ላይ የታቀደው የጥፋት ሴራ ከሽፏል Outputs: በባለፈው ሳምንት ኢትዮጵያ ከሶማሊያ 3 ሺህ ወታደሮቿንም እንዳስወጣች የሶማሊያው ዳልሳን ሬድዮ ዘግቦ ነበር። ጸጥታ ሃይሉና ህዝቡ ተቀናጅቶ በመስራቱ በመዲናዋ ላይ የታቀደው የጥፋት ሴራ ከሽፏል። Input: こんにちは友人調子はどう今日は雨の日でしたね乾いた状態を保つために一日中室内で過ごしました Outputs: こんにちは、友人、調子はどう? 今日は雨の日でしたね。 乾いた状態を保つために、一日中、室内で過ごしました。 Input: hallo freund wie geht's es war heute ein regnerischer tag nicht wahr ich verbrachte den tag drinnen um trocken zu bleiben Outputs: Hallo Freund, wie geht's? Es war heute ein regnerischer Tag, nicht wahr? Ich verbrachte den Tag drinnen, um trocken zu bleiben. Input: हैलो दोस्त ये कैसा चल रहा है आज बारिश का दिन था न मैंने सूखा रहने के लिए दिन घर के अंदर बिताया Outputs: हैलो दोस्त, ये कैसा चल रहा है? आज बारिश का दिन था न, मैंने सूखा रहने के लिए दिन घर के अंदर बिताया। Input: كيف تجري الامور كان يومًا ممطرًا اليوم أليس كذلك قضيت اليوم في الداخل لأظل جافًا Outputs: كيف تجري الامور؟ كان يومًا ممطرًا اليوم، أليس كذلك؟ قضيت اليوم في الداخل لأظل جافًا. ``` </details> </details> ## Manual Usage If you want to use the ONNX and SP models without wrappers, see the following example. <details> <summary>Click to see manual usage</summary> ```python from typing import List import numpy as np import onnxruntime as ort from huggingface_hub import hf_hub_download from omegaconf import OmegaConf from sentencepiece import SentencePieceProcessor # Download the models from HF hub. 
Note: to clean up, you can find these files in your HF cache directory spe_path = hf_hub_download(repo_id="1-800-BAD-CODE/xlm-roberta_punctuation_fullstop_truecase", filename="sp.model") onnx_path = hf_hub_download(repo_id="1-800-BAD-CODE/xlm-roberta_punctuation_fullstop_truecase", filename="model.onnx") config_path = hf_hub_download( repo_id="1-800-BAD-CODE/xlm-roberta_punctuation_fullstop_truecase", filename="config.yaml" ) # Load the SP model tokenizer: SentencePieceProcessor = SentencePieceProcessor(spe_path) # noqa # Load the ONNX graph ort_session: ort.InferenceSession = ort.InferenceSession(onnx_path) # Load the model config with labels, etc. config = OmegaConf.load(config_path) # Potential classification labels before each subtoken pre_labels: List[str] = config.pre_labels # Potential classification labels after each subtoken post_labels: List[str] = config.post_labels # Special class that means "predict nothing" null_token = config.get("null_token", "<NULL>") # Special class that means "all chars in this subtoken end with a period", e.g., "am" -> "a.m." acronym_token = config.get("acronym_token", "<ACRONYM>") # Not used in this example, but if your sequence exceed this value, you need to fold it over multiple inputs max_len = config.max_length # For reference only, graph has no language-specific behavior languages: List[str] = config.languages # Encode some input text, adding BOS + EOS input_text = "hola mundo cómo estás estamos bajo el sol y hace mucho calor santa coloma abre los huertos urbanos a las escuelas de la ciudad" input_ids = [tokenizer.bos_id()] + tokenizer.EncodeAsIds(input_text) + [tokenizer.eos_id()] # Create a numpy array with shape [B, T], as the graph expects as input. # Note that we do not pass lengths to the graph; if you are using a batch, padding should be tokenizer.pad_id() and the # graph's attention mechanisms will ignore pad_id() without requiring explicit sequence lengths. input_ids_arr: np.array = np.array([input_ids]) # Run the graph, get outputs for all analytics pre_preds, post_preds, cap_preds, sbd_preds = ort_session.run(None, {"input_ids": input_ids_arr}) # Squeeze off the batch dimensions and convert to lists pre_preds = pre_preds[0].tolist() post_preds = post_preds[0].tolist() cap_preds = cap_preds[0].tolist() sbd_preds = sbd_preds[0].tolist() # Segmented sentences output_texts: List[str] = [] # Current sentence, which is built until we hit a sentence boundary prediction current_chars: List[str] = [] # Iterate over the outputs, ignoring the first (BOS) and final (EOS) predictions and tokens for token_idx in range(1, len(input_ids) - 1): token = tokenizer.IdToPiece(input_ids[token_idx]) # Simple SP decoding if token.startswith("▁") and current_chars: current_chars.append(" ") # Token-level predictions pre_label = pre_labels[pre_preds[token_idx]] post_label = post_labels[post_preds[token_idx]] # If we predict "pre-punct", insert it before this token if pre_label != null_token: current_chars.append(pre_label) # Iterate over each char. Skip SP's space token, char_start = 1 if token.startswith("▁") else 0 for token_char_idx, char in enumerate(token[char_start:], start=char_start): # If this char should be capitalized, apply upper case if cap_preds[token_idx][token_char_idx]: char = char.upper() # Append char current_chars.append(char) # if this is an acronym, add a period after every char (p.m., a.m., etc.) 
if post_label == acronym_token: current_chars.append(".") # Maybe this subtoken ends with punctuation if post_label != null_token and post_label != acronym_token: current_chars.append(post_label) # If this token is a sentence boundary, finalize the current sentence and reset if sbd_preds[token_idx]: output_texts.append("".join(current_chars)) current_chars.clear() # Maybe push final sentence, if the final token was not classified as a sentence boundary if current_chars: output_texts.append("".join(current_chars)) # Pretty print print(f"Input: {input_text}") print("Outputs:") for text in output_texts: print(f"\t{text}") ``` Expected output: ```text Input: hola mundo cómo estás estamos bajo el sol y hace mucho calor santa coloma abre los huertos urbanos a las escuelas de la ciudad Outputs: Hola mundo, ¿cómo estás? Estamos bajo el sol y hace mucho calor. Santa Coloma abre los huertos urbanos a las escuelas de la ciudad. ``` </details> &nbsp; # Model Architecture This model implements the following graph, which allows punctuation, true-casing, and fullstop prediction in every language without language-specific behavior: ![graph.png](https://cdn-uploads.huggingface.co/production/uploads/62d34c813eebd640a4f97587/WJ8aWIM4A--xzYu8FR4ht.png) <details> <summary>Click to see graph explanations</summary> We start by tokenizing the text and encoding it with XLM-Roberta, which is the pre-trained portion of this graph. Then we predict punctuation before and after every subtoken. Predicting before each token allows for Spanish inverted question marks. Predicting after every token allows for all other punctuation, including punctuation within continuous-script languages and acronyms. We use embeddings to represent the predicted punctuation tokens to inform the sentence boundary head of the punctuation that'll be inserted into the text. This allows proper full stop prediction, since certain punctuation tokens (periods, questions marks, etc.) are strongly correlated with sentence boundaries. We then shift full stop predictions to the right by one, to inform the true-casing head of where the beginning of each new sentence is. This is important since true-casing is strongly correlated to sentence boundaries. For true-casing, we predict `N` predictions per subtoken, where `N` is the number of characters in the subtoken. In practice, `N` is the maximum subtoken length and extra predictions are ignored. Essentially, true-casing is modeled as a multi-label problem. This allows for upper-casing arbitrary characters, e.g., "NATO", "MacDonald", "mRNA", etc. Applying all these predictions to the input text, we can punctuate, true-case, and split sentences in any language. </details> ## Tokenizer <details> <summary>Click to see how the XLM-Roberta tokenizer was un-hacked</summary> Instead of the hacky wrapper used by FairSeq and strangely ported (not fixed) by HuggingFace, the `xlm-roberta` SentencePiece model was adjusted to correctly encode the text. Per HF's comments, ```python # Original fairseq vocab and spm vocab must be "aligned": # Vocab | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 # -------- | ------- | ------- | ------ | ------- | --- | --- | --- | ----- | ----- | ---- # fairseq | '<s>' | '<pad>' | '</s>' | '<unk>' | ',' | '.' | '▁' | 's' | '▁de' | '-' # spm | '<unk>' | '<s>' | '</s>' | ',' | '.' 
| '▁' | 's' | '▁de' | '-' | '▁a' ``` The SP model was un-hacked with the following snippet (SentencePiece experts, let me know if there is a problem here): ```python from sentencepiece import SentencePieceProcessor from sentencepiece.sentencepiece_model_pb2 import ModelProto m = ModelProto() m.ParseFromString(open("/path/to/xlmroberta/sentencepiece.bpe.model", "rb").read()) pieces = list(m.pieces) pieces = ( [ ModelProto.SentencePiece(piece="<s>", type=ModelProto.SentencePiece.Type.CONTROL), ModelProto.SentencePiece(piece="<pad>", type=ModelProto.SentencePiece.Type.CONTROL), ModelProto.SentencePiece(piece="</s>", type=ModelProto.SentencePiece.Type.CONTROL), ModelProto.SentencePiece(piece="<unk>", type=ModelProto.SentencePiece.Type.UNKNOWN), ] + pieces[3:] + [ModelProto.SentencePiece(piece="<mask>", type=ModelProto.SentencePiece.Type.USER_DEFINED)] ) del m.pieces[:] m.pieces.extend(pieces) with open("/path/to/new/sp.model", "wb") as f: f.write(m.SerializeToString()) ``` Now we can use just the SP model without a wrapper. </details> ## Post-Punctuation Tokens This model predicts the following set of punctuation tokens after each subtoken: | Token | Description | Relevant Languages | | ---: | :---------- | :----------- | | \<NULL\> | No punctuation | All | | \<ACRONYM\> | Every character in this subword is followed by a period | Primarily English, some European | | . | Latin full stop | Many | | , | Latin comma | Many | | ? | Latin question mark | Many | | ? | Full-width question mark | Chinese, Japanese | | , | Full-width comma | Chinese, Japanese | | 。 | Full-width full stop | Chinese, Japanese | | 、 | Ideographic comma | Chinese, Japanese | | ・ | Middle dot | Japanese | | । | Danda | Hindi, Bengali, Oriya | | ؟ | Arabic question mark | Arabic | | ; | Greek question mark | Greek | | ። | Ethiopic full stop | Amharic | | ፣ | Ethiopic comma | Amharic | | ፧ | Ethiopic question mark | Amharic | ## Pre-Punctuation Tokens This model predicts the following set of punctuation tokens before each subword: | Token | Description | Relevant Languages | | ---: | :---------- | :----------- | | \<NULL\> | No punctuation | All | | ¿ | Inverted question mark | Spanish | # Training Details This model was trained in the NeMo framework on an A100 for approximately 7 hours. You may view the `tensorboard` log on [tensorboard.dev](https://tensorboard.dev/experiment/xxnULI1aTeK37vUDL4ejiw/#scalars). This model was trained with News Crawl data from WMT. 1M lines of text for each language was used, except for a few low-resource languages which may have used less. Languages were chosen based on whether the News Crawl corpus contained enough reliable-quality data as judged by the author. # Limitations This model was trained on news data, and may not perform well on conversational or informal data. This model is unlikely to be of production quality. It was trained with "only" 1M lines per language, and the dev sets may have been noisy due to the nature of web-scraped news data. This model over-predicts Spanish question marks, especially the inverted question mark `¿` (see metrics below). Since `¿` is a rare token, especially in the context of a 47-language model, Spanish questions were over-sampled by selecting more of these sentences from additional training data that was not used. However, this seems to have "over-corrected" the problem and a lot of Spanish question marks are predicted. The model may also over-predict commas. 
If you find any general limitations not mentioned here, let me know so all limitations can be addressed in the next fine-tuning. # Evaluation In these metrics, keep in mind that 1. The data is noisy 2. Sentence boundaries and true-casing are conditioned on predicted punctuation, which is the most difficult task and sometimes incorrect. When conditioning on reference punctuation, true-casing and SBD are practically 100% for most languages. 3. Punctuation can be subjective. E.g., `Hola mundo, ¿cómo estás?` or `Hola mundo. ¿Cómo estás?` When the sentences are longer and more practical, these ambiguities abound and affect all 3 analytics. ## Test Data and Example Generation Each test example was generated using the following procedure: 1. Concatenate 11 random sentences (1 + 10 for each sentence in the test set) 2. Lower-case the concatenated sentence 3. Remove all punctuation Targets are generated as we lower-case the text and remove punctuation. The data is a held-out portion of News Crawl, which has been deduplicated. 3,000 lines of data per language were used, generating 3,000 unique examples of 11 sentences each. We generate 3,000 examples, where example `i` begins with sentence `i` and is followed by 10 random sentences selected from the 3,000-sentence test set. For measuring true-casing and sentence boundary detection, reference punctuation tokens were used for conditioning (see graph above). If we use predicted punctuation instead, then incorrect punctuation will result in true-casing and SBD targets not aligning correctly and these metrics will be artificially low. ## Selected Language Evaluation Reports For now, metrics for a few selected languages are shown below. Given the amount of work required to collect and pretty-print metrics in 47 languages, I'll add more eventually. Expand any of the following tabs to see metrics for that language. <details> <summary>English</summary> ```text punct_post test report: label precision recall f1 support <NULL> (label_id: 0) 99.25 98.43 98.84 564908 <ACRONYM> (label_id: 1) 63.14 84.67 72.33 613 . (label_id: 2) 90.97 93.91 92.42 32040 , (label_id: 3) 73.95 84.32 78.79 24271 ? (label_id: 4) 79.05 81.94 80.47 1041 ? 
(label_id: 5) 0.00 0.00 0.00 0 , (label_id: 6) 0.00 0.00 0.00 0 。 (label_id: 7) 0.00 0.00 0.00 0 、 (label_id: 8) 0.00 0.00 0.00 0 ・ (label_id: 9) 0.00 0.00 0.00 0 । (label_id: 10) 0.00 0.00 0.00 0 ؟ (label_id: 11) 0.00 0.00 0.00 0 ، (label_id: 12) 0.00 0.00 0.00 0 ; (label_id: 13) 0.00 0.00 0.00 0 ። (label_id: 14) 0.00 0.00 0.00 0 ፣ (label_id: 15) 0.00 0.00 0.00 0 ፧ (label_id: 16) 0.00 0.00 0.00 0 ------------------- micro avg 97.60 97.60 97.60 622873 macro avg 81.27 88.65 84.57 622873 weighted avg 97.77 97.60 97.67 622873 ``` ``` cap test report: label precision recall f1 support LOWER (label_id: 0) 99.72 99.85 99.78 2134956 UPPER (label_id: 1) 96.33 93.52 94.91 91996 ------------------- micro avg 99.59 99.59 99.59 2226952 macro avg 98.03 96.68 97.34 2226952 weighted avg 99.58 99.59 99.58 2226952 ``` ``` seg test report: label precision recall f1 support NOSTOP (label_id: 0) 99.99 99.98 99.99 591540 FULLSTOP (label_id: 1) 99.61 99.89 99.75 34333 ------------------- micro avg 99.97 99.97 99.97 625873 macro avg 99.80 99.93 99.87 625873 weighted avg 99.97 99.97 99.97 625873 ``` </details> <details> <summary>Spanish</summary> ```text punct_pre test report: label precision recall f1 support <NULL> (label_id: 0) 99.94 99.89 99.92 636941 ¿ (label_id: 1) 56.73 71.35 63.20 1288 ------------------- micro avg 99.83 99.83 99.83 638229 macro avg 78.34 85.62 81.56 638229 weighted avg 99.85 99.83 99.84 638229 ``` ``` punct_post test report: label precision recall f1 support <NULL> (label_id: 0) 99.19 98.41 98.80 578271 <ACRONYM> (label_id: 1) 30.10 56.36 39.24 55 . (label_id: 2) 91.92 93.12 92.52 30856 , (label_id: 3) 72.98 82.44 77.42 27761 ? (label_id: 4) 52.77 71.85 60.85 1286 ? (label_id: 5) 0.00 0.00 0.00 0 , (label_id: 6) 0.00 0.00 0.00 0 。 (label_id: 7) 0.00 0.00 0.00 0 、 (label_id: 8) 0.00 0.00 0.00 0 ・ (label_id: 9) 0.00 0.00 0.00 0 । (label_id: 10) 0.00 0.00 0.00 0 ؟ (label_id: 11) 0.00 0.00 0.00 0 ، (label_id: 12) 0.00 0.00 0.00 0 ; (label_id: 13) 0.00 0.00 0.00 0 ። (label_id: 14) 0.00 0.00 0.00 0 ፣ (label_id: 15) 0.00 0.00 0.00 0 ፧ (label_id: 16) 0.00 0.00 0.00 0 ------------------- micro avg 97.40 97.40 97.40 638229 macro avg 69.39 80.44 73.77 638229 weighted avg 97.60 97.40 97.48 638229 ``` ``` cap test report: label precision recall f1 support LOWER (label_id: 0) 99.82 99.86 99.84 2324724 UPPER (label_id: 1) 95.92 94.70 95.30 79266 ------------------- micro avg 99.69 99.69 99.69 2403990 macro avg 97.87 97.28 97.57 2403990 weighted avg 99.69 99.69 99.69 2403990 ``` ``` seg test report: label precision recall f1 support NOSTOP (label_id: 0) 99.99 99.96 99.98 607057 FULLSTOP (label_id: 1) 99.31 99.88 99.60 34172 ------------------- micro avg 99.96 99.96 99.96 641229 macro avg 99.65 99.92 99.79 641229 weighted avg 99.96 99.96 99.96 641229 ``` </details> <details> <summary>Amharic</summary> ```text punct_post test report: label precision recall f1 support <NULL> (label_id: 0) 99.83 99.28 99.56 729664 <ACRONYM> (label_id: 1) 0.00 0.00 0.00 0 . (label_id: 2) 0.00 0.00 0.00 0 , (label_id: 3) 0.00 0.00 0.00 0 ? (label_id: 4) 0.00 0.00 0.00 0 ? 
(label_id: 5) 0.00 0.00 0.00 0 , (label_id: 6) 0.00 0.00 0.00 0 。 (label_id: 7) 0.00 0.00 0.00 0 、 (label_id: 8) 0.00 0.00 0.00 0 ・ (label_id: 9) 0.00 0.00 0.00 0 । (label_id: 10) 0.00 0.00 0.00 0 ؟ (label_id: 11) 0.00 0.00 0.00 0 ، (label_id: 12) 0.00 0.00 0.00 0 ; (label_id: 13) 0.00 0.00 0.00 0 ። (label_id: 14) 91.27 97.90 94.47 25341 ፣ (label_id: 15) 61.93 82.11 70.60 5818 ፧ (label_id: 16) 67.41 81.73 73.89 1177 ------------------- micro avg 99.08 99.08 99.08 762000 macro avg 80.11 90.26 84.63 762000 weighted avg 99.21 99.08 99.13 762000 ``` ``` cap test report: label precision recall f1 support LOWER (label_id: 0) 98.40 98.03 98.21 1064 UPPER (label_id: 1) 71.23 75.36 73.24 69 ------------------- micro avg 96.65 96.65 96.65 1133 macro avg 84.81 86.69 85.73 1133 weighted avg 96.74 96.65 96.69 1133 ``` ``` seg test report: label precision recall f1 support NOSTOP (label_id: 0) 99.99 99.85 99.92 743158 FULLSTOP (label_id: 1) 95.20 99.62 97.36 21842 ------------------- micro avg 99.85 99.85 99.85 765000 macro avg 97.59 99.74 98.64 765000 weighted avg 99.85 99.85 99.85 765000 ``` </details> <details> <summary>Chinese</summary> ```text punct_post test report: label precision recall f1 support <NULL> (label_id: 0) 99.53 97.31 98.41 435611 <ACRONYM> (label_id: 1) 0.00 0.00 0.00 0 . (label_id: 2) 0.00 0.00 0.00 0 , (label_id: 3) 0.00 0.00 0.00 0 ? (label_id: 4) 0.00 0.00 0.00 0 ? (label_id: 5) 81.85 87.31 84.49 1513 , (label_id: 6) 74.08 93.67 82.73 35921 。 (label_id: 7) 96.51 96.93 96.72 32097 、 (label_id: 8) 0.00 0.00 0.00 0 ・ (label_id: 9) 0.00 0.00 0.00 0 । (label_id: 10) 0.00 0.00 0.00 0 ؟ (label_id: 11) 0.00 0.00 0.00 0 ، (label_id: 12) 0.00 0.00 0.00 0 ; (label_id: 13) 0.00 0.00 0.00 0 ። (label_id: 14) 0.00 0.00 0.00 0 ፣ (label_id: 15) 0.00 0.00 0.00 0 ፧ (label_id: 16) 0.00 0.00 0.00 0 ------------------- micro avg 97.00 97.00 97.00 505142 macro avg 87.99 93.81 90.59 505142 weighted avg 97.48 97.00 97.15 505142 ``` ``` cap test report: label precision recall f1 support LOWER (label_id: 0) 94.89 94.98 94.94 2951 UPPER (label_id: 1) 81.34 81.03 81.18 796 ------------------- micro avg 92.02 92.02 92.02 3747 macro avg 88.11 88.01 88.06 3747 weighted avg 92.01 92.02 92.01 3747 ``` ``` seg test report: label precision recall f1 support NOSTOP (label_id: 0) 99.99 99.97 99.98 473642 FULLSTOP (label_id: 1) 99.55 99.90 99.72 34500 ------------------- micro avg 99.96 99.96 99.96 508142 macro avg 99.77 99.93 99.85 508142 weighted avg 99.96 99.96 99.96 508142 ``` </details> <details> <summary>Japanese</summary> ```text punct_post test report: label precision recall f1 support <NULL> (label_id: 0) 99.34 95.90 97.59 406341 <ACRONYM> (label_id: 1) 0.00 0.00 0.00 0 . (label_id: 2) 0.00 0.00 0.00 0 , (label_id: 3) 0.00 0.00 0.00 0 ? (label_id: 4) 0.00 0.00 0.00 0 ? 
(label_id: 5) 70.55 73.56 72.02 1456 , (label_id: 6) 0.00 0.00 0.00 0 。 (label_id: 7) 94.38 96.95 95.65 32537 、 (label_id: 8) 54.28 87.62 67.03 18610 ・ (label_id: 9) 28.18 71.64 40.45 1100 । (label_id: 10) 0.00 0.00 0.00 0 ؟ (label_id: 11) 0.00 0.00 0.00 0 ، (label_id: 12) 0.00 0.00 0.00 0 ; (label_id: 13) 0.00 0.00 0.00 0 ። (label_id: 14) 0.00 0.00 0.00 0 ፣ (label_id: 15) 0.00 0.00 0.00 0 ፧ (label_id: 16) 0.00 0.00 0.00 0 ------------------- micro avg 95.51 95.51 95.51 460044 macro avg 69.35 85.13 74.55 460044 weighted avg 96.91 95.51 96.00 460044 ``` ``` cap test report: label precision recall f1 support LOWER (label_id: 0) 92.33 94.03 93.18 4174 UPPER (label_id: 1) 83.51 79.46 81.43 1587 ------------------- micro avg 90.02 90.02 90.02 5761 macro avg 87.92 86.75 87.30 5761 weighted avg 89.90 90.02 89.94 5761 ``` ``` seg test report: label precision recall f1 support NOSTOP (label_id: 0) 99.99 99.92 99.96 428544 FULLSTOP (label_id: 1) 99.07 99.87 99.47 34500 ------------------- micro avg 99.92 99.92 99.92 463044 macro avg 99.53 99.90 99.71 463044 weighted avg 99.92 99.92 99.92 463044 ``` </details> <details> <summary>Hindi</summary> ```text punct_post test report: label precision recall f1 support <NULL> (label_id: 0) 99.75 99.44 99.59 560358 <ACRONYM> (label_id: 1) 0.00 0.00 0.00 0 . (label_id: 2) 0.00 0.00 0.00 0 , (label_id: 3) 69.55 78.48 73.75 8084 ? (label_id: 4) 63.30 87.07 73.31 317 ? (label_id: 5) 0.00 0.00 0.00 0 , (label_id: 6) 0.00 0.00 0.00 0 。 (label_id: 7) 0.00 0.00 0.00 0 、 (label_id: 8) 0.00 0.00 0.00 0 ・ (label_id: 9) 0.00 0.00 0.00 0 । (label_id: 10) 96.92 98.66 97.78 32118 ؟ (label_id: 11) 0.00 0.00 0.00 0 ، (label_id: 12) 0.00 0.00 0.00 0 ; (label_id: 13) 0.00 0.00 0.00 0 ። (label_id: 14) 0.00 0.00 0.00 0 ፣ (label_id: 15) 0.00 0.00 0.00 0 ፧ (label_id: 16) 0.00 0.00 0.00 0 ------------------- micro avg 99.11 99.11 99.11 600877 macro avg 82.38 90.91 86.11 600877 weighted avg 99.17 99.11 99.13 600877 ``` ``` cap test report: label precision recall f1 support LOWER (label_id: 0) 97.19 96.72 96.95 2466 UPPER (label_id: 1) 89.14 90.60 89.86 734 ------------------- micro avg 95.31 95.31 95.31 3200 macro avg 93.17 93.66 93.41 3200 weighted avg 95.34 95.31 95.33 3200 ``` ``` seg test report: label precision recall f1 support NOSTOP (label_id: 0) 100.00 99.99 99.99 569472 FULLSTOP (label_id: 1) 99.82 99.99 99.91 34405 ------------------- micro avg 99.99 99.99 99.99 603877 macro avg 99.91 99.99 99.95 603877 weighted avg 99.99 99.99 99.99 603877 ``` </details> <details> <summary>Arabic</summary> ```text punct_post test report: label precision recall f1 support <NULL> (label_id: 0) 99.30 96.94 98.10 688043 <ACRONYM> (label_id: 1) 93.33 77.78 84.85 18 . (label_id: 2) 93.31 93.78 93.54 28175 , (label_id: 3) 0.00 0.00 0.00 0 ? (label_id: 4) 0.00 0.00 0.00 0 ? 
(label_id: 5) 0.00 0.00 0.00 0 , (label_id: 6) 0.00 0.00 0.00 0 。 (label_id: 7) 0.00 0.00 0.00 0 、 (label_id: 8) 0.00 0.00 0.00 0 ・ (label_id: 9) 0.00 0.00 0.00 0 । (label_id: 10) 0.00 0.00 0.00 0 ؟ (label_id: 11) 65.93 82.79 73.40 860 ، (label_id: 12) 44.89 79.20 57.30 20941 ; (label_id: 13) 0.00 0.00 0.00 0 ። (label_id: 14) 0.00 0.00 0.00 0 ፣ (label_id: 15) 0.00 0.00 0.00 0 ፧ (label_id: 16) 0.00 0.00 0.00 0 ------------------- micro avg 96.29 96.29 96.29 738037 macro avg 79.35 86.10 81.44 738037 weighted avg 97.49 96.29 96.74 738037 ``` ``` cap test report: label precision recall f1 support LOWER (label_id: 0) 97.10 99.49 98.28 4137 UPPER (label_id: 1) 98.71 92.89 95.71 1729 ------------------- micro avg 97.55 97.55 97.55 5866 macro avg 97.90 96.19 96.99 5866 weighted avg 97.57 97.55 97.52 5866 ``` ``` seg test report: label precision recall f1 support NOSTOP (label_id: 0) 99.99 99.97 99.98 710456 FULLSTOP (label_id: 1) 99.39 99.85 99.62 30581 ------------------- micro avg 99.97 99.97 99.97 741037 macro avg 99.69 99.91 99.80 741037 weighted avg 99.97 99.97 99.97 741037 ``` </details> &nbsp; # Extra Stuff ## Acronyms, abbreviations, and bi-capitalized words This section briefly demonstrates the model's behavior when presented with the following: 1. Acronyms: "NATO" 2. Fake acronyms: "NHTG" in place of "NATO" 3. Ambiguous terms which could be an acronym or proper noun: "Tuny" 4. Bi-capitalized words: "McDavid" 5. Initialisms: "p.m." <details open> <summary>Acronyms, etc. inputs</summary> ```python from typing import List from punctuators.models import PunctCapSegModelONNX m: PunctCapSegModelONNX = PunctCapSegModelONNX.from_pretrained( "1-800-BAD-CODE/xlm-roberta_punctuation_fullstop_truecase" ) input_texts = [ "the us is a nato member as a nato member the country enjoys security guarantees notably article 5", "the us is a nhtg member as a nhtg member the country enjoys security guarantees notably article 5", "the us is a tuny member as a tuny member the country enjoys security guarantees notably article 5", "connor andrew mcdavid is a canadian professional ice hockey centre and captain of the edmonton oilers of the national hockey league the oilers selected him first overall in the 2015 nhl entry draft mcdavid spent his childhood playing ice hockey against older children", "please rsvp for the party asap preferably before 8 pm tonight", ] results: List[List[str]] = m.infer( texts=input_texts, apply_sbd=True, ) for input_text, output_texts in zip(input_texts, results): print(f"Input: {input_text}") print(f"Outputs:") for text in output_texts: print(f"\t{text}") print() ``` </details> <details open> <summary>Expected output</summary> ```text Input: the us is a nato member as a nato member the country enjoys security guarantees notably article 5 Outputs: The U.S. is a NATO member. As a NATO member, the country enjoys security guarantees, notably Article 5. Input: the us is a nhtg member as a nhtg member the country enjoys security guarantees notably article 5 Outputs: The U.S. is a NHTG member. As a NHTG member, the country enjoys security guarantees, notably Article 5. Input: the us is a tuny member as a tuny member the country enjoys security guarantees notably article 5 Outputs: The U.S. is a Tuny member. As a Tuny member, the country enjoys security guarantees, notably Article 5. 
Input: connor andrew mcdavid is a canadian professional ice hockey centre and captain of the edmonton oilers of the national hockey league the oilers selected him first overall in the 2015 nhl entry draft mcdavid spent his childhood playing ice hockey against older children Outputs: Connor Andrew McDavid is a Canadian professional ice hockey centre and captain of the Edmonton Oilers of the National Hockey League. The Oilers selected him first overall in the 2015 NHL entry draft. McDavid spent his childhood playing ice hockey against older children. Input: please rsvp for the party asap preferably before 8 pm tonight Outputs: Please RSVP for the party ASAP, preferably before 8 p.m. tonight. ``` </details>
mradermacher/Llama-3-8B-OpenHermes-243K-GGUF
mradermacher
"2024-06-28T11:42:59Z"
10,987
0
transformers
[ "transformers", "gguf", "axolotl", "generated_from_trainer", "en", "base_model:Magpie-Align/Llama-3-8B-OpenHermes-243K", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-27T01:19:28Z"
--- base_model: Magpie-Align/Llama-3-8B-OpenHermes-243K language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - axolotl - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Magpie-Align/Llama-3-8B-OpenHermes-243K <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-OpenHermes-243K-GGUF/resolve/main/Llama-3-8B-OpenHermes-243K.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/SharkOgno-7b-Task-GGUF
mradermacher
"2024-06-22T19:06:42Z"
10,984
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "powermove72/Shark-1", "eren23/OGNO-7b-dpo-truthful", "en", "base_model:powermove72/SharkOgno-7b-Task", "endpoints_compatible", "region:us" ]
null
"2024-06-22T17:31:07Z"
--- base_model: powermove72/SharkOgno-7b-Task language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - powermove72/Shark-1 - eren23/OGNO-7b-dpo-truthful --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/powermove72/SharkOgno-7b-Task <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/SharkOgno-7b-Task-GGUF/resolve/main/SharkOgno-7b-Task.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions 
you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
TensorFamily/SigmaJourney
TensorFamily
"2024-06-28T14:36:11Z"
10,978
5
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "full", "pixart", "pixart sigma", "base_model:PixArt-alpha/PixArt-Sigma-XL-2-1024-MS", "license:creativeml-openrail-m", "diffusers:PixArtSigmaPipeline", "region:us" ]
text-to-image
"2024-06-21T02:33:18Z"
--- base_model: PixArt-alpha/PixArt-Sigma-XL-2-1024-MS library_name: diffusers license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - full - pixart - pixart sigma inference: true widget: - text: A blonde sexy girl, wearing glasses at latex shirt and a blue beanie with a tattoo, blue and white, highly detailed, sublime, extremely beautiful, sharp focus, refined, cinematic, intricate, elegant, dynamic, rich deep colors, bright color, shining light, attractive, cute, pretty, background full, epic composition, dramatic atmosphere, radiant, professional, stunning parameters: negative_prompt: blurry, cropped, ugly output: url: ./assets/1.png - text: a wizard with a glowing staff and a glowing hat, colorful magic, dramatic atmosphere, sharp focus, highly detailed, cinematic, original composition, fine detail, intricate, elegant, creative, color spread, shiny, amazing, symmetry, illuminated, inspired, pretty, attractive, artistic, dynamic background, relaxed, professional, extremely inspirational, beautiful, determined, cute, adorable, best parameters: negative_prompt: blurry, cropped, ugly output: url: ./assets/2.png - text: girl in modern car, intricate, elegant, highly detailed, extremely complimentary colors, beautiful, glowing aesthetic, pretty, dramatic light, sharp focus, perfect composition, clear artistic color, calm professional background, precise, joyful, emotional, unique, cute, best, gorgeous, great delicate, expressive, thought, iconic, fine, awesome, creative, winning, charming, enhanced parameters: negative_prompt: blurry, cropped, ugly output: url: ./assets/3.png - text: A girl stands amidst scattered glass shards, surrounded by a beautifully crafted and expansive world. The scene is depicted from a dynamic angle, emphasizing her determined expression. The background features vast landscapes with floating crystals and soft, glowing lights that create a mystical and grand atmosphere. parameters: negative_prompt: blurry, cropped, ugly output: url: ./assets/ComfyUI_PixArt_00040_.png - text: A girl stands amidst scattered glass shards, surrounded by a beautifully crafted and expansive world. The scene is depicted from a dynamic angle, emphasizing her determined expression. The background features vast landscapes with floating crystals and soft, glowing lights that create a mystical and grand atmosphere. parameters: negative_prompt: blurry, cropped, ugly output: url: ./assets/ComfyUI_PixArt_00036_.png - text: A close-up shot of a beautiful girl in a serene world. She has white hair and is blindfolded, with a calm expression. Her hands are pressed together in a prayer pose, with fingers interlaced and palms touching. The background is softly blurred, enhancing her ethereal presence. 
parameters: negative_prompt: blurry, cropped, ugly output: url: ./assets/ComfyUI_PixArt_00041_.png --- # SigmaJourney: PixartSigma + MidJourney v6 <Gallery /> ## Inference ### ComfyUI - Download the model file `transformer/diffusion_pytorch_model.safetensors` and put it into `ComfyUI/models/checkpoints` - Use the ExtraModels node: https://github.com/city96/ComfyUI_ExtraModels?tab=readme-ov-file#pixart ![image/png](https://cdn-uploads.huggingface.co/production/uploads/643c7e91b409fef15e0bd11b/MJfTShin1fYOOCo4mTv2-.png) ```python import torch from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler from diffusers.models import PixArtTransformer2DModel model_id = "TensorFamily/SigmaJourney" negative_prompt = "malformed, disgusting, overexposed, washed-out" pipeline = DiffusionPipeline.from_pretrained("PixArt-alpha/PixArt-Sigma-XL-2-1024-MS", torch_dtype=torch.float16) pipeline.transformer = PixArtTransformer2DModel.from_pretrained(model_id, subfolder="transformer", torch_dtype=torch.float16) pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) device = 'cuda' if torch.cuda.is_available() else 'cpu' pipeline.to(device) prompt = "On the left, there is a red cube. On the right, there is a blue sphere. On top of the red cube is a dog. On top of the blue sphere is a cat" image = pipeline( prompt=prompt, negative_prompt='blurry, cropped, ugly', num_inference_steps=30, generator=torch.Generator(device=device).manual_seed(1641421826), width=1024, height=1024, guidance_scale=5.5, ).images[0] image.save("output.png") ```
QuantFactory/Arcee-Spark-GGUF
QuantFactory
"2024-06-25T10:32:53Z"
10,976
0
null
[ "gguf", "region:us" ]
null
"2024-06-25T08:15:46Z"
Entry not found
neulab/codebert-javascript
neulab
"2023-02-27T20:56:02Z"
10,975
9
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "arxiv:2302.05527", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-09-23T15:19:35Z"
This is a `microsoft/codebert-base-mlm` model, trained for 1,000,000 steps (with `batch_size=32`) on **JavaScript** code from the `codeparrot/github-code-clean` dataset, on the masked-language-modeling task. It is intended to be used in CodeBERTScore: [https://github.com/neulab/code-bert-score](https://github.com/neulab/code-bert-score), but can be used for any other model or task. For more information, see: [https://github.com/neulab/code-bert-score](https://github.com/neulab/code-bert-score) ## Citation If you use this model for research, please cite: ``` @article{zhou2023codebertscore, url = {https://arxiv.org/abs/2302.05527}, author = {Zhou, Shuyan and Alon, Uri and Agarwal, Sumit and Neubig, Graham}, title = {CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code}, publisher = {arXiv}, year = {2023}, } ```
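Since this is a standard RoBERTa-style masked-language model, it can also be loaded directly with the `transformers` library. Below is a minimal sketch; the example input and the number of returned candidates are illustrative, not from the model card:

```python
from transformers import pipeline

# Load the JavaScript CodeBERT checkpoint as a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="neulab/codebert-javascript")

# RoBERTa-style models use "<mask>" as the mask token.
code = "const total = items.<mask>((acc, x) => acc + x, 0);"
for candidate in fill_mask(code, top_k=5):
    print(candidate["token_str"], round(candidate["score"], 4))
```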
WizardLMTeam/WizardMath-7B-V1.1
WizardLMTeam
"2024-01-12T11:39:28Z"
10,961
72
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "en", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-19T08:09:17Z"
--- inference: false language: - en pipeline_tag: text-generation --- ## WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF) <p style="font-size:28px;" align="center"> 🏠 <a href="https://wizardlm.github.io/" target="_blank">Home Page</a> </p> <p align="center"> <p align="center"> 🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> </p> <p align="center"> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## News [12/19/2023] 🔥 We released **WizardMath-7B-V1.1** trained from Mistral-7B, the **SOTA 7B math LLM**, achieves **83.2 pass@1** on GSM8k, and **33.0 pass@1** on MATH. Use this [[**Demo**](http://47.103.63.15:50083/)] to chat with it. [12/19/2023] 🔥 **WizardMath-7B-V1.1** outperforms **ChatGPT 3.5**, **Gemini Pro**, **Mixtral MOE**, and **Claude Instant** on GSM8K pass@1. [12/19/2023] 🔥 **WizardMath-7B-V1.1** is comparable with **ChatGPT 3.5**, **Gemini Pro**, and surpasses **Mixtral MOE** on MATH pass@1. | Model | Checkpoint | Paper | GSM8k | MATH | Demo| | ----- |------| ---- |------|-------|-------| | **WizardMath-7B-V1.1** | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.1" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **83.2** | **33.0** |[[**Demo**](http://47.103.63.15:50083/)] | | WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** || | WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** || | WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | | ## [12/19/2023] Comparing WizardMath-7B-V1.1 with other open source 7B size math LLMs. | Model | GSM8k Pass@1 | MATH Pass@1 | | ----- |------| ---- | | MPT-7B | 6.8 | 3.0 | |Llama 1-7B | 11.0 | 2.9 | |Llama 2-7B|12.3 |2.8 | |Yi-6b| 32.6 |5.8 | |Mistral-7B|37.8 |9.1 | |Qwen-7b|47.8 |9.3 | | RFT-7B | 50.3 | -- | | MAmmoTH-7B (COT) | 50.5 | 10.4 | | WizardMath-7B-V1.0 | 54.9 | 10.7 | |Abel-7B-001 |59.7 |13 | | MetaMath-7B | 66.5 | 19.8 | | Arithmo-Mistral-7B | 74.7 | 25.3 | |MetaMath-Mistral-7B|77.7 |28.2 | |Abel-7B-002 | 80.4 | 29.5 | | **WizardMath-7B-V1.1** | **83.2** | **33.0** | ## [12/19/2023] Comparing WizardMath-7B-V1.1 with large open source (30B~70B) LLMs. 
| Model | GSM8k Pass@1 | MATH Pass@1 |
| ----- |------| ---- |
| Llemma-34B | 51.5 | 25.0 |
| Minerva-62B | 52.4 | 27.6 |
| Llama 2-70B | 56.8 | 13.5 |
| DeepSeek 67B | 63.4 | -- |
| Grok 33B | 62.9 | 23.9 |
| MAmmoTH-70B | 72.4 | 21.1 |
| Yi-34B | 67.9 | 15.9 |
| Mixtral 8x7B | 74.4 | 28.4 |
| MetaMath-70B | 82.3 | 26.6 |
| **WizardMath-7B-V1.1** | **83.2** | **33.0** |

## ❗ Data Contamination Check:

Before model training, we carefully and rigorously checked all the training data, and used multiple deduplication methods to verify and prevent data leakage on the GSM8k and MATH test sets.

🔥
❗<b>Note on model system prompt usage:</b>

Please use **exactly the same system prompts** as we do, and note that we do not guarantee the accuracy of the **quantized versions**.

**Default version:**

```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
```

**CoT Version:** (❗For **simple** math questions, we do NOT recommend using the CoT prompt.)

```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```

## Inference WizardMath Demo Script

We provide the WizardMath inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).

## Citation

Please cite the repo if you use the data, method or code in this repo.

```
@article{luo2023wizardmath,
  title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct},
  author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei},
  journal={arXiv preprint arXiv:2308.09583},
  year={2023}
}
```
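For reference, a minimal sketch of loading the model with `transformers` and generating with the default prompt format shown above; the instruction text and generation settings are illustrative, and `device_map="auto"` assumes `accelerate` is installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLMTeam/WizardMath-7B-V1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

instruction = "James buys 5 packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```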
bartowski/Qwen2-7B-Instruct-deccp-GGUF
bartowski
"2024-06-10T18:53:11Z"
10,961
5
null
[ "gguf", "text-generation", "en", "zh", "dataset:augmxnt/deccp", "base_model:Qwen/Qwen2-7B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
"2024-06-09T14:14:11Z"
--- license: apache-2.0 datasets: - augmxnt/deccp language: - en - zh base_model: Qwen/Qwen2-7B-Instruct quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of Qwen2-7B-Instruct-deccp Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3086">b3086</a> for quantization. Original model: https://huggingface.co/augmxnt/Qwen2-7B-Instruct-deccp All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Qwen2-7B-Instruct-deccp-Q8_0.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q8_0.gguf) | Q8_0 | 8.09GB | Extremely high quality, generally unneeded but max available quant. | | [Qwen2-7B-Instruct-deccp-Q6_K.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q6_K.gguf) | Q6_K | 6.25GB | Very high quality, near perfect, *recommended*. | | [Qwen2-7B-Instruct-deccp-Q5_K_M.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q5_K_M.gguf) | Q5_K_M | 5.44GB | High quality, *recommended*. | | [Qwen2-7B-Instruct-deccp-Q5_K_S.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q5_K_S.gguf) | Q5_K_S | 5.31GB | High quality, *recommended*. | | [Qwen2-7B-Instruct-deccp-Q4_K_M.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q4_K_M.gguf) | Q4_K_M | 4.68GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Qwen2-7B-Instruct-deccp-Q4_K_S.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q4_K_S.gguf) | Q4_K_S | 4.45GB | Slightly lower quality with more space savings, *recommended*. | | [Qwen2-7B-Instruct-deccp-IQ4_XS.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ4_XS.gguf) | IQ4_XS | 4.21GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Qwen2-7B-Instruct-deccp-Q3_K_L.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q3_K_L.gguf) | Q3_K_L | 4.08GB | Lower quality but usable, good for low RAM availability. | | [Qwen2-7B-Instruct-deccp-Q3_K_M.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q3_K_M.gguf) | Q3_K_M | 3.80GB | Even lower quality. | | [Qwen2-7B-Instruct-deccp-IQ3_M.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ3_M.gguf) | IQ3_M | 3.57GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Qwen2-7B-Instruct-deccp-Q3_K_S.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q3_K_S.gguf) | Q3_K_S | 3.49GB | Low quality, not recommended. 
| | [Qwen2-7B-Instruct-deccp-IQ3_XS.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ3_XS.gguf) | IQ3_XS | 3.34GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Qwen2-7B-Instruct-deccp-IQ3_XXS.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ3_XXS.gguf) | IQ3_XXS | 3.11GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Qwen2-7B-Instruct-deccp-Q2_K.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-Q2_K.gguf) | Q2_K | 3.01GB | Very low quality but surprisingly usable. | | [Qwen2-7B-Instruct-deccp-IQ2_M.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ2_M.gguf) | IQ2_M | 2.78GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Qwen2-7B-Instruct-deccp-IQ2_S.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ2_S.gguf) | IQ2_S | 2.59GB | Very low quality, uses SOTA techniques to be usable. | | [Qwen2-7B-Instruct-deccp-IQ2_XS.gguf](https://huggingface.co/bartowski/Qwen2-7B-Instruct-deccp-GGUF/blob/main/Qwen2-7B-Instruct-deccp-IQ2_XS.gguf) | IQ2_XS | 2.46GB | Very low quality, uses SOTA techniques to be usable. | ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Qwen2-7B-Instruct-deccp-GGUF --include "Qwen2-7B-Instruct-deccp-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Qwen2-7B-Instruct-deccp-GGUF --include "Qwen2-7B-Instruct-deccp-Q8_0.gguf/*" --local-dir Qwen2-7B-Instruct-deccp-Q8_0 ``` You can either specify a new local-dir (Qwen2-7B-Instruct-deccp-Q8_0) or download them all in place (./) ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. 
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide.

The I-quants are *not* compatible with Vulkan, which also targets AMD GPUs, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
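Once a quant is downloaded, here is a minimal sketch of running it locally with `llama-cpp-python` using the ChatML prompt format documented above. The file name matches the Q4_K_M entry in the table; adjust the path, context size and `n_gpu_layers` for your hardware:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2-7B-Instruct-deccp-Q4_K_M.gguf",  # downloaded with huggingface-cli as shown above
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers if you have enough VRAM; set to 0 for CPU-only
)

# ChatML prompt format, as documented in the "Prompt format" section above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```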
QuantFactory/llm-compiler-7b-ftd-GGUF
QuantFactory
"2024-06-28T14:26:15Z"
10,952
0
null
[ "gguf", "text-generation", "base_model:facebook/llm-compiler-7b-ftd", "license:other", "region:us" ]
text-generation
"2024-06-28T12:26:23Z"
--- license: other base_model: facebook/llm-compiler-7b-ftd pipeline_tag: text-generation --- # QuantFactory/llm-compiler-7b-ftd-GGUF This is quantized version of [facebook/llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd) created using llama.cpp The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). **Notice :** LLM Compiler is licensed under the LLM Compiler License, Copyright © Meta Platforms, Inc. All Rights Reserved. # Introducing Meta Large Language Model Compiler (LLM Compiler), a state-of-the-art LLM for compiler optimization ## Takeaways * LLM Compiler is a state-of-the-art LLM that builds upon Code Llama with improved performance for code optimization and compiler reasoning. * LLM Compiler is free for both research and commercial use. * LLM Compiler is available in two flavors: * _LLM Compiler_, the foundational models, pretrained on over 500B tokens of LLVM-IR, x86_84, ARM, and CUDA assembly codes and trained to predict the effect of LLVM optimizations; * and _LLM Compiler FTD_, which is further fine-tuned to predict the best optimizations for code in LLVM assembly to reduce code size, and to disassemble assembly code to LLVM-IR. * LLM Compiler demonstrates far stronger understanding of compiler optimizations than existing publicly available LLMs, perfectly emulating the compiler 20% of the time. * LLM Compiler FTD sets state-of-the-art results on the tasks of optimization for code size and disassembly. It achieves a 5.24% code size improvement over -Oz vs GPT-4 Turbo 0.03%, and 0.96 round-trip BLEU score on disassembly vs GPT-4 Turbo 0.43. --- LINKS * [LLM Compiler research paper](https://ai.meta.com/research/publications/meta-large-language-model-compiler-foundation-models-of-compiler-optimization/) * Download the LLM Compiler and LLM Compiler FTD models: * [llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b) * [llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd) * [llm-compiler-13b](https://huggingface.co/facebook/llm-compiler-13b) * [llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd) --- We are excited to announce the release of LLM Compiler, a model targeted at code and compiler optimization tasks. LLM Compiler is built on top of our state-of-the-art large language model, Code Llama, adding capabilities to better understand compiler intermediate representations, assembly language and optimization. LLM Compiler is demonstrated on two difficult tasks: optimizing for code size and decompiling from assembly to the compiler’s intermediate representation. We release these foundation models to accelerate the application of LLMs for code optimization tasks and to enhance developer experience. We are releasing LLM Compiler under the [LLM Compiler License Agreement](LICENSE.pdf), which incorporates the [Acceptable Use Policy]([https://llama.meta.com/llama3/use-policy]) for Llama Materials. ## How LLM Compiler works LLM Compiler is a specialization of Code Llama. It is a cutting-edge tool designed to optimize code using deep learning. LLM Compiler has been pre-trained on a vast amount of LLVM assembly (IR), x86_64, ARM, and CUDA assembly codes. LLM Compiler can predict, given a piece of LLVM assembly and a sequence of optimization passes for `opt`, the LLVM optimizer, what the change in code size will be and what the output code will look like after applying these optimizations. 
It has ‘understood’ the behavior of the optimizing compiler to such a degree that in many cases it can perfectly replicate its output. These capabilities make it ideally suited to compiler optimization tasks. ![Compiler emulation](readme/emulate.png) In addition to this core functionality and to demonstrate its ability to solve complex compiler optimization problems, LLM Compiler has been fine-tuned for two specific downstream tasks: 1. Predicting the best optimization passes for `opt` to use in order to minimize code size, given a piece of LLVM assembly code. \ ![Autotuning](readme/autotune.png) 2. Generating LLVM IR from a piece of x86_64 or ARM assembly code. \ ![Disassemble](readme/disassemble.png) We are releasing LLM Compiler models in two sizes: 7B and 13B parameters. The models have been trained with a context window of 16,000 tokens. The two models address different serving and latency requirements. The 7B model, for example, can be served on a single GPU and is more suitable for tasks that require low latency, like fine grained optimisation. The 13B model returns the best results. When using the LLM Compiler models, users must abide by our license and acceptable use policy. ![Training](readme/training.png) ## LLM Compiler performance We tested the performance of LLM Compiler models for emulating compiler transformations, predicting optimal pass lists and decompiling intermediate representation on hold out test sets and compared them to Code Llama and GPT-4. We compare LLM Compiler Foundation to Code Llama Base and LLM Compiler FTD to Code Llama Instruct. We evaluate LLM Compiler's ability to emulate compiler optimizations by giving it samples of unoptimized intermediate representation and a randomly generated list of optimizations. We then ask the model to generate the corresponding IR after the optimizations have been applied. In the table below we report the model's accuracy in reproducing the IR we would get from running _opt_. With very little knowledge of IR, Code Llama is unable to achieve high values while the LLM Compiler can generate character-by-character matches of expected assembly in 20% of the cases. <table> <tr> <td>Model </td> <td>Size </td> <td>Accuracy at emulating compiler optimizations </td> </tr> <tr> <td>Code Llama </td> <td>7B </td> <td>1.2% </td> </tr> <tr> <td>Code Llama </td> <td>13B </td> <td>0.8% </td> </tr> <tr> <td>LLM Compiler </td> <td>7B </td> <td>16% </td> </tr> <tr> <td>LLM Compiler </td> <td>13B </td> <td><strong>20%</strong> </td> </tr> </table> In a similar approach we evaluate our model's ability to optimize IR for code size. In this instance, however, we let the model generate the pass list that is to be used on a given unoptimized IR. We then use this pass list to optimize the particular program using _opt_ and record the binary size. The baseline is the binary size of the program when optimized using -Oz. Only LLM Compiler FTD models provide an improvement over -Oz, with the 13B parameter model marginally outperforming the smaller model, generating smaller object files than -Oz in 61% of cases. Lastly, we evaluate disassembly performance by giving the model x86 assembly code and ask it to generate the corresponding IR. We then round-trip the model-generated disassembled IR back down to assembly. This enables us to evaluate accuracy of the disassembly by comparing the BLEU score of the original assembly against the round-trip result. 
LLM Compiler FTD 13B has the highest accuracy of round-tripped assembly (_round trip BLEU_) and most frequently produces perfect disassembly. Code Llama Instruct and GPT-4 Turbo struggle with generating syntactically correct LLVM-IR. <table> <tr> <td>Model </td> <td>Size </td> <td>Code Size Improvement </td> <td>Round trip BLEU </td> </tr> <tr> <td>GPT-4 Turbo </td> <td> </td> <td>-0.01% </td> <td>0.43 </td> </tr> <tr> <td>Code Llama Inst </td> <td>7B </td> <td>-0.49% </td> <td>0.48 </td> </tr> <tr> <td>Code Llama Inst </td> <td>13B </td> <td>-0.42% </td> <td>0.62 </td> </tr> <tr> <td>LLM Compiler FTD </td> <td>7B </td> <td>4.77% </td> <td>0.95 </td> </tr> <tr> <td>LLM Compiler FTD </td> <td>13B </td> <td><strong>4.88%</strong> </td> <td><strong>0.96</strong> </td> </tr> </table> ## Releasing LLM Compiler LLMs are being used to make programming easier. They are beginning to be used to make programs more efficient. At Meta, our conviction is that AI models, especially those designed for coding, thrive best with an open strategy, fostering both innovation and security. Models that are accessible to the public can expedite the creation of novel compiler optimization technologies. In turn, this will allow programs to be more efficient and smaller, enhancing the quality of life for all. By making models such as LLM Compiler available, the whole community can explore their potential, pinpoint problems, and rectify any vulnerabilities. The model weights are available on Hugging Face. ## Responsible use Our research paper provides an in-depth look into the development process of the LLM Compiler, the methods we used for our benchmarking tests, and further insights into the model's limitations. It also discusses the issues faced, the steps we took to mitigate them. Developers are advised to assess their models using evaluation benchmarks specific to compilers. Given that compilers are not bug-free, any suggested compiler optimizations must be rigorously tested. When a model decompiles assembly code, its accuracy should be confirmed. ## The future of generative AI for optimisation LLM Compiler is designed to support compiler researchers and engineers. But there are still many more use cases to support than what our models can serve. We hope that LLM Compiler will inspire others to leverage LLMs to create new innovative tools for research and commercial products. ### Try LLM Compiler today * Download the LLM Compiler and LLM Compiler FTD models: * [llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b) * [llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd) * [llm-compiler-13b](https://huggingface.co/facebook/llm-compiler-13b) * [llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd) * Read the research paper * [LLM Compiler research paper](https://ai.meta.com/research/publications/meta-large-language-model-compiler-foundation-models-of-compiler-optimization/) # **Model Card** LLM Compiler is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 13 billion parameters. This is the repository for the 13 billion parameter foundation model version in the Hugging Face Transformers format. This model is designed for code optimization. Links to other models can be found in the index at the bottom. 
| Number of parameters | Base Model | Fine-tuned for code size and dissassembly | | -------------------- | ---------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | | 7B | [facebook/llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b) | [facebook/llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd) | | 13B | [facebook/llm-compiler-13b](https://huggingface.co/facebook/llm-compiler-13b) | [facebook/llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd) | ## Model Use To use this model, please make sure to install transformers: ```bash pip install transformers accelerate ``` Example code using each of the model's compiler capabilities may be found in [llm_compiler_demo.py](llm_compiler_demo.py). The code below demonstrates default capabilities. You may need to set the HuggingFace access token - see (https://huggingface.co/docs/hub/security-tokens). ```python from transformers import AutoTokenizer import transformers import torch model = "facebook/llm-compiler-13b" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) sequences = pipeline( '%3 = alloca i32, align 4', do_sample=True, top_k=10, temperature=0.1, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, max_length=200, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ## Model Details *Note: Use of this model is governed by the Meta license. Meta developed and publicly released the LLM Compiler family of large language models (LLMs). **Model Developers** Meta **Variations** LLM Compiler comes in two model sizes of 7B, 13B parameters in two flavors, the foundation and instruction fine-tuned for code size and disassembly. **This repository contains the 13 billion parameter foundation model.** **Input** Models input text only. **Example prompt** See `llm_compiler_demo.py` in the repo for examples of the different use cases. **Output** Models generate text only. **Model Architecture** LLM Compiler is an auto-regressive language model that uses an optimized transformer architecture. **Model Dates** LLM Compiler has been trained between January 2024 and June 2024. **Status** This is a static model trained on an offline dataset. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "[Meta Large Language Model Compiler: Foundation Models of Compiler Optimization](https://ai.meta.com/research/publications/meta-large-language-model-compiler-foundation-models-of-compiler-optimization/)". ## Intended Use **Intended Use Cases** LLM Compiler is intended for commercial and research use in English, relevant programming languages, LLVM IR, x86_64 assembly and ARM assembly. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy) and Licensing Agreement for LLM Compiler and its variants. ## Hardware and Software **Training Factors** We used custom training libraries. 
The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.

**Carbon Footprint** In aggregate, training all LLM Compiler models required 14K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W), not including the training of Code Llama. 100% of the estimated tCO2eq emissions were offset by Meta’s sustainability program.

## Training Data

All experiments reported here and the released models have been trained and fine-tuned using the same data as Code Llama with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/llm-compiler-foundation-models-for-compiler-optimization/) for details).

## Evaluation Results

See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

## Ethical Considerations and Limitations

LLM Compiler and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, LLM Compiler’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of LLM Compiler, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
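Since this repository ships GGUF quants made with llama.cpp, here is a minimal sketch of running the same style of raw LLVM-IR completion prompt as the `transformers` example above through `llama-cpp-python`. The exact `.gguf` file name is hypothetical; use whichever quant you downloaded from this repo:

```python
from llama_cpp import Llama

# Hypothetical file name; substitute the quant file you actually downloaded from this repo.
llm = Llama(model_path="llm-compiler-7b-ftd.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=0)

# Same style of raw LLVM-IR prompt as the transformers example above.
out = llm("%3 = alloca i32, align 4", max_tokens=128, temperature=0.1)
print(out["choices"][0]["text"])
```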
EleutherAI/pythia-1.4b-deduped
EleutherAI
"2023-06-08T13:03:28Z"
10,938
19
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "causal-lm", "pythia", "en", "dataset:EleutherAI/the_pile_deduplicated", "arxiv:2304.01373", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-02-09T21:42:04Z"
--- language: - en tags: - pytorch - causal-lm - pythia license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches. The Pythia model suite was designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. <details> <summary style="font-weight:600">Details on previous early release and naming convention.</summary> Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br> **This is the current release.** Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. </details> <br> # Pythia-1.4B-deduped ## Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. [See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation details. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:[email protected]). 
<figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ## Uses and Limitations ### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-1.4B-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-1.4B-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. ### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not a in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-1.4B-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-1.4B-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. ### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token used by the model need not produce the most “accurate” text. Never rely on Pythia-1.4B-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. 
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-1.4B-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-1.4B-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ## Training ### Training data Pythia-1.4B-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). ### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ## Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). 
You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Easy Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/> </details> ## Changelog This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance. - All model sizes are now trained with uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens. - We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64, 128,256,512} in addition to every 1000 training steps. - Flash Attention was used in the new retrained suite. - We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models now were trained with LR decaying to a minimum of 0.1× their maximum LR. ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
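To make the checkpoint branches above concrete, here is a small sketch (assuming the `step{N}` revision names load as shown in the Quickstart) that compares next-token predictions of an early checkpoint and the final Pythia-1.4B-deduped checkpoint; the prompt is illustrative:

```python
import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-1.4b-deduped"
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer("The capital of France is", return_tensors="pt")

for revision in ["step1000", "step143000"]:  # an early checkpoint vs. the final one (same as `main`)
    model = GPTNeoXForCausalLM.from_pretrained(model_name, revision=revision)
    with torch.no_grad():
        logits = model(**inputs).logits
    next_token = tokenizer.decode(logits[0, -1].argmax().item())
    print(f"{revision}: next-token prediction = {next_token!r}")
```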
openvla/openvla-7b
openvla
"2024-06-14T01:30:27Z"
10,933
43
transformers
[ "transformers", "safetensors", "openvla", "feature-extraction", "robotics", "vla", "image-text-to-text", "multimodal", "pretraining", "custom_code", "en", "arxiv:2406.09246", "license:mit", "region:us" ]
image-text-to-text
"2024-06-10T16:35:59Z"
--- library_name: transformers tags: - robotics - vla - image-text-to-text - multimodal - pretraining license: mit language: - en pipeline_tag: image-text-to-text --- # OpenVLA 7B OpenVLA 7B (`openvla-7b`) is an open vision-language-action model trained on 970K robot manipulation episodes from the [Open X-Embodiment](https://robotics-transformer-x.github.io/) dataset. The model takes language instructions and camera images as input and generates robot actions. It supports controlling multiple robots out-of-the-box, and can be quickly adapted for new robot domains via (parameter-efficient) fine-tuning. All OpenVLA checkpoints, as well as our [training codebase](https://github.com/openvla/openvla) are released under an MIT License. For full details, please read [our paper](https://arxiv.org/abs/2406.09246) and see [our project page](https://openvla.github.io/). ## Model Summary - **Developed by:** The OpenVLA team consisting of researchers from Stanford, UC Berkeley, Google Deepmind, and the Toyota Research Institute. - **Model type:** Vision-language-action (language, image => robot actions) - **Language(s) (NLP):** en - **License:** MIT - **Finetuned from:** [`prism-dinosiglip-224px`](https://github.com/TRI-ML/prismatic-vlms), a VLM trained from: + **Vision Backbone**: DINOv2 ViT-L/14 and SigLIP ViT-So400M/14 + **Language Model**: Llama-2 - **Pretraining Dataset:** [Open X-Embodiment](https://robotics-transformer-x.github.io/) -- specific component datasets can be found [here](https://github.com/openvla/openvla). - **Repository:** [https://github.com/openvla/openvla](https://github.com/openvla/openvla) - **Paper:** [OpenVLA: An Open-Source Vision-Language-Action Model](https://arxiv.org/abs/2406.09246) - **Project Page & Videos:** [https://openvla.github.io/](https://openvla.github.io/) ## Uses OpenVLA models take a language instruction and a camera image of a robot workspace as input, and predict (normalized) robot actions consisting of 7-DoF end-effector deltas of the form (x, y, z, roll, pitch, yaw, gripper). To execute on an actual robot platform, actions need to be *un-normalized* subject to statistics computed on a per-robot, per-dataset basis. See [our repository](https://github.com/openvla/openvla) for more information. OpenVLA models can be used zero-shot to control robots for specific combinations of embodiments and domains seen in the Open-X pretraining mixture (e.g., for [BridgeV2 environments with a Widow-X robot](https://rail-berkeley.github.io/bridgedata/)). They can also be efficiently *fine-tuned* for new tasks and robot setups given minimal demonstration data; [see here](https://github.com/openvla/openvla/blob/main/scripts/finetune.py). **Out-of-Scope:** OpenVLA models do not zero-shot generalize to new (unseen) robot embodiments, or setups that are not represented in the pretraining mix; in these cases, we suggest collecting a dataset of demonstrations on the desired setup, and fine-tuning OpenVLA models instead. ## Getting Started OpenVLA 7B can be used to control multiple robots for domains represented in the pretraining mixture out-of-the-box. For example, here is an example for loading `openvla-7b` for zero-shot instruction following in the [BridgeV2 environments] with a Widow-X robot: ```python # Install minimal dependencies (`torch`, `transformers`, `timm`, `tokenizers`, ...) 
# > pip install -r https://raw.githubusercontent.com/openvla/openvla/main/requirements-min.txt from transformers import AutoModelForVision2Seq, AutoProcessor from PIL import Image import torch # Load Processor & VLA processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True) vla = AutoModelForVision2Seq.from_pretrained( "openvla/openvla-7b", attn_implementation="flash_attention_2", # [Optional] Requires `flash_attn` torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, trust_remote_code=True ).to("cuda:0") # Grab image input & format prompt image: Image.Image = get_from_camera(...) prompt = "In: What action should the robot take to {<INSTRUCTION>}?\nOut:" # Predict Action (7-DoF; un-normalize for BridgeV2) inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16) action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False) # Execute... robot.act(action, ...) ``` For more examples, including scripts for fine-tuning OpenVLA models on your own robot demonstration datasets, see [our training repository](https://github.com/openvla/openvla). ## Citation **BibTeX:** ```bibtex @article{kim24openvla, title={OpenVLA: An Open-Source Vision-Language-Action Model}, author={{Moo Jin} Kim and Karl Pertsch and Siddharth Karamcheti and Ted Xiao and Ashwin Balakrishna and Suraj Nair and Rafael Rafailov and Ethan Foster and Grace Lam and Pannag Sanketi and Quan Vuong and Thomas Kollar and Benjamin Burchfiel and Russ Tedrake and Dorsa Sadigh and Sergey Levine and Percy Liang and Chelsea Finn}, journal = {arXiv preprint arXiv:2406.09246}, year={2024} } ```
optimum/sbert-all-MiniLM-L6-with-pooler
optimum
"2022-07-26T13:37:30Z"
10,928
6
sentence-transformers
[ "sentence-transformers", "onnx", "bert", "feature-extraction", "sentence-similarity", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2022-07-26T11:32:55Z"
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---

# ONNX convert of all-MiniLM-L6-v2

## Conversion of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)

This is a [sentence-transformers](https://www.SBERT.net) ONNX model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. This custom model outputs `last_hidden_state` and `pooler_output`, whereas the sentence-transformers model exported with the default ONNX config only contains `last_hidden_state` as output.

## Usage (HuggingFace Optimum)

Using this model becomes easy when you have [optimum](https://github.com/huggingface/optimum) installed:

```
python -m pip install optimum
```

Then you can use the model like this:

```python
from transformers import AutoTokenizer
from optimum.onnxruntime.modeling_ort import ORTModelForCustomTasks

model = ORTModelForCustomTasks.from_pretrained("optimum/sbert-all-MiniLM-L6-with-pooler")
tokenizer = AutoTokenizer.from_pretrained("optimum/sbert-all-MiniLM-L6-with-pooler")
inputs = tokenizer("I love burritos!", return_tensors="pt")
pred = model(**inputs)
```

You will also be able to leverage the pipeline API in transformers:

```python
from transformers import pipeline

onnx_extractor = pipeline("feature-extraction", model=model, tokenizer=tokenizer)
text = "I love burritos!"
pred = onnx_extractor(text)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)

------

## Background

The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as advice from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

By default, input text longer than 256 word pieces is truncated.

## Training procedure

### Pre-training

We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.

### Fine-tuning

We fine-tune the model using a contrastive objective.
Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the cross entropy loss by comparing with true pairs. #### Hyper parameters We trained ou model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. | Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | 
[SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
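Tying back to the usage example near the top of this card: to turn the raw ONNX outputs into the 384-dimensional sentence embeddings, the usual recipe for this model family is attention-mask-weighted mean pooling followed by L2 normalization. A hedged sketch, assuming the outputs can be read via the `last_hidden_state` key and converted to torch tensors:

```python
import torch
from transformers import AutoTokenizer
from optimum.onnxruntime.modeling_ort import ORTModelForCustomTasks

model_id = "optimum/sbert-all-MiniLM-L6-with-pooler"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForCustomTasks.from_pretrained(model_id)

sentences = ["I love burritos!", "Burritos are my favorite food."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs)

# Mean-pool the token embeddings over real (non-padding) tokens.
last_hidden = torch.as_tensor(outputs["last_hidden_state"])
mask = inputs["attention_mask"].unsqueeze(-1).to(last_hidden.dtype)
embeddings = (last_hidden * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)

print(embeddings.shape)                      # expected: torch.Size([2, 384])
print(float(embeddings[0] @ embeddings[1]))  # cosine similarity of the two sentences
```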
caidas/swin2SR-classical-sr-x2-64
caidas
"2024-03-27T10:32:24Z"
10,925
24
transformers
[ "transformers", "pytorch", "safetensors", "swin2sr", "image-to-image", "vision", "arxiv:2209.11345", "license:apache-2.0", "region:us" ]
image-to-image
"2022-12-16T14:05:18Z"
--- license: apache-2.0 tags: - vision - image-to-image inference: false --- # Swin2SR model (image super-resolution) Swin2SR model that upscales images x2. It was introduced in the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Conde et al. and first released in [this repository](https://github.com/mv-lab/swin2sr). # Intended use cases This model is intended for image super resolution. # Usage Refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/swin2sr#transformers.Swin2SRForImageSuperResolution.forward.example).
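To complement the documentation pointer above, here is a minimal usage sketch of x2 upscaling with `transformers`. The input image path is a placeholder, and the post-processing follows the usual pattern of clamping the reconstruction to [0, 1] and converting to uint8:

```python
import numpy as np
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution

model_id = "caidas/swin2SR-classical-sr-x2-64"
processor = AutoImageProcessor.from_pretrained(model_id)
model = Swin2SRForImageSuperResolution.from_pretrained(model_id)

image = Image.open("low_res.png").convert("RGB")  # placeholder path to a low-resolution image
inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.reconstruction is roughly (1, 3, 2*H, 2*W) with values in [0, 1].
upscaled = outputs.reconstruction.squeeze().clamp_(0, 1).numpy()
upscaled = np.moveaxis(upscaled, 0, -1)
Image.fromarray((upscaled * 255.0).round().astype(np.uint8)).save("upscaled.png")
```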
mradermacher/llama-3-Nephilim-v2-8B-GGUF
mradermacher
"2024-06-30T22:24:04Z"
10,921
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:grimjim/llama-3-Nephilim-v2-8B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-30T18:28:03Z"
--- base_model: grimjim/llama-3-Nephilim-v2-8B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/grimjim/llama-3-Nephilim-v2-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have 
and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Qwen/Qwen-1_8B-Chat
Qwen
"2023-12-13T15:43:38Z"
10,919
102
transformers
[ "transformers", "safetensors", "qwen", "text-generation", "custom_code", "zh", "en", "arxiv:2309.16609", "arxiv:2305.08322", "arxiv:2009.03300", "autotrain_compatible", "region:us" ]
text-generation
"2023-11-30T02:56:11Z"
--- language: - zh - en tags: - qwen pipeline_tag: text-generation inference: false --- # Qwen-1.8B-Chat <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_qwen.jpg" width="400"/> <p> <br> <p align="center"> 🤗 <a href="https://huggingface.co/Qwen">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a href="https://modelscope.cn/organization/qwen">ModelScope</a>&nbsp&nbsp | &nbsp&nbsp 📑 <a href="https://arxiv.org/abs/2309.16609">Paper</a> &nbsp&nbsp | &nbsp&nbsp🖥️ <a href="https://www.modelscope.cn/studios/qwen/Qwen-1_8B-Chat-Demo/summary">Demo</a> <br> <a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat (微信)</a>&nbsp&nbsp | &nbsp&nbsp<a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>&nbsp&nbsp | &nbsp&nbsp<a href="https://dashscope.aliyun.com">API</a> </p> <br> ## 介绍(Introduction) **通义千问-1.8B(Qwen-1.8B)**是阿里云研发的通义千问大模型系列的18亿参数规模的模型。Qwen-1.8B是基于Transformer的大语言模型, 在超大规模的预训练数据上进行训练得到。预训练数据类型多样,覆盖广泛,包括大量网络文本、专业书籍、代码等。同时,在Qwen-1.8B的基础上,我们使用对齐机制打造了基于大语言模型的AI助手Qwen-1.8B-Chat。本仓库为Qwen-1.8B-Chat的仓库。 通义千问-1.8B(Qwen-1.8B)主要有以下特点: 1. **低成本部署**:提供int8和int4量化版本,推理最低仅需不到2GB显存,生成2048 tokens仅需3GB显存占用。微调最低仅需6GB。 2. **大规模高质量训练语料**:使用超过2.2万亿tokens的数据进行预训练,包含高质量中、英、多语言、代码、数学等数据,涵盖通用及专业领域的训练语料。通过大量对比实验对预训练语料分布进行了优化。 3. **优秀的性能**:Qwen-1.8B支持8192上下文长度,在多个中英文下游评测任务上(涵盖常识推理、代码、数学、翻译等),效果显著超越现有的相近规模开源模型,具体评测结果请详见下文。 4. **覆盖更全面的词表**:相比目前以中英词表为主的开源模型,Qwen-1.8B使用了约15万大小的词表。该词表对多语言更加友好,方便用户在不扩展词表的情况下对部分语种进行能力增强和扩展。 5. **系统指令跟随**:Qwen-1.8B-Chat可以通过调整系统指令,实现**角色扮演**,**语言风格迁移**,**任务设定**,和**行为设定**等能力。 如果您想了解更多关于通义千问1.8B开源模型的细节,我们建议您参阅[GitHub代码库](https://github.com/QwenLM/Qwen)。 **Qwen-1.8B** is the 1.8B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Aibaba Cloud. Qwen-1.8B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, codes, etc. Additionally, based on the pretrained Qwen-1.8B, we release Qwen-1.8B-Chat, a large-model-based AI assistant, which is trained with alignment techniques. This repository is the one for Qwen-1.8B-Chat. The features of Qwen-1.8B include: 1. **Low-cost deployment**: We provide int4 and int8 quantized versions, the minimum memory requirment for inference is less than 2GB, generating 2048 tokens only 3GB of memory usage. The minimum memory requirment of finetuning is only 6GB. 2. **Large-scale high-quality training corpora**: It is pretrained on over 2.2 trillion tokens, including Chinese, English, multilingual texts, code, and mathematics, covering general and professional fields. The distribution of the pre-training corpus has been optimized through a large number of ablation experiments. 3. **Good performance**: It supports 8192 context length and significantly surpasses existing open-source models of similar scale on multiple Chinese and English downstream evaluation tasks (including commonsense, reasoning, code, mathematics, etc.), and even surpasses some larger-scale models in several benchmarks. See below for specific evaluation results. 4. **More comprehensive vocabulary coverage**: Compared with other open-source models based on Chinese and English vocabularies, Qwen-1.8B uses a vocabulary of over 150K tokens. This vocabulary is more friendly to multiple languages, enabling users to directly further enhance the capability for certain languages without expanding the vocabulary. 5. **System prompt**: Qwen-1.8B-Chat can realize roly playing, language style transfer, task setting, and behavior setting by using system prompt. 
For more details about the open-source model of Qwen-1.8B-Chat, please refer to the [GitHub](https://github.com/QwenLM/Qwen) code repository. <br> ## 要求(Requirements) * python 3.8及以上版本 * pytorch 1.12及以上版本,推荐2.0及以上版本 * 建议使用CUDA 11.4及以上(GPU用户、flash-attention用户等需考虑此选项) * python 3.8 and above * pytorch 1.12 and above, 2.0 and above are recommended * CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.) ## 依赖项(Dependency) 运行Qwen-1.8B-Chat,请确保满足上述要求,再执行以下pip命令安装依赖库 To run Qwen-1.8B-Chat, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries. ```bash pip install transformers==4.32.0 accelerate tiktoken einops scipy transformers_stream_generator==0.0.4 peft deepspeed ``` 另外,推荐安装`flash-attention`库(**当前已支持flash attention 2**),以实现更高的效率和更低的显存占用。 In addition, it is recommended to install the `flash-attention` library (**we support flash attention 2 now.**) for higher efficiency and lower memory usage. ```bash git clone https://github.com/Dao-AILab/flash-attention cd flash-attention && pip install . # 下方安装可选,安装可能比较缓慢。 # pip install csrc/layer_norm # pip install csrc/rotary ``` <br> ## 快速使用(Quickstart) 下面我们展示了一个使用Qwen-1.8B-Chat模型,进行多轮对话交互的样例: We show an example of multi-turn interaction with Qwen-1.8B-Chat in the following code: ```python from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.generation import GenerationConfig # Note: The default behavior now has injection attack prevention off. tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-1_8B-Chat", trust_remote_code=True) # use bf16 # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-1_8B-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval() # use fp16 # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-1_8B-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval() # use cpu only # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-1_8B-Chat", device_map="cpu", trust_remote_code=True).eval() # use auto mode, automatically select precision based on the device. model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-1_8B-Chat", device_map="auto", trust_remote_code=True).eval() # Specify hyperparameters for generation. But if you use transformers>=4.32.0, there is no need to do this. # model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-1_8B-Chat", trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参 # 第一轮对话 1st dialogue turn response, history = model.chat(tokenizer, "你好", history=None) print(response) # 你好!很高兴为你提供帮助。 # 第二轮对话 2nd dialogue turn response, history = model.chat(tokenizer, "给我讲一个年轻人奋斗创业最终取得成功的故事。", history=history) print(response) # 这是一个关于一个年轻人奋斗创业最终取得成功的故事。 # 故事的主人公叫李明,他来自一个普通的家庭,父母都是普通的工人。从小,李明就立下了一个目标:要成为一名成功的企业家。 # 为了实现这个目标,李明勤奋学习,考上了大学。在大学期间,他积极参加各种创业比赛,获得了不少奖项。他还利用课余时间去实习,积累了宝贵的经验。 # 毕业后,李明决定开始自己的创业之路。他开始寻找投资机会,但多次都被拒绝了。然而,他并没有放弃。他继续努力,不断改进自己的创业计划,并寻找新的投资机会。 # 最终,李明成功地获得了一笔投资,开始了自己的创业之路。他成立了一家科技公司,专注于开发新型软件。在他的领导下,公司迅速发展起来,成为了一家成功的科技企业。 # 李明的成功并不是偶然的。他勤奋、坚韧、勇于冒险,不断学习和改进自己。他的成功也证明了,只要努力奋斗,任何人都有可能取得成功。 # 第三轮对话 3rd dialogue turn response, history = model.chat(tokenizer, "给这个故事起一个标题", history=history) print(response) # 《奋斗创业:一个年轻人的成功之路》 # Qwen-1.8B-Chat现在可以通过调整系统指令(System Prompt),实现角色扮演,语言风格迁移,任务设定,行为设定等能力。 # Qwen-1.8B-Chat can realize roly playing, language style transfer, task setting, and behavior setting by system prompt. 
response, _ = model.chat(tokenizer, "你好呀", history=None, system="请用二次元可爱语气和我说话") print(response) # 你好啊!我是一只可爱的二次元猫咪哦,不知道你有什么问题需要我帮忙解答吗? response, _ = model.chat(tokenizer, "My colleague works diligently", history=None, system="You will write beautiful compliments according to needs") print(response) # Your colleague is an outstanding worker! Their dedication and hard work are truly inspiring. They always go above and beyond to ensure that # their tasks are completed on time and to the highest standard. I am lucky to have them as a colleague, and I know I can count on them to handle any challenge that comes their way. ``` 关于更多的使用说明,请参考我们的[GitHub repo](https://github.com/QwenLM/Qwen)获取更多信息。 For more information, please refer to our [GitHub repo](https://github.com/QwenLM/Qwen) for more information. ## Tokenizer > 注:作为术语的“tokenization”在中文中尚无共识的概念对应,本文档采用英文表达以利说明。 基于tiktoken的分词器有别于其他分词器,比如sentencepiece分词器。尤其在微调阶段,需要特别注意特殊token的使用。关于tokenizer的更多信息,以及微调时涉及的相关使用,请参阅[文档](https://github.com/QwenLM/Qwen/blob/main/tokenization_note_zh.md)。 Our tokenizer based on tiktoken is different from other tokenizers, e.g., sentencepiece tokenizer. You need to pay attention to special tokens, especially in finetuning. For more detailed information on the tokenizer and related use in fine-tuning, please refer to the [documentation](https://github.com/QwenLM/Qwen/blob/main/tokenization_note.md). ## 量化 (Quantization) ### 用法 (Usage) **请注意:我们更新量化方案为基于[AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)的量化,提供Qwen-1.8B-Chat的Int4量化模型[点击这里](https://huggingface.co/Qwen/Qwen-1_8B-Chat-Int4)。相比此前方案,该方案在模型评测效果几乎无损,且存储需求更低,推理速度更优。** **Note: we provide a new solution based on [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), and release an Int4 quantized model for Qwen-1.8B-Chat [Click here](https://huggingface.co/Qwen/Qwen-1_8B-Chat-Int4), which achieves nearly lossless model effects but improved performance on both memory costs and inference speed, in comparison with the previous solution.** 以下我们提供示例说明如何使用Int4量化模型。在开始使用前,请先保证满足要求(如torch 2.0及以上,transformers版本为4.32.0及以上,等等),并安装所需安装包: Here we demonstrate how to use our provided quantized models for inference. Before you start, make sure you meet the requirements of auto-gptq (e.g., torch 2.0 and above, transformers 4.32.0 and above, etc.) and install the required packages: ```bash pip install auto-gptq optimum ``` 如安装`auto-gptq`遇到问题,我们建议您到官方[repo](https://github.com/PanQiWei/AutoGPTQ)搜索合适的预编译wheel。 随后即可使用和上述一致的用法调用量化模型: If you meet problems installing `auto-gptq`, we advise you to check out the official [repo](https://github.com/PanQiWei/AutoGPTQ) to find a pre-build wheel. Then you can load the quantized model easily and run inference as same as usual: ```python model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen-1_8B-Chat-Int4", device_map="auto", trust_remote_code=True ).eval() response, history = model.chat(tokenizer, "你好", history=None) ``` ### 效果评测 我们使用原始模型的FP32和BF16精度,以及量化过的Int8和Int4模型在基准评测上做了测试,结果如下所示: We illustrate the model performance of both FP32, BF16, Int8 and Int4 models on the benchmark. 
Results are shown below: | Quantization | MMLU | CEval (val) | GSM8K | Humaneval | |--------------|:----:|:-----------:|:-----:|:---------:| | FP32 | 43.4 | 57.0 | 33.0 | 26.8 | | BF16 | 43.3 | 55.6 | 33.7 | 26.2 | | Int8 | 43.1 | 55.8 | 33.0 | 27.4 | | Int4 | 42.9 | 52.8 | 31.2 | 25.0 | ### 推理速度 (Inference Speed) 我们测算了FP32、BF16精度和Int8、Int4量化模型生成2048和8192个token的平均推理速度。如图所示: We measured the average inference speed of generating 2048 and 8192 tokens under FP32, BF16 precision and Int8, Int4 quantization level, respectively. | Quantization | FlashAttn | Speed (2048 tokens) | Speed (8192 tokens) | |--------------| :-------: |:-------------------:|:-------------------:| | FP32 | v2 | 52.96 | 47.35 | | BF16 | v2 | 54.09 | 54.04 | | Int8 | v2 | 55.56 | 55.62 | | Int4 | v2 | 71.07 | 76.45 | | FP32 | v1 | 52.00 | 45.80 | | BF16 | v1 | 51.70 | 55.04 | | Int8 | v1 | 53.16 | 53.33 | | Int4 | v1 | 69.82 | 67.44 | | FP32 | Disabled | 52.28 | 44.95 | | BF16 | Disabled | 48.17 | 45.01 | | Int8 | Disabled | 52.16 | 52.99 | | Int4 | Disabled | 68.37 | 65.94 | 具体而言,我们记录在长度为1的上下文的条件下生成8192个token的性能。评测运行于单张A100-SXM4-80G GPU,使用PyTorch 2.0.1和CUDA 11.4。推理速度是生成8192个token的速度均值。 In detail, the setting of profiling is generating 8192 new tokens with 1 context token. The profiling runs on a single A100-SXM4-80G GPU with PyTorch 2.0.1 and CUDA 11.4. The inference speed is averaged over the generated 8192 tokens. ### 显存使用 (GPU Memory Usage) 我们测算了FP32、BF16精度和Int8、Int4量化模型生成2048个及8192个token(单个token作为输入)的峰值显存占用情况。结果如下所示: We also profile the peak GPU memory usage for generating 2048 tokens and 8192 tokens (with single token as context) under FP32, BF16 or Int8, Int4 quantization level, respectively. The results are shown below. | Quantization Level | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens | |--------------------|:-----------------------------------:|:-------------------------------------:| | FP32 | 8.45GB | 13.06GB | | BF16 | 4.23GB | 6.48GB | | Int8 | 3.48GB | 5.34GB | | Int4 | 2.91GB | 4.80GB | 上述性能测算使用[此脚本](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile.py)完成。 The above speed and memory profiling are conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile.py). <br> ## 模型细节(Model) 与Qwen-1.8B预训练模型相同,Qwen-1.8B-Chat模型规模基本情况如下所示 The details of the model architecture of Qwen-1.8B-Chat are listed as follows | Hyperparameter | Value | |:----------------|:------:| | n_layers | 24 | | n_heads | 16 | | d_model | 2048 | | vocab size | 151851 | | sequence length | 8192 | 在位置编码、FFN激活函数和normalization的实现方式上,我们也采用了目前最流行的做法, 即RoPE相对位置编码、SwiGLU激活函数、RMSNorm(可选安装flash-attention加速)。 在分词器方面,相比目前主流开源模型以中英词表为主,Qwen-1.8B-Chat使用了约15万token大小的词表。 该词表在GPT-4使用的BPE词表`cl100k_base`基础上,对中文、多语言进行了优化,在对中、英、代码数据的高效编解码的基础上,对部分多语言更加友好,方便用户在不扩展词表的情况下对部分语种进行能力增强。 词表对数字按单个数字位切分。调用较为高效的[tiktoken分词库](https://github.com/openai/tiktoken)进行分词。 For position encoding, FFN activation function, and normalization calculation methods, we adopt the prevalent practices, i.e., RoPE relative position encoding, SwiGLU for activation function, and RMSNorm for normalization (optional installation of flash-attention for acceleration). For tokenization, compared to the current mainstream open-source models based on Chinese and English vocabularies, Qwen-1.8B-Chat uses a vocabulary of over 150K tokens. 
It first considers efficient encoding of Chinese, English, and code data, and is also more friendly to multilingual languages, enabling users to directly enhance the capability of some languages without expanding the vocabulary. It segments numbers by single digit, and calls the [tiktoken](https://github.com/openai/tiktoken) tokenizer library for efficient tokenization. ## 评测效果(Evaluation) 对于Qwen-1.8B-Chat模型,我们同样评测了常规的中文理解(C-Eval)、英文理解(MMLU)、代码(HumanEval)和数学(GSM8K)等权威任务,同时包含了长序列任务的评测结果。由于Qwen-1.8B-Chat模型经过对齐后,激发了较强的外部系统调用能力,我们还进行了工具使用能力方面的评测。 提示:由于硬件和框架造成的舍入误差,复现结果如有波动属于正常现象。 For Qwen-1.8B-Chat, we also evaluate the model on C-Eval, MMLU, HumanEval, GSM8K, etc., as well as the benchmark evaluation for long-context understanding, and tool usage. Note: Due to rounding errors caused by hardware and framework, differences in reproduced results are possible. ### 中文评测(Chinese Evaluation) #### C-Eval 在[C-Eval](https://arxiv.org/abs/2305.08322)验证集上,我们评价了Qwen-1.8B-Chat模型的准确率 We demonstrate the accuracy of Qwen-1.8B-Chat on C-Eval validation set | Model | Acc. | |:--------------------------------:|:---------:| | RedPajama-INCITE-Chat-3B | 18.3 | | OpenBuddy-3B | 23.5 | | Firefly-Bloom-1B4 | 23.6 | | OpenLLaMA-Chinese-3B | 24.4 | | LLaMA2-7B-Chat | 31.9 | | ChatGLM2-6B-Chat | 52.6 | | InternLM-7B-Chat | 53.6 | | **Qwen-1.8B-Chat (0-shot)** | 55.6 | | **Qwen-7B-Chat (0-shot)** | 59.7 | | **Qwen-7B-Chat (5-shot)** | 59.3 | C-Eval测试集上,Qwen-1.8B-Chat模型的zero-shot准确率结果如下: The zero-shot accuracy of Qwen-1.8B-Chat on C-Eval testing set is provided below: | Model | Avg. | STEM | Social Sciences | Humanities | Others | | :---------------------: | :------: | :--: | :-------------: | :--------: | :----: | | Chinese-Alpaca-Plus-13B | 41.5 | 36.6 | 49.7 | 43.1 | 41.2 | | Chinese-Alpaca-2-7B | 40.3 | - | - | - | - | | ChatGLM2-6B-Chat | 50.1 | 46.4 | 60.4 | 50.6 | 46.9 | | Baichuan-13B-Chat | 51.5 | 43.7 | 64.6 | 56.2 | 49.2 | | **Qwen-1.8B-Chat** | 53.8 | 48.4 | 68.0 | 56.5 | 48.3 | | **Qwen-7B-Chat** | 58.6 | 53.3 | 72.1 | 62.8 | 52.0 | ### 英文评测(English Evaluation) #### MMLU [MMLU](https://arxiv.org/abs/2009.03300)评测集上,Qwen-1.8B-Chat模型的准确率如下,效果同样在同类对齐模型中同样表现较优。 The accuracy of Qwen-1.8B-Chat on MMLU is provided below. The performance of Qwen-1.8B-Chat still on the top between other human-aligned models with comparable size. | Model | Acc. | |:--------------------------------:|:---------:| | Firefly-Bloom-1B4 | 23.8 | | OpenBuddy-3B | 25.5 | | RedPajama-INCITE-Chat-3B | 25.5 | | OpenLLaMA-Chinese-3B | 25.7 | | ChatGLM2-6B-Chat | 46.0 | | LLaMA2-7B-Chat | 46.2 | | InternLM-7B-Chat | 51.1 | | Baichuan2-7B-Chat | 52.9 | | **Qwen-1.8B-Chat (0-shot)** | 43.3 | | **Qwen-7B-Chat (0-shot)** | 55.8 | | **Qwen-7B-Chat (5-shot)** | 57.0 | ### 代码评测(Coding Evaluation) Qwen-1.8B-Chat在[HumanEval](https://github.com/openai/human-eval)的zero-shot Pass@1效果如下 The zero-shot Pass@1 of Qwen-1.8B-Chat on [HumanEval](https://github.com/openai/human-eval) is demonstrated below | Model | Pass@1 | |:------------------------:|:------:| | Firefly-Bloom-1B4 | 0.6 | | OpenLLaMA-Chinese-3B | 4.9 | | RedPajama-INCITE-Chat-3B | 6.1 | | OpenBuddy-3B | 10.4 | | ChatGLM2-6B-Chat | 11.0 | | LLaMA2-7B-Chat | 12.2 | | Baichuan2-7B-Chat | 13.4 | | InternLM-7B-Chat | 14.6 | | **Qwen-1.8B-Chat** | 26.2 | | **Qwen-7B-Chat** | 37.2 | ### 数学评测(Mathematics Evaluation) 在评测数学能力的[GSM8K](https://github.com/openai/grade-school-math)上,Qwen-1.8B-Chat的准确率结果如下 The accuracy of Qwen-1.8B-Chat on GSM8K is shown below | Model | Acc. 
| |:------------------------------------:|:--------:| | Firefly-Bloom-1B4 | 2.4 | | RedPajama-INCITE-Chat-3B | 2.5 | | OpenLLaMA-Chinese-3B | 3.0 | | OpenBuddy-3B | 12.6 | | LLaMA2-7B-Chat | 26.3 | | ChatGLM2-6B-Chat | 28.8 | | Baichuan2-7B-Chat | 32.8 | | InternLM-7B-Chat | 33.0 | | **Qwen-1.8B-Chat (0-shot)** | 33.7 | | **Qwen-7B-Chat (0-shot)** | 50.3 | | **Qwen-7B-Chat (8-shot)** | 54.1 | ## 评测复现(Reproduction) 我们提供了评测脚本,方便大家复现模型效果,详见[链接](https://github.com/QwenLM/Qwen/tree/main/eval)。提示:由于硬件和框架造成的舍入误差,复现结果如有小幅波动属于正常现象。 We have provided evaluation scripts to reproduce the performance of our model, details as [link](https://github.com/QwenLM/Qwen/tree/main/eval). <br> ## FAQ 如遇到问题,敬请查阅[FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ_zh.md)以及issue区,如仍无法解决再提交issue。 If you meet problems, please refer to [FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ.md) and the issues first to search a solution before you launch a new issue. <br> ## 引用 (Citation) 如果你觉得我们的工作对你有帮助,欢迎引用! If you find our work helpful, feel free to give us a cite. ``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ``` <br> ## 使用协议(License Agreement) 我们的代码和模型权重对学术研究完全开放。请查看[LICENSE](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20RESEARCH%20LICENSE%20AGREEMENT)文件了解具体的开源协议细节。如需商用,请联系我们。 Our code and checkpoints are open to research purpose. Check the [LICENSE](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20RESEARCH%20LICENSE%20AGREEMENT) for more details about the license. For commercial use, please contact us. <br> ## 联系我们(Contact Us) 如果你想给我们的研发团队和产品团队留言,欢迎加入我们的微信群、钉钉群以及Discord!同时,也欢迎通过邮件([email protected])联系我们。 If you are interested to leave a message to either our research team or product team, join our Discord or WeChat groups! Also, feel free to send an email to [email protected].
mradermacher/Venus-120b-v1.0-i1-GGUF
mradermacher
"2024-06-26T03:51:19Z"
10,917
1
transformers
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:nsfwthrowitaway69/Venus-120b-v1.0", "license:llama2", "endpoints_compatible", "region:us" ]
null
"2024-06-11T03:23:57Z"
--- base_model: nsfwthrowitaway69/Venus-120b-v1.0 language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Venus-120b-v1.0-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 25.3 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 27.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 31.9 | | | [GGUF](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 35.5 | | | [GGUF](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 40.5 | | | [GGUF](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q2_K.gguf) | i1-Q2_K | 44.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 46.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 49.3 | | | [PART 1](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 51.9 | IQ3_XS probably better | | [PART 1](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 52.1 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 53.8 | | | [PART 1](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 57.9 | IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q3_K_L.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 63.1 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 64.3 | | | [PART 1](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 68.1 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 68.4 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 72.2 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 82.9 | | | [PART 1](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 85.1 | | | [PART 1](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-120b-v1.0-i1-GGUF/resolve/main/Venus-120b-v1.0.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 98.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
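The multi-part files above are split parts of a single GGUF file and are meant to be concatenated back together before loading; the READMEs linked in the Usage section describe the same step. For example, on Linux or macOS:

```bash
# Join the two parts of the i1-Q4_K_M quant into one GGUF file (the part files can be deleted afterwards)
cat Venus-120b-v1.0.i1-Q4_K_M.gguf.part1of2 Venus-120b-v1.0.i1-Q4_K_M.gguf.part2of2 > Venus-120b-v1.0.i1-Q4_K_M.gguf
```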
KoboldAI/OPT-6B-nerys-v2
KoboldAI
"2022-07-04T07:45:47Z"
10,906
23
transformers
[ "transformers", "pytorch", "opt", "text-generation", "en", "arxiv:2205.01068", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-06-26T10:24:25Z"
--- language: en license: other commercial: no --- # OPT 6B - Nerys ## Model Description OPT 6B-Nerys is a finetune created using Facebook's OPT model. ## Training data The training data contains around 2500 ebooks in various genres (the "Pike" dataset), a CYOA dataset called "CYS" and 50 Asian "Light Novels" (the "Manga-v1" dataset). Most parts of the dataset have been prepended using the following text: `[Genre: <genre1>, <genre2>]` This dataset has been cleaned in the same way as fairseq-dense-13B-Nerys-v2. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/OPT-6B-Nerys-v2') >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50) [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}] ``` ### Limitations and Biases Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion). ### License OPT-6B is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved. ### BibTeX entry and citation info ``` @misc{zhang2022opt, title={OPT: Open Pre-trained Transformer Language Models}, author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer}, year={2022}, eprint={2205.01068}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
hustvl/vitmatte-base-composition-1k
hustvl
"2023-09-21T09:25:07Z"
10,900
4
transformers
[ "transformers", "pytorch", "vitmatte", "vision", "arxiv:2305.15272", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2023-09-10T07:56:12Z"
--- license: apache-2.0 tags: - vision --- # ViTMatte model ViTMatte model trained on Composition-1k. It was introduced in the paper [ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers](https://arxiv.org/abs/2305.15272) by Yao et al. and first released in [this repository](https://github.com/hustvl/ViTMatte). Disclaimer: The team releasing ViTMatte did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ViTMatte is a simple approach to image matting, the task of accurately estimating the foreground object in an image. The model consists of a Vision Transformer (ViT) with a lightweight head on top. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vitmatte_architecture.png" alt="drawing" width="600"/> <small> ViTMatte high-level overview. Taken from the <a href="https://arxiv.org/abs/2305.15272">original paper.</a> </small> ## Intended uses & limitations You can use the raw model for image matting. See the [model hub](https://huggingface.co/models?search=vitmatte) to look for other fine-tuned versions that may interest you. ### How to use We refer to the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/vitmatte#transformers.VitMatteForImageMatting.forward.example). ### BibTeX entry and citation info ```bibtex @misc{yao2023vitmatte, title={ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers}, author={Jingfeng Yao and Xinggang Wang and Shusheng Yang and Baoyuan Wang}, year={2023}, eprint={2305.15272}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
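For a quick start, a minimal sketch of the Transformers API referenced above is shown below; the image and trimap paths are placeholders, and the linked docs remain the authoritative example.

```python
import torch
from PIL import Image
from transformers import VitMatteImageProcessor, VitMatteForImageMatting

processor = VitMatteImageProcessor.from_pretrained("hustvl/vitmatte-base-composition-1k")
model = VitMatteForImageMatting.from_pretrained("hustvl/vitmatte-base-composition-1k")

# Placeholders: an RGB image and its trimap (grayscale; black = background, white = foreground, gray = unknown)
image = Image.open("image.png").convert("RGB")
trimap = Image.open("trimap.png").convert("L")

inputs = processor(images=image, trimaps=trimap, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

alphas = outputs.alphas  # predicted alpha matte, shape (1, 1, H, W), values in [0, 1]
```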
timm/eva02_enormous_patch14_plus_clip_224.laion2b_s9b_b144k
timm
"2024-02-10T23:42:23Z"
10,897
8
open_clip
[ "open_clip", "safetensors", "zero-shot-image-classification", "clip", "license:mit", "region:us" ]
zero-shot-image-classification
"2023-04-11T00:29:55Z"
--- license: mit library_name: open_clip tags: - zero-shot-image-classification - clip --- # Model card for eva02_enormous_patch14_plus_clip_224.laion2b_s9b_b144k
NousResearch/OLMo-Bitnet-1B
NousResearch
"2024-04-11T17:35:02Z"
10,887
108
transformers
[ "transformers", "pytorch", "olmo", "text-generation", "custom_code", "dataset:allenai/dolma", "arxiv:2402.17764", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-30T01:39:01Z"
--- license: apache-2.0 datasets: - allenai/dolma --- # OLMo-Bitnet-1B OLMo-Bitnet-1B is a 1B parameter model trained using the method described in [The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits](https://arxiv.org/abs/2402.17764). It was trained on the first 60B tokens of the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset, so it is merely a research proof-of-concept to test out the methodology. A separate training run was performed with the exact same hyperparameters, but using standard fp16 weights. The comparison can be found in [this wandb report](https://api.wandb.ai/links/emozilla/evltqiv7). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/NAw-hyWJl5ihVsAPqz3Xe.png) Sample inference code: ```sh pip install ai2-olmo ``` ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, TextStreamer tokenizer = AutoTokenizer.from_pretrained("NousResearch/OLMo-Bitnet-1B") model = AutoModelForCausalLM.from_pretrained("NousResearch/OLMo-Bitnet-1B", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") streamer = TextStreamer(tokenizer) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, pad_token_id=tokenizer.eos_token_id, temperature=0.8, repetition_penalty=1.1, do_sample=True, streamer=streamer) pipe("The capitol of Paris is", max_new_tokens=256) ``` Training was performed using [OLMo](https://github.com/allenai/OLMo).
ielab/TILDE
ielab
"2021-06-24T05:46:57Z"
10,885
1
transformers
[ "transformers", "pytorch", "bert", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
Please treat TILDE as a `BertLMHeadModel`: ```python from transformers import BertLMHeadModel, BertTokenizerFast model = BertLMHeadModel.from_pretrained("ielab/TILDE") tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') ``` GitHub: https://github.com/ielab/TILDE
mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF
mradermacher
"2024-06-22T16:45:42Z"
10,883
0
transformers
[ "transformers", "gguf", "en", "base_model:Nitral-AI/Hathor_Stable-L3-8B-v0.5", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-06-22T16:17:28Z"
--- base_model: Nitral-AI/Hathor_Stable-L3-8B-v0.5 language: - en library_name: transformers license: other quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Nitral-AI/Hathor_Stable-L3-8B-v0.5 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for 
some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
tstadel/answer-classification-setfit-v2-binary
tstadel
"2024-01-09T19:55:25Z"
10,879
0
setfit
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:BAAI/bge-base-en-v1.5", "region:us" ]
text-classification
"2024-01-09T19:53:30Z"
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: [] pipeline_tag: text-classification inference: true base_model: BAAI/bge-base-en-v1.5 --- # SetFit with BAAI/bge-base-en-v1.5 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("tstadel/answer-classification-setfit-v2-binary") # Run inference preds = model("I loved the spiderman movie!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.8.17 - SetFit: 1.0.1 - Sentence Transformers: 2.2.2 - Transformers: 4.36.2 - PyTorch: 2.0.1 - Datasets: 2.13.1 - Tokenizers: 0.15.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
pysentimiento/bertweet-hate-speech
pysentimiento
"2023-02-20T19:00:46Z"
10,873
5
pysentimiento
[ "pysentimiento", "pytorch", "roberta", "twitter", "hate-speech", "en", "arxiv:2106.09462", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- language: - en library_name: pysentimiento tags: - twitter - hate-speech --- # Hate Speech detection in English ## bertweet-hate-speech Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/finiteautomata/pysentimiento/) Model trained with SemEval 2019 Task 5: HatEval (SubTask B) corpus for Hate Speech detection in English. Base model is [BERTweet](https://huggingface.co/vinai/bertweet-base), a RoBERTa model trained in English tweets. It is a multi-classifier model, with the following classes: - **HS**: is it hate speech? - **TR**: is it targeted to a specific individual? - **AG**: is it aggressive? ## License `pysentimiento` is an open-source library for non-commercial use and scientific research purposes only. Please be aware that models are trained with third-party datasets and are subject to their respective licenses. 1. [TASS Dataset license](http://tass.sepln.org/tass_data/download.php) 2. [SEMEval 2017 Dataset license]() ## Citation If you use this model in your work, please cite the following papers: ``` @misc{perez2021pysentimiento, title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks}, author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque}, year={2021}, eprint={2106.09462}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{nguyen2020bertweet, title={BERTweet: A pre-trained language model for English Tweets}, author={Nguyen, Dat Quoc and Vu, Thanh and Nguyen, Anh Tuan}, booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations}, pages={9--14}, year={2020} } @inproceedings{basile2019semeval, title={Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter}, author={Basile, Valerio and Bosco, Cristina and Fersini, Elisabetta and Nozza, Debora and Patti, Viviana and Pardo, Francisco Manuel Rangel and Rosso, Paolo and Sanguinetti, Manuela}, booktitle={Proceedings of the 13th international workshop on semantic evaluation}, pages={54--63}, year={2019} } ``` Enjoy! 🤗
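A short usage sketch with the `pysentimiento` library is given below. The `create_analyzer` call follows the library's documented interface; the exact label strings returned by the analyzer may differ from the HS/TR/AG abbreviations used above.

```python
from pysentimiento import create_analyzer

# Multi-label hate speech analyzer for English tweets
analyzer = create_analyzer(task="hate_speech", lang="en")

result = analyzer.predict("Some random tweet about the weather")
print(result.output)  # list of predicted labels (hateful / targeted / aggressive); empty for benign text
```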
mradermacher/Treue-GGUF
mradermacher
"2024-06-26T11:01:47Z"
10,863
0
transformers
[ "transformers", "gguf", "en", "base_model:Schwaenzli/Treue", "endpoints_compatible", "region:us" ]
null
"2024-06-26T10:22:18Z"
--- base_model: Schwaenzli/Treue language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Schwaenzli/Treue <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Treue-GGUF/resolve/main/Treue.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Morfoz-LLM-8b-v1.0-GGUF
mradermacher
"2024-06-23T19:33:46Z"
10,860
0
transformers
[ "transformers", "gguf", "tr", "base_model:Morfoz-Aigap/Morfoz-LLM-8b-v1.0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T19:05:58Z"
--- base_model: Morfoz-Aigap/Morfoz-LLM-8b-v1.0 language: - tr library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Morfoz-Aigap/Morfoz-LLM-8b-v1.0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Morfoz-LLM-8b-v1.0-GGUF/resolve/main/Morfoz-LLM-8b-v1.0.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some 
other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
asahi417/tner-xlm-roberta-base-ontonotes5
asahi417
"2022-11-04T03:24:37Z"
10,855
4
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "en", "arxiv:2209.12616", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- language: - en --- # Model Card for XLM-RoBERTa for NER XLM-RoBERTa finetuned on NER. # Model Details ## Model Description XLM-RoBERTa finetuned on NER. - **Developed by:** Asahi Ushio - **Shared by [Optional]:** Hugging Face - **Model type:** Token Classification - **Language(s) (NLP):** en - **License:** More information needed - **Related Models:** XLM-RoBERTa - **Parent Model:** XLM-RoBERTa - **Resources for more information:** - [GitHub Repo](https://github.com/asahi417/tner) - [Associated Paper](https://arxiv.org/abs/2209.12616) - [Space](https://huggingface.co/spaces/akdeniz27/turkish-named-entity-recognition) # Uses ## Direct Use Token Classification ## Downstream Use [Optional] This model can be used in conjunction with the [tner library](https://github.com/asahi417/tner). ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recomendations. # Training Details ## Training Data An NER dataset contains a sequence of tokens and tags for each split (usually `train`/`validation`/`test`), ```python { 'train': { 'tokens': [ ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'], ['From', 'Green', 'Newsfeed', ':', 'AHFA', 'extends', 'deadline', 'for', 'Sage', 'Award', 'to', 'Nov', '.', '5', 'http://tinyurl.com/24agj38'], ... ], 'tags': [ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ... ] }, 'validation': ..., 'test': ..., } ``` with a dictionary to map a label to its index (`label2id`) as below. ```python {"O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4, "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8} ``` ## Training Procedure ### Preprocessing More information needed ### Speeds, Sizes, Times **Layer_norm_eps:** 1e-05, **Num_attention_heads:** 12, **Num_hidden_layers:** 12, **Vocab_size:** 250002 # Evaluation ## Testing Data, Factors & Metrics ### Testing Data See [dataset card](https://github.com/asahi417/tner/blob/master/DATASET_CARD.md) for full dataset lists ### Factors More information needed ### Metrics More information needed ## Results More information needed # Model Examination More information needed # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software More information needed # Citation **BibTeX:** ``` @inproceedings{ushio-camacho-collados-2021-ner, title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition", author = "Ushio, Asahi and Camacho-Collados, Jose", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.eacl-demos.7", pages = "53--62", } ``` # Glossary [optional] More information needed # More Information [optional] More information needed # Model Card Authors [optional] Asahi Ushio in collaboration with Ezi Ozoani and the Hugging Face team. # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5") model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5") ``` </details>
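For a quick sanity check, here is a minimal inference sketch using the `transformers` pipeline API (the example sentence and the `aggregation_strategy` setting are illustrative assumptions, not taken from the original training setup):

```python
from transformers import pipeline

# Token-classification pipeline; the entity label set is read from the checkpoint's config.
ner = pipeline(
    "token-classification",
    model="asahi417/tner-xlm-roberta-base-ontonotes5",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Jacob Collier is a Grammy-awarded artist from London."))
```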
TheBloke/CodeLlama-7B-Instruct-GGUF
TheBloke
"2023-09-27T12:46:02Z"
10,840
109
transformers
[ "transformers", "gguf", "llama", "llama-2", "text-generation", "code", "arxiv:2308.12950", "base_model:codellama/CodeLlama-7b-instruct-hf", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
"2023-08-24T17:01:14Z"
--- language: - code license: llama2 tags: - llama-2 model_name: CodeLlama 7B Instruct base_model: codellama/CodeLlama-7b-instruct-hf inference: false model_creator: Meta model_type: llama pipeline_tag: text-generation prompt_template: '[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: {prompt} [/INST] ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # CodeLlama 7B Instruct - GGUF - Model creator: [Meta](https://huggingface.co/meta-llama) - Original model: [CodeLlama 7B Instruct](https://huggingface.co/codellama/CodeLlama-7b-instruct-hf) <!-- description start --> ## Description This repo contains GGUF format model files for [Meta's CodeLlama 7B Instruct](https://huggingface.co/codellama/CodeLlama-7b-instruct-hf). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF) * [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/codellama/CodeLlama-7b-instruct-hf) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: CodeLlama ``` [INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: {prompt} [/INST] ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [codellama-7b-instruct.Q2_K.gguf](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/blob/main/codellama-7b-instruct.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [codellama-7b-instruct.Q3_K_S.gguf](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/blob/main/codellama-7b-instruct.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [codellama-7b-instruct.Q3_K_M.gguf](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/blob/main/codellama-7b-instruct.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [codellama-7b-instruct.Q3_K_L.gguf](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/blob/main/codellama-7b-instruct.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [codellama-7b-instruct.Q4_0.gguf](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/blob/main/codellama-7b-instruct.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [codellama-7b-instruct.Q4_K_S.gguf](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/blob/main/codellama-7b-instruct.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [codellama-7b-instruct.Q4_K_M.gguf](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/blob/main/codellama-7b-instruct.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [codellama-7b-instruct.Q5_0.gguf](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/blob/main/codellama-7b-instruct.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [codellama-7b-instruct.Q5_K_S.gguf](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/blob/main/codellama-7b-instruct.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [codellama-7b-instruct.Q5_K_M.gguf](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/blob/main/codellama-7b-instruct.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [codellama-7b-instruct.Q6_K.gguf](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/blob/main/codellama-7b-instruct.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [codellama-7b-instruct.Q8_0.gguf](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/blob/main/codellama-7b-instruct.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/CodeLlama-7B-Instruct-GGUF and below it, a specific filename to download, such as: codellama-7b-instruct.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/CodeLlama-7B-Instruct-GGUF codellama-7b-instruct.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/CodeLlama-7B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CodeLlama-7B-Instruct-GGUF codellama-7b-instruct.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m codellama-7b-instruct.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:\n{prompt}\n[/INST]" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
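### How to load this model from Python using llama-cpp-python

A minimal sketch (the local file name, prompt and layer count are illustrative, and a llama-cpp-python build with GGUF support is assumed):

```python
from llama_cpp import Llama

# Load the GGUF file downloaded earlier; set n_gpu_layers=0 for CPU-only inference.
llm = Llama(
    model_path="codellama-7b-instruct.q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)

prompt = "[INST] Write a Python function that returns the n-th Fibonacci number. [/INST]"
output = llm(prompt, max_tokens=512, temperature=0.7)
print(output["choices"][0]["text"])
```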
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/CodeLlama-7B-Instruct-GGUF", model_file="codellama-7b-instruct.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J.
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Meta's CodeLlama 7B Instruct # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 7B instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom. | | Base Model | Python | Instruct | | --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | | 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) | | 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | | 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) | ## Model Use To use this model, please make sure to install transformers from `main` until the next version is released: ```bash pip install git+https://github.com/huggingface/transformers.git@main accelerate ``` Model capabilities: - [x] Code completion. - [x] Infilling. - [x] Instructions / chat. - [ ] Python specialist. ## Model Details *Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs). **Model Developers** Meta **Variations** Code Llama comes in three model sizes, and three variants: * Code Llama: base models designed for general code synthesis and understanding * Code Llama - Python: designed specifically for Python * Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B and 34B parameters. **This repository contains the Instruct version of the 7B parameters model.** **Input** Models input text only. **Output** Models generate text only. 
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. **Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950). ## Intended Use **Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## Hardware and Software **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program. ## Training Data All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). ## Evaluation Results See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. ## Ethical Considerations and Limitations Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide). <!-- original-model-card end -->
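Referring back to the Model Use section above, a minimal sketch of running the original (unquantised) checkpoint with `transformers` (the prompt and generation settings below are illustrative, not Meta's reference settings):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-Instruct-hf")
model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-Instruct-hf",
    torch_dtype=torch.float16,
    device_map="auto",  # needs accelerate, as installed above
)

prompt = "[INST] Write a Python function that checks whether a string is a palindrome. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.2, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```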
RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf
RichardErkhov
"2024-06-25T13:54:40Z"
10,833
0
null
[ "gguf", "arxiv:2404.16792", "region:us" ]
null
"2024-06-25T10:16:53Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Starling-LM-7B-alpha-ExPO - GGUF - Model creator: https://huggingface.co/chujiezheng/ - Original model: https://huggingface.co/chujiezheng/Starling-LM-7B-alpha-ExPO/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Starling-LM-7B-alpha-ExPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q2_K.gguf) | Q2_K | 2.53GB | | [Starling-LM-7B-alpha-ExPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Starling-LM-7B-alpha-ExPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Starling-LM-7B-alpha-ExPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Starling-LM-7B-alpha-ExPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Starling-LM-7B-alpha-ExPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q3_K.gguf) | Q3_K | 3.28GB | | [Starling-LM-7B-alpha-ExPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Starling-LM-7B-alpha-ExPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Starling-LM-7B-alpha-ExPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Starling-LM-7B-alpha-ExPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q4_0.gguf) | Q4_0 | 3.83GB | | [Starling-LM-7B-alpha-ExPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Starling-LM-7B-alpha-ExPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Starling-LM-7B-alpha-ExPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q4_K.gguf) | Q4_K | 4.07GB | | [Starling-LM-7B-alpha-ExPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Starling-LM-7B-alpha-ExPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q4_1.gguf) | Q4_1 | 4.24GB | | [Starling-LM-7B-alpha-ExPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q5_0.gguf) | Q5_0 | 4.65GB | | 
[Starling-LM-7B-alpha-ExPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Starling-LM-7B-alpha-ExPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q5_K.gguf) | Q5_K | 4.78GB | | [Starling-LM-7B-alpha-ExPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [Starling-LM-7B-alpha-ExPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q5_1.gguf) | Q5_1 | 5.07GB | | [Starling-LM-7B-alpha-ExPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q6_K.gguf) | Q6_K | 5.53GB | | [Starling-LM-7B-alpha-ExPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/chujiezheng_-_Starling-LM-7B-alpha-ExPO-gguf/blob/main/Starling-LM-7B-alpha-ExPO.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: apache-2.0 language: - en --- # Starling-LM-7B-alpha-ExPO The extrapolated (ExPO) model based on [`berkeley-nest/Starling-LM-7B-alpha`](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [`openchat/openchat_3.5`](https://huggingface.co/openchat/openchat_3.5), as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper. Specifically, we obtain this model by extrapolating **(alpha = 0.2)** from the weights of the SFT and DPO/RLHF checkpoints, achieving superior alignment with human preference. ## Evaluation Results Evaluation results on the **AlpacaEval 2.0** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_alpaca)): | | Win Rate (Ori) | LC Win Rate (Ori) | Win Rate (+ ExPO) | LC Win Rate (+ ExPO) | | ------------------------------------ | -------------- | ----------------- | ----------------- | -------------------- | | `HuggingFaceH4/zephyr-7b-alpha` | 6.7% | 10.0% | **10.6%** | **13.6%** | | `HuggingFaceH4/zephyr-7b-beta` | 10.2% | 13.2% | **11.1%** | **14.0%** | | `berkeley-nest/Starling-LM-7B-alpha` | 15.0% | 18.3% | **18.2%** | **19.5%** | | `Nexusflow/Starling-LM-7B-beta` | 26.6% | 25.8% | **29.6%** | **26.4%** | | `snorkelai/Snorkel-Mistral-PairRM` | 24.7% | 24.0% | **28.8%** | **26.4%** | | `RLHFlow/LLaMA3-iterative-DPO-final` | 29.2% | 36.0% | **32.7%** | **37.8%** | | `internlm/internlm2-chat-1.8b` | 3.8% | 4.0% | **5.2%** | **4.3%** | | `internlm/internlm2-chat-7b` | 20.5% | 18.3% | **28.1%** | **22.7%** | | `internlm/internlm2-chat-20b` | 36.1% | 24.9% | **46.2%** | **27.2%** | | `allenai/tulu-2-dpo-7b` | 8.5% | 10.2% | **11.5%** | **11.7%** | | `allenai/tulu-2-dpo-13b` | 11.2% | 15.5% | **15.6%** | **17.6%** | | `allenai/tulu-2-dpo-70b` | 15.4% | 21.2% | **23.0%** | **25.7%** | Evaluation results on the **MT-Bench** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_mtbench)): | | Original | + ExPO | | ------------------------------------ | -------- | -------- | | `HuggingFaceH4/zephyr-7b-alpha` | 6.85 | **6.87** | | `HuggingFaceH4/zephyr-7b-beta` | 7.02 | **7.06** | | `berkeley-nest/Starling-LM-7B-alpha` | 7.82 | **7.91** | | `Nexusflow/Starling-LM-7B-beta` | 8.10 | **8.18** | | 
`snorkelai/Snorkel-Mistral-PairRM` | 7.63 | **7.69** | | `RLHFlow/LLaMA3-iterative-DPO-final` | 8.08 | **8.45** | | `internlm/internlm2-chat-1.8b` | 5.17 | **5.26** | | `internlm/internlm2-chat-7b` | 7.72 | **7.80** | | `internlm/internlm2-chat-20b` | 8.13 | **8.26** | | `allenai/tulu-2-dpo-7b` | 6.35 | **6.38** | | `allenai/tulu-2-dpo-13b` | 7.00 | **7.26** | | `allenai/tulu-2-dpo-70b` | 7.79 | **8.03** |
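For illustration, the extrapolation step described above can be sketched roughly as follows (a sketch only: the checkpoint names, the plain state-dict arithmetic, and the save path are assumptions about the general ExPO recipe rather than the exact script used for this model):

```python
import torch
from transformers import AutoModelForCausalLM

alpha = 0.2  # extrapolation strength, as stated above

# theta_expo = theta_rlhf + alpha * (theta_rlhf - theta_sft)
sft = AutoModelForCausalLM.from_pretrained("openchat/openchat_3.5", torch_dtype=torch.bfloat16)
rlhf = AutoModelForCausalLM.from_pretrained("berkeley-nest/Starling-LM-7B-alpha", torch_dtype=torch.bfloat16)

sft_state = sft.state_dict()
expo_state = {name: w + alpha * (w - sft_state[name]) for name, w in rlhf.state_dict().items()}

rlhf.load_state_dict(expo_state)
rlhf.save_pretrained("Starling-LM-7B-alpha-ExPO-sketch")
```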
lmstudio-community/Phi-3-mini-4k-instruct-GGUF
lmstudio-community
"2024-04-30T05:05:11Z"
10,831
19
null
[ "gguf", "nlp", "code", "text-generation", "en", "arxiv:2404.14219", "license:mit", "region:us" ]
text-generation
"2024-04-24T22:08:51Z"
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE language: - en pipeline_tag: text-generation tags: - nlp - code quantized_by: bartowski lm_studio: param_count: 4b use_case: chat release_date: 23-04-2024 model_creator: Microsoft prompt_template: Phi 3 system_prompt: You are a helpful AI assistant. base_model: Phi original_repo: microsoft/Phi-3-mini-4k-instruct --- ## 💫 Community Model> Phi-3 mini 4k instruct by Microsoft *👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*. **Model creator:** [Microsoft](https://huggingface.co/microsoft)<br> **Original model**: [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)<br> **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2717](https://github.com/ggerganov/llama.cpp/releases/tag/b2717)<br> ## Model Summary: Phi-3 Mini 4k instruct is Microsoft's 3.8B parameter instruct tuned model based on their brand new Phi-3 dataset.<br> Phi-3 models are unique in that a large portion of their training data is sourced from purely synthetic generation from GPT-3.5, with additional filtered publicly available data.<br> These models were tuned with common sense, language understanding, math, code, long context, and logical reasoning as the primary focus. ## Prompt template: Choose the `Phi 3` preset in your LM Studio. Under the hood, the model will see a prompt that's formatted like so: ``` <s><|system|> You are a helpful AI assistant.<|end|><|user|> {prompt}<|end|><|assistant|> ``` ## Use case and examples Phi-3 is a great model for anyone who wants a powerful but extremely lightweight model. Especially compressed, this model can run on extremely low power hardware while still exceeding significantly larger models in most tasks. ### Creativity ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/GHTlwtnk_FB8h-aUNGuxu.png) ### Coding ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/h81SOS5oZq0TyZqBDERwz.png) ### Logic solving ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/ANSr350GRpNs0CDnHK6zM.png) ## Technical Details Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines. Context length: 4K tokens This model was trained on 3.3T tokens of data and is a combination of: - publicly available documents that were filtered rigorously for quality, selected high-quality educational data, and code - Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.) - High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. Find more details on the arXiv report [here](https://arxiv.org/abs/2404.14219) ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for the IQ1_M and IQ2_XS quants, which makes them usable even at their tiny size! ## Disclaimers LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
ielab/TILDEv2-TILDE200-exp
ielab
"2021-10-31T13:50:55Z"
10,830
0
transformers
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
TILDEv2 trained with passages expanded with TILDE (m=200)
gaianet/llm-compiler-13b-ftd-GGUF
gaianet
"2024-06-29T12:53:10Z"
10,830
0
transformers
[ "transformers", "gguf", "llama", "text-generation", "code", "base_model:facebook/llm-compiler-13b-ftd", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-29T11:00:57Z"
--- language: - code license: other model_name: llm-compiler-13b-ftd base_model: facebook/llm-compiler-13b-ftd inference: false model_creator: facebook quantized_by: Second State Inc. --- ![](https://github.com/GaiaNet-AI/.github/assets/45785633/d6976adc-f97d-4f86-a648-0f2f5c8e7eee) # llm-compiler-13b-ftd-GGUF ## Original Model [facebook/llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd) **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) ## Run with Gaianet (coming soon) <!-- **Prompt template:** prompt template: `gemma-instruct` **Context size:** chat_ctx_size: `8192` --> **Run with GaiaNet:** - Quick start: https://docs.gaianet.ai/node-guide/quick-start - Customize your node: https://docs.gaianet.ai/node-guide/customize *Quantized with llama.cpp b3259*
THUDM/chatglm3-6b-base
THUDM
"2023-11-13T07:43:39Z"
10,828
86
transformers
[ "transformers", "pytorch", "chatglm", "glm", "thudm", "custom_code", "zh", "en", "arxiv:2103.10360", "arxiv:2210.02414", "endpoints_compatible", "region:us" ]
null
"2023-10-26T09:34:43Z"
--- language: - zh - en tags: - glm - chatglm - thudm --- # ChatGLM3-6B-Base <p align="center"> 💻 <a href="https://github.com/THUDM/ChatGLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2103.10360" target="_blank">[GLM@ACL 22]</a> <a href="https://github.com/THUDM/GLM" target="_blank">[GitHub]</a> • 📃 <a href="https://arxiv.org/abs/2210.02414" target="_blank">[GLM-130B@ICLR 23]</a> <a href="https://github.com/THUDM/GLM-130B" target="_blank">[GitHub]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://join.slack.com/t/chatglm/shared_invite/zt-25ti5uohv-A_hs~am_D3Q8XPZMpj7wwQ" target="_blank">Slack</a> and <a href="https://github.com/THUDM/ChatGLM/blob/main/resources/WECHAT.md" target="_blank">WeChat</a> </p> <p align="center"> 📍Experience the larger-scale ChatGLM model at <a href="https://www.chatglm.cn">chatglm.cn</a> </p> ## 介绍 (Introduction) ChatGLM3-6B 是 ChatGLM 系列最新一代的开源模型,在保留了前两代模型对话流畅、部署门槛低等众多优秀特性的基础上,ChatGLM3-6B 引入了如下特性: 1. **更强大的基础模型:** ChatGLM3-6B 的基础模型 ChatGLM3-6B-Base 采用了更多样的训练数据、更充分的训练步数和更合理的训练策略。在语义、数学、推理、代码、知识等不同角度的数据集上测评显示,ChatGLM3-6B-Base 具有在 10B 以下的预训练模型中最强的性能。 2. **更完整的功能支持:** ChatGLM3-6B 采用了全新设计的 [Prompt 格式](https://github.com/THUDM/ChatGLM3/blob/main/PROMPT.md),除正常的多轮对话外。同时原生支持[工具调用](https://github.com/THUDM/ChatGLM3/blob/main/tool_using/README.md)(Function Call)、代码执行(Code Interpreter)和 Agent 任务等复杂场景。 3. **更全面的开源序列:** 除了对话模型 ChatGLM3-6B 外,还开源了基础模型 ChatGLM-6B-Base、长文本对话模型 ChatGLM3-6B-32K。以上所有权重对学术研究**完全开放**,在填写[问卷](https://open.bigmodel.cn/mla/form)进行登记后**亦允许免费商业使用**。 本仓库为 ChatGLM3-6B 的基础模型 ChatGLM3-6B-Base。 ChatGLM3-6B is the latest open-source model in the ChatGLM series. While retaining many excellent features such as smooth dialogue and low deployment threshold from the previous two generations, ChatGLM3-6B introduces the following features: 1. **More Powerful Base Model:** The base model of ChatGLM3-6B, ChatGLM3-6B-Base, employs a more diverse training dataset, more sufficient training steps, and a more reasonable training strategy. Evaluations on datasets such as semantics, mathematics, reasoning, code, knowledge, etc., show that ChatGLM3-6B-Base has the strongest performance among pre-trained models under 10B. 2. **More Comprehensive Function Support:** ChatGLM3-6B adopts a newly designed [Prompt format](https://github.com/THUDM/ChatGLM3/blob/main/PROMPT_en.md), in addition to the normal multi-turn dialogue. It also natively supports [function call](https://github.com/THUDM/ChatGLM3/blob/main/tool_using/README_en.md), code interpreter, and complex scenarios such as agent tasks. 3. **More Comprehensive Open-source Series:** In addition to the dialogue model ChatGLM3-6B, the base model ChatGLM-6B-Base and the long-text dialogue model ChatGLM3-6B-32K are also open-sourced. All the weights are **fully open** for academic research, and after completing the [questionnaire](https://open.bigmodel.cn/mla/form) registration, they are also **allowed for free commercial use**. This repo is ChatGLM3-6B-Base, the base model of ChatGLM3-6B. ## 软件依赖 (Dependencies) ```shell pip install protobuf transformers==4.30.2 cpm_kernels torch>=2.0 gradio mdtex2html sentencepiece accelerate ``` ## 代码调用 (Code Usage) 作为没有经过人类意图对齐的模型,ChatGLM3-6B-Base 不能用于多轮对话。但是可以进行文本续写。 As a model that has not been aligned with human intent, ChatGLM3-6B-Base cannot be used for multi-turn conversations. However, text completion is possible. 
```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm3-6b-base", trust_remote_code=True) model = AutoModel.from_pretrained("THUDM/chatglm3-6b-base", trust_remote_code=True).half().cuda() inputs = tokenizer(["今天天气真不错"], return_tensors="pt").to('cuda') outputs = model.generate(**inputs) print(tokenizer.decode(outputs[0].tolist())) ``` 关于更多的使用说明,包括如何运行命令行和网页版本的 DEMO,以及使用模型量化以节省显存,请参考我们的 [Github Repo](https://github.com/THUDM/ChatGLM)。 For more instructions, including how to run CLI and web demos, and model quantization, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM). ## 协议 (License) 本仓库的代码依照 [Apache-2.0](LICENSE) 协议开源,ChatGLM3-6B 模型的权重的使用则需要遵循 [Model License](MODEL_LICENSE)。 The code in this repository is open-sourced under the [Apache-2.0 license](LICENSE), while the use of the ChatGLM3-6B model weights needs to comply with the [Model License](MODEL_LICENSE). ## 引用 (Citation) 如果你觉得我们的工作有帮助的话,请考虑引用下列论文。 If you find our work helpful, please consider citing the following papers. ``` @article{zeng2022glm, title={Glm-130b: An open bilingual pre-trained model}, author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others}, journal={arXiv preprint arXiv:2210.02414}, year={2022} } ``` ``` @inproceedings{du2022glm, title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling}, author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie}, booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, pages={320--335}, year={2022} } ```
mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF
mradermacher
"2024-06-25T07:02:28Z"
10,822
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "axolotl", "en", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:cognitivecomputations/dolphin-2.9.3-mistral-7B-32k", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-25T04:10:32Z"
--- base_model: cognitivecomputations/dolphin-2.9.3-mistral-7B-32k datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - Locutusque/function-calling-chatml - internlm/Agent-FLAN language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - generated_from_trainer - axolotl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/cognitivecomputations/dolphin-2.9.3-mistral-7B-32k <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
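As a minimal Python starting point, one of the quants above can be fetched and loaded roughly like this (a sketch: it assumes `huggingface_hub` and a GGUF-capable `llama-cpp-python` build are installed; the file name matches the i1-Q4_K_M entry recommended above):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF",
    filename="dolphin-2.9.3-mistral-7B-32k.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)  # raise n_ctx if you need the model's longer context
print(llm("Explain in one sentence what an imatrix quant is.", max_tokens=128)["choices"][0]["text"])
```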
mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF
mradermacher
"2024-06-22T17:57:26Z"
10,821
0
transformers
[ "transformers", "gguf", "en", "base_model:Nitral-AI/Hathor_Stable-L3-8B-v0.5", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-06-22T16:42:43Z"
--- base_model: Nitral-AI/Hathor_Stable-L3-8B-v0.5 language: - en library_name: transformers license: other quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Nitral-AI/Hathor_Stable-L3-8B-v0.5 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Hathor_Stable-L3-8B-v0.5-i1-GGUF/resolve/main/Hathor_Stable-L3-8B-v0.5.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
hakurei/lit-6B
hakurei
"2021-11-08T23:02:41Z"
10,820
63
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "causal-lm", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
--- language: - en tags: - pytorch - causal-lm license: mit --- # Lit-6B - A Large Fine-tuned Model For Fictional Storytelling Lit-6B is a GPT-J 6B model fine-tuned on 2GB of a diverse range of light novels, erotica, and annotated literature for the purpose of generating novel-like fictional text. ## Model Description The model used for fine-tuning is [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax), which is a 6 billion parameter auto-regressive language model trained on [The Pile](https://pile.eleuther.ai/). ## Training Data & Annotative Prompting The data used in fine-tuning has been gathered from various sources such as the [Gutenberg Project](https://www.gutenberg.org/). The annotated fiction dataset has prepended tags to assist in generating towards a particular style. Here is an example prompt that shows how to use the annotations. ``` [ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror; Tags: 3rdperson, scary; Style: Dark ] *** When a traveler in north central Massachusetts takes the wrong fork... ``` The annotations can be mixed and matched to help generate towards a specific style. ## Downstream Uses This model can be used for entertainment purposes and as a creative writing assistant for fiction writers. ## Example Code ``` from transformers import AutoTokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained('hakurei/lit-6B') tokenizer = AutoTokenizer.from_pretrained('hakurei/lit-6B') prompt = '''[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ] *** When a traveler''' input_ids = tokenizer.encode(prompt, return_tensors='pt') output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, repetition_penalty=1.2, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id) generated_text = tokenizer.decode(output[0]) print(generated_text) ``` An example output from this code produces a result that will look similar to: ``` [ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ] *** When a traveler comes to an unknown region, his thoughts turn inevitably towards the old gods and legends which cluster around its appearance. It is not that he believes in them or suspects their reality—but merely because they are present somewhere else in creation just as truly as himself, and so belong of necessity in any landscape whose features cannot be altogether strange to him. Moreover, man has been prone from ancient times to brood over those things most connected with the places where he dwells. Thus the Olympian deities who ruled Hyper ``` ## Team members and Acknowledgements This project would not have been possible without the computational resources graciously provided by the [TPU Research Cloud](https://sites.research.google/trc/) - [Anthony Mercurio](https://github.com/harubaru) - Imperishable_NEET
mradermacher/catgirl_base_cpt-GGUF
mradermacher
"2024-06-19T19:55:03Z"
10,819
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "base_model:k8tems/catgirl_base_cpt", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-19T18:07:56Z"
--- base_model: k8tems/catgirl_base_cpt language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/k8tems/catgirl_base_cpt <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.Q2_K.gguf) | Q2_K | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.IQ3_XS.gguf) | IQ3_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.IQ3_S.gguf) | IQ3_S | 3.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.Q3_K_S.gguf) | Q3_K_S | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.IQ3_M.gguf) | IQ3_M | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.IQ4_XS.gguf) | IQ4_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.Q5_K_M.gguf) | Q5_K_M | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.Q6_K.gguf) | Q6_K | 5.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/catgirl_base_cpt-GGUF/resolve/main/catgirl_base_cpt.f16.gguf) | f16 | 13.8 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want 
some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/OpenCAI-8B-V2-i1-GGUF
mradermacher
"2024-06-20T14:15:19Z"
10,819
2
transformers
[ "transformers", "gguf", "art", "not-for-all-audiences", "en", "dataset:Norquinal/OpenCAI", "base_model:Norquinal/OpenCAI-8B-V2", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-20T11:09:19Z"
--- base_model: Norquinal/OpenCAI-8B-V2 datasets: Norquinal/OpenCAI language: en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - art - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Norquinal/OpenCAI-8B-V2 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF/resolve/main/OpenCAI-8B-V2.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
facebook/deit-base-distilled-patch16-384
facebook
"2023-09-12T20:40:32Z"
10,813
5
transformers
[ "transformers", "pytorch", "tf", "safetensors", "deit", "image-classification", "vision", "dataset:imagenet", "arxiv:2012.12877", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet --- # Distilled Data-efficient Image Transformer (base-sized model) Distilled data-efficient Image Transformer (DeiT) model pre-trained at resolution 224x224 and fine-tuned at resolution 384x384 on ImageNet-1k (1 million images, 1,000 classes). It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, DeiTForImageClassificationWithTeacher from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-distilled-patch16-384') model = DeiTForImageClassificationWithTeacher.from_pretrained('facebook/deit-base-distilled-patch16-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. ## Training data This model was pretrained and fine-tuned with distillation on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78). At inference time, images are resized/rescaled to the same resolution (438x438), center-cropped at 384x384 and normalized across the RGB channels with the ImageNet mean and standard deviation. 
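As a rough illustration of the inference-time preprocessing described above, here is a short torchvision sketch; the interpolation mode and the ImageNet mean/std values are the commonly used defaults and are assumptions here rather than values taken from the linked script (in practice, `AutoFeatureExtractor` applies the equivalent steps for you).

```python
from torchvision import transforms

# Sketch of the inference-time preprocessing: resize to 438, center-crop to 384x384,
# then normalize each RGB channel with (assumed) ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize(438),
    transforms.CenterCrop(384),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```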
### Pretraining The model was trained on a single 8-GPU node for 3 days. Pre-training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |-------------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------| | DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 | | DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 | | DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 | | DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 | | DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 | | DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 | | DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 | | **DeiT-base distilled 384 (1000 epochs)** | **85.2** | **97.2** | **88M** | **https://huggingface.co/facebook/deit-base-distilled-patch16-384** | Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{touvron2021training, title={Training data-efficient image transformers & distillation through attention}, author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou}, year={2021}, eprint={2012.12877}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
kandinsky-community/kandinsky-2-1
kandinsky-community
"2023-10-09T11:33:20Z"
10,807
35
diffusers
[ "diffusers", "safetensors", "text-to-image", "kandinsky", "license:apache-2.0", "diffusers:KandinskyPipeline", "region:us" ]
text-to-image
"2023-05-24T09:52:07Z"
--- license: apache-2.0 prior: - kandinsky-community/kandinsky-2-1-prior tags: - text-to-image - kandinsky inference: false --- # Kandinsky 2.1 Kandinsky 2.1 inherits best practices from Dall-E 2 and Latent diffusion while introducing some new ideas. It uses the CLIP model as a text and image encoder, and diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation. The Kandinsky model is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey) and [Denis Dimitrov](https://github.com/denndimitrov) ## Usage Kandinsky 2.1 is available in diffusers! ```bash pip install diffusers transformers accelerate ``` ### Text to image ```python from diffusers import AutoPipelineForText2Image import torch pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) pipe.enable_model_cpu_offload() prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" negative_prompt = "low quality, bad quality" image = pipe(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, height=768, width=768).images[0] image.save("cheeseburger_monster.png") ``` ![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/cheeseburger.png) ### Text Guided Image-to-Image Generation ```python from diffusers import AutoPipelineForImage2Image import torch import requests from io import BytesIO from PIL import Image pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) pipe.enable_model_cpu_offload() prompt = "A fantasy landscape, Cinematic lighting" negative_prompt = "low quality, bad quality" url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" response = requests.get(url) original_image = Image.open(BytesIO(response.content)).convert("RGB") original_image.thumbnail((768, 768)) image = pipe(prompt=prompt, image=original_image, strength=0.3).images[0] image.save("fantasy_land.png") ``` ![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/img2img_fantasyland.png) ### Interpolate ```python from diffusers import KandinskyPriorPipeline, KandinskyPipeline from diffusers.utils import load_image import torch pipe_prior = KandinskyPriorPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 ) pipe_prior.to("cuda") img1 = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" ) img2 = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/starry_night.jpeg" ) # add all the conditions we want to interpolate, can be either text or image images_texts = ["a cat", img1, img2] # specify the weights for each condition in images_texts weights = [0.3, 0.3, 0.4] # We can leave the prompt empty prompt = "" prior_out = pipe_prior.interpolate(images_texts, weights) pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
pipe.to("cuda") image = pipe(prompt, **prior_out, height=768, width=768).images[0] image.save("starry_cat.png") ``` ![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/starry_cat.png) ## Model Architecture ### Overview Kandinsky 2.1 is a text-conditional diffusion model based on unCLIP and latent diffusion, composed of a transformer-based image prior model, a UNet diffusion model, and a decoder. The model architectures are illustrated in the figure below - the chart on the left describes the process to train the image prior model, the figure in the center is the text-to-image generation process, and the figure on the right is image interpolation. <p float="left"> <img src="https://raw.githubusercontent.com/ai-forever/Kandinsky-2/main/content/kandinsky21.png"/> </p> Specifically, the image prior model was trained on CLIP text and image embeddings generated with a pre-trained [mCLIP model](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-L-14). The trained image prior model is then used to generate mCLIP image embeddings for input text prompts. Both the input text prompts and their mCLIP image embeddings are used in the diffusion process. A [MoVQGAN](https://openreview.net/forum?id=Qb-AoSw4Jnm) model acts as the final block of the model, which decodes the latent representation into an actual image. ### Details The image prior training of the model was performed on the [LAION Improved Aesthetics dataset](https://huggingface.co/datasets/bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images), and then fine-tuning was performed on the [LAION HighRes data](https://huggingface.co/datasets/laion/laion-high-resolution). The main Text2Image diffusion model was trained on the basis of 170M text-image pairs from the [LAION HighRes dataset](https://huggingface.co/datasets/laion/laion-high-resolution) (an important condition was the presence of images with a resolution of at least 768x768). The use of 170M pairs is due to the fact that we kept the UNet diffusion block from Kandinsky 2.0, which allowed us not to train it from scratch. Further, at the fine-tuning stage, a dataset of 2M very high-quality, high-resolution images with descriptions (COYO, anime, landmarks_russia, and a number of others), separately collected from open sources, was used. ### Evaluation We quantitatively measure the performance of Kandinsky 2.1 on the COCO_30k dataset, in zero-shot mode. The table below presents FID metric values for generative models on COCO_30k. | | FID (30k)| |:------|----:| | eDiff-I (2022) | 6.95 | | Imagen (2022) | 7.27 | | Kandinsky 2.1 (2023) | 8.21 | | Stable Diffusion 2.1 (2022) | 8.59 | | GigaGAN, 512x512 (2023) | 9.09 | | DALL-E 2 (2022) | 10.39 | | GLIDE (2022) | 12.24 | | Kandinsky 1.0 (2022) | 15.40 | | DALL-E (2021) | 17.89 | | Kandinsky 2.0 (2022) | 20.00 | | GLIGEN (2022) | 21.04 | For more information, please refer to the upcoming technical report. ## BibTex If you find this repository useful in your research, please cite: ``` @misc{kandinsky2.1, title = {kandinsky 2.1}, author = {Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, Denis Dimitrov}, year = {2023}, howpublished = {}, } ```
PrunaAI/ajibawa-2023-Code-Mistral-7B-GGUF-smashed
PrunaAI
"2024-07-01T15:42:25Z"
10,802
0
null
[ "gguf", "pruna-ai", "region:us" ]
null
"2024-07-01T14:56:53Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.com/invite/vb6SmA3hxu) ## This repo contains GGUF versions of the ajibawa-2023/Code-Mistral-7B model. # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help. **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with GGUF. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***What is the model format?*** We use GGUF format. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). # Downloading and running the models You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/): | Quant type | Description | |------------|--------------------------------------------------------------------------------------------| | Q5_K_M | High quality, recommended. | | Q5_K_S | High quality, recommended. | | Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. | | Q4_K_S | Slightly lower quality with more space savings, recommended. | | IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. | | IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. | | Q3_K_L | Lower quality but usable, good for low RAM availability. | | Q3_K_M | Even lower quality. | | IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
| | IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | Q3_K_S | Low quality, not recommended. | | IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | Q2_K | Very low quality but surprisingly usable. | ## How to download GGUF files? **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev - **Option A** - Downloading in `text-generation-webui`: - **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/ajibawa-2023-Code-Mistral-7B-GGUF-smashed and below it, a specific filename to download, such as: Code-Mistral-7B.IQ3_M.gguf. - **Step 2**: Then click Download. - **Option B** - Downloading on the command line (including multiple files at once): - **Step 1**: We recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` - **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download PrunaAI/ajibawa-2023-Code-Mistral-7B-GGUF-smashed Code-Mistral-7B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> Alternatively, you can also download multiple files at once with a pattern: ```shell huggingface-cli download PrunaAI/ajibawa-2023-Code-Mistral-7B-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/ajibawa-2023-Code-Mistral-7B-GGUF-smashed Code-Mistral-7B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## How to run the model in GGUF format? - **Option A** - Introductory example with the `llama.cpp` command. Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Code-Mistral-7B.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) - **Option B** - Running in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp). - **Option C** - Running from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Code-Mistral-7B.IQ3_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<s>[INST] {prompt} [/INST]", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Code-Mistral-7B.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas."
} ] ) ``` - **Option D** - Running with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model that provided the base model before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
Exscientia/IgBert_unpaired
Exscientia
"2024-06-19T16:05:10Z"
10,800
2
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "antibody language model", "antibody", "protein language model", "arxiv:2403.17889", "base_model:Rostlab/prot_bert_bfd", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-03-26T15:43:49Z"
--- tags: - antibody language model - antibody - protein language model base_model: Rostlab/prot_bert_bfd license: mit --- # IgBert unpaired model Model pretrained on protein and antibody sequences using a masked language modeling (MLM) objective. It was introduced in the paper [Large scale paired antibody language models](https://arxiv.org/abs/2403.17889). The model is finetuned from ProtBert-BFD using unpaired antibody sequences from the [Observed Antibody Space](https://opig.stats.ox.ac.uk/webapps/oas/). # Use The model and tokeniser can be loaded using the `transformers` library ```python from transformers import BertModel, BertTokenizer tokeniser = BertTokenizer.from_pretrained("Exscientia/IgBert_unpaired", do_lower_case=False) model = BertModel.from_pretrained("Exscientia/IgBert_unpaired", add_pooling_layer=False) ``` The tokeniser is used to prepare batch inputs ```python # single chain sequences sequences = [ "EVVMTQSPASLSVSPGERATLSCRARASLGISTDLAWYQQRPGQAPRLLIYGASTRATGIPARFSGSGSGTEFTLTISSLQSEDSAVYYCQQYSNWPLTFGGGTKVEIK", "ALTQPASVSGSPGQSITISCTGTSSDVGGYNYVSWYQQHPGKAPKLMIYDVSKRPSGVSNRFSGSKSGNTASLTISGLQSEDEADYYCNSLTSISTWVFGGGTKLTVL" ] # The tokeniser expects input of the form ["E V V M...", "A L T Q..."] sequences = [' '.join(sequence) for sequence in sequences] tokens = tokeniser.batch_encode_plus( sequences, add_special_tokens=True, pad_to_max_length=True, return_tensors="pt", return_special_tokens_mask=True ) ``` Note that the tokeniser adds a `[CLS]` token at the beginning of each sequence, a `[SEP]` token at the end of each sequence and pads using the `[PAD]` token. For example a batch containing sequences `E V V M`, `A L` will be tokenised to `[CLS] E V V M [SEP]` and `[CLS] A L [SEP] [PAD] [PAD]`. Sequence embeddings are generated by feeding tokens through the model ```python output = model( input_ids=tokens['input_ids'], attention_mask=tokens['attention_mask'] ) residue_embeddings = output.last_hidden_state ``` To obtain a sequence representation, the residue tokens can be averaged over like so ```python import torch # mask special tokens before summing over embeddings residue_embeddings[tokens["special_tokens_mask"] == 1] = 0 sequence_embeddings_sum = residue_embeddings.sum(1) # average embedding by dividing sum by sequence lengths sequence_lengths = torch.sum(tokens["special_tokens_mask"] == 0, dim=1) sequence_embeddings = sequence_embeddings_sum / sequence_lengths.unsqueeze(1) ``` For sequence level fine-tuning the model can be loaded with a pooling head by setting `add_pooling_layer=True` and using `output.pooler_output` in the down-stream task.
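As a small usage sketch that is not part of the original card, the averaged sequence embeddings computed above can be compared directly, for example with cosine similarity:

```python
from torch.nn.functional import cosine_similarity

# Illustrative only: similarity between the two antibody sequences embedded above.
score = cosine_similarity(sequence_embeddings[0], sequence_embeddings[1], dim=0)
print(f"cosine similarity: {score.item():.4f}")
```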
mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF
mradermacher
"2024-06-20T17:49:25Z"
10,800
0
transformers
[ "transformers", "gguf", "text-generation", "sft", "llama", "llama-3", "unsloth", "id", "en", "dataset:genesist-logs", "base_model:dwikitheduck/Genesist-8B-EarlyPrototype-0.3", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-20T17:21:45Z"
--- base_model: dwikitheduck/Genesist-8B-EarlyPrototype-0.3 datasets: - genesist-logs language: - id - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - text-generation - sft - llama - llama-3 - unsloth --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/dwikitheduck/Genesist-8B-EarlyPrototype-0.3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | 
[GGUF](https://huggingface.co/mradermacher/Genesist-8B-EarlyPrototype-0.3-GGUF/resolve/main/Genesist-8B-EarlyPrototype-0.3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
jinaai/jina-reranker-v1-turbo-en
jinaai
"2024-06-20T06:50:44Z"
10,793
42
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "bert", "feature-extraction", "reranker", "cross-encoder", "transformers.js", "text-classification", "custom_code", "en", "arxiv:2310.19923", "arxiv:2108.12409", "license:apache-2.0", "region:eu" ]
text-classification
"2024-04-15T05:45:01Z"
--- library_name: transformers license: apache-2.0 language: - en tags: - reranker - cross-encoder - transformers.js pipeline_tag: text-classification --- <br><br> <p align="center"> <img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px"> </p> <p align="center"> <b>Trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b> </p> # jina-reranker-v1-turbo-en This model is designed for **blazing-fast** reranking while maintaining **competitive performance**. What's more, it leverages the power of our [JinaBERT](https://arxiv.org/abs/2310.19923) model as its foundation. `JinaBERT` itself is a unique variant of the BERT architecture that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409). This allows `jina-reranker-v1-turbo-en` to process significantly longer sequences of text compared to other reranking models, up to an impressive **8,192** tokens. To achieve this remarkable speed, `jina-reranker-v1-turbo-en` employs a technique called knowledge distillation. Here, a complex, but slower, model (like our original [jina-reranker-v1-base-en](https://jina.ai/reranker/)) acts as a teacher, condensing its knowledge into a smaller, faster student model. This student retains most of the teacher's knowledge, allowing it to deliver similar accuracy in a fraction of the time. Here's a breakdown of the reranker models we provide: | Model Name | Layers | Hidden Size | Parameters (Millions) | | ------------------------------------------------------------------------------------ | ------ | ----------- | --------------------- | | [jina-reranker-v1-base-en](https://jina.ai/reranker/) | 12 | 768 | 137.0 | | [jina-reranker-v1-turbo-en](https://huggingface.co/jinaai/jina-reranker-v1-turbo-en) | 6 | 384 | 37.8 | | [jina-reranker-v1-tiny-en](https://huggingface.co/jinaai/jina-reranker-v1-tiny-en) | 4 | 384 | 33.0 | > Currently, the `jina-reranker-v1-base-en` model is not available on Hugging Face. You can access it via the [Jina AI Reranker API](https://jina.ai/reranker/). As you can see, the `jina-reranker-v1-turbo-en` offers a balanced approach with **6 layers** and **37.8 million** parameters. This translates to fast search and reranking while preserving a high degree of accuracy. The `jina-reranker-v1-tiny-en` prioritizes speed even further, achieving the fastest inference speeds with its **4-layer**, **33.0 million** parameter architecture. This makes it ideal for scenarios where absolute top accuracy is less crucial. # Usage 1. The easiest way to start using `jina-reranker-v1-turbo-en` is to use Jina AI's [Reranker API](https://jina.ai/reranker/).
```bash curl https://api.jina.ai/v1/rerank \ -H "Content-Type: application/json" \ -H "Authorization: Bearer YOUR_API_KEY" \ -d '{ "model": "jina-reranker-v1-turbo-en", "query": "Organic skincare products for sensitive skin", "documents": [ "Eco-friendly kitchenware for modern homes", "Biodegradable cleaning supplies for eco-conscious consumers", "Organic cotton baby clothes for sensitive skin", "Natural organic skincare range for sensitive skin", "Tech gadgets for smart homes: 2024 edition", "Sustainable gardening tools and compost solutions", "Sensitive skin-friendly facial cleansers and toners", "Organic food wraps and storage solutions", "All-natural pet food for dogs with allergies", "Yoga mats made from recycled materials" ], "top_n": 3 }' ``` 2. Alternatively, you can use the latest version of the `sentence-transformers>=0.27.0` library. You can install it via pip: ```bash pip install -U sentence-transformers ``` Then, you can use the following code to interact with the model: ```python from sentence_transformers import CrossEncoder # Load the model, here we use our turbo sized model model = CrossEncoder("jinaai/jina-reranker-v1-turbo-en", trust_remote_code=True) # Example query and documents query = "Organic skincare products for sensitive skin" documents = [ "Eco-friendly kitchenware for modern homes", "Biodegradable cleaning supplies for eco-conscious consumers", "Organic cotton baby clothes for sensitive skin", "Natural organic skincare range for sensitive skin", "Tech gadgets for smart homes: 2024 edition", "Sustainable gardening tools and compost solutions", "Sensitive skin-friendly facial cleansers and toners", "Organic food wraps and storage solutions", "All-natural pet food for dogs with allergies", "Yoga mats made from recycled materials" ] results = model.rank(query, documents, return_documents=True, top_k=3) ``` 3. You can also use the `transformers` library to interact with the model programmatically. ```python !pip install transformers from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained( 'jinaai/jina-reranker-v1-turbo-en', num_labels=1, trust_remote_code=True ) # Example query and documents query = "Organic skincare products for sensitive skin" documents = [ "Eco-friendly kitchenware for modern homes", "Biodegradable cleaning supplies for eco-conscious consumers", "Organic cotton baby clothes for sensitive skin", "Natural organic skincare range for sensitive skin", "Tech gadgets for smart homes: 2024 edition", "Sustainable gardening tools and compost solutions", "Sensitive skin-friendly facial cleansers and toners", "Organic food wraps and storage solutions", "All-natural pet food for dogs with allergies", "Yoga mats made from recycled materials" ] # construct sentence pairs sentence_pairs = [[query, doc] for doc in documents] scores = model.compute_score(sentence_pairs) ``` 4. You can also use the `transformers.js` library to run the model directly in JavaScript (in-browser, Node.js, Deno, etc.)! 
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using: ```bash npm i @xenova/transformers ``` Then, you can use the following code to interact with the model: ```js import { AutoTokenizer, AutoModelForSequenceClassification } from '@xenova/transformers'; const model_id = 'jinaai/jina-reranker-v1-turbo-en'; const model = await AutoModelForSequenceClassification.from_pretrained(model_id, { quantized: false }); const tokenizer = await AutoTokenizer.from_pretrained(model_id); /** * Performs ranking with the CrossEncoder on the given query and documents. Returns a sorted list with the document indices and scores. * @param {string} query A single query * @param {string[]} documents A list of documents * @param {Object} options Options for ranking * @param {number} [options.top_k=undefined] Return the top-k documents. If undefined, all documents are returned. * @param {number} [options.return_documents=false] If true, also returns the documents. If false, only returns the indices and scores. */ async function rank(query, documents, { top_k = undefined, return_documents = false, } = {}) { const inputs = tokenizer( new Array(documents.length).fill(query), { text_pair: documents, padding: true, truncation: true } ) const { logits } = await model(inputs); return logits.sigmoid().tolist() .map(([score], i) => ({ corpus_id: i, score, ...(return_documents ? { text: documents[i] } : {}) })).sort((a, b) => b.score - a.score).slice(0, top_k); } // Example usage: const query = "Organic skincare products for sensitive skin" const documents = [ "Eco-friendly kitchenware for modern homes", "Biodegradable cleaning supplies for eco-conscious consumers", "Organic cotton baby clothes for sensitive skin", "Natural organic skincare range for sensitive skin", "Tech gadgets for smart homes: 2024 edition", "Sustainable gardening tools and compost solutions", "Sensitive skin-friendly facial cleansers and toners", "Organic food wraps and storage solutions", "All-natural pet food for dogs with allergies", "Yoga mats made from recycled materials", ] const results = await rank(query, documents, { return_documents: true, top_k: 3 }); console.log(results); ``` That's it! You can now use the `jina-reranker-v1-turbo-en` model in your projects. # Evaluation We evaluated Jina Reranker on 3 key benchmarks to ensure top-tier performance and search relevance. | Model Name | NDCG@10 (17 BEIR datasets) | NDCG@10 (5 LoCo datasets) | Hit Rate (LlamaIndex RAG) | | ------------------------------------------- | -------------------------- | ------------------------- | ------------------------- | | `jina-reranker-v1-base-en` | **52.45** | **87.31** | **85.53** | | `jina-reranker-v1-turbo-en` (you are here) | **49.60** | **69.21** | **85.13** | | `jina-reranker-v1-tiny-en` | **48.54** | **70.29** | **85.00** | | `mxbai-rerank-base-v1` | 49.19 | - | 82.50 | | `mxbai-rerank-xsmall-v1` | 48.80 | - | 83.69 | | `ms-marco-MiniLM-L-6-v2` | 48.64 | - | 82.63 | | `ms-marco-MiniLM-L-4-v2` | 47.81 | - | 83.82 | | `bge-reranker-base` | 47.89 | - | 83.03 | **Note:** - `NDCG@10` is a measure of ranking quality, with higher scores indicating better search results. `Hit Rate` measures the percentage of relevant documents that appear in the top 10 search results. - The results of LoCo datasets on other models are not available since they **do not support** long documents more than 512 tokens. 
For more details, please refer to our [benchmarking sheets](https://docs.google.com/spreadsheets/d/1V8pZjENdBBqrKMzZzOWc2aL60wtnR0yrEBY3urfO5P4/edit?usp=sharing). # Contact Join our [Discord community](https://discord.jina.ai/) and chat with other community members about ideas.
mradermacher/Llama-3-RedMagic2-8B-GGUF
mradermacher
"2024-06-19T19:00:42Z"
10,790
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:lemon07r/Llama-3-RedMagic2-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-19T17:01:53Z"
--- base_model: lemon07r/Llama-3-RedMagic2-8B language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/lemon07r/Llama-3-RedMagic2-8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-RedMagic2-8B-GGUF/resolve/main/Llama-3-RedMagic2-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
QuantFactory/llm-compiler-13b-ftd-GGUF
QuantFactory
"2024-06-28T14:25:23Z"
10,789
1
null
[ "gguf", "text-generation", "base_model:facebook/llm-compiler-13b-ftd", "license:other", "region:us" ]
text-generation
"2024-06-28T12:55:41Z"
--- license: other base_model: facebook/llm-compiler-13b-ftd pipeline_tag: text-generation --- # QuantFactory/llm-compiler-13b-ftd-GGUF This is a quantized version of [facebook/llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd) created using llama.cpp. The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). **Notice:** LLM Compiler is licensed under the LLM Compiler License, Copyright © Meta Platforms, Inc. All Rights Reserved. # Introducing Meta Large Language Model Compiler (LLM Compiler), a state-of-the-art LLM for compiler optimization ## Takeaways * LLM Compiler is a state-of-the-art LLM that builds upon Code Llama with improved performance for code optimization and compiler reasoning. * LLM Compiler is free for both research and commercial use. * LLM Compiler is available in two flavors: * _LLM Compiler_, the foundational models, pretrained on over 500B tokens of LLVM-IR, x86_64, ARM, and CUDA assembly codes and trained to predict the effect of LLVM optimizations; * and _LLM Compiler FTD_, which is further fine-tuned to predict the best optimizations for code in LLVM assembly to reduce code size, and to disassemble assembly code to LLVM-IR. * LLM Compiler demonstrates far stronger understanding of compiler optimizations than existing publicly available LLMs, perfectly emulating the compiler 20% of the time. * LLM Compiler FTD sets state-of-the-art results on the tasks of optimization for code size and disassembly. It achieves a 5.24% code size improvement over -Oz vs GPT-4 Turbo 0.03%, and 0.96 round-trip BLEU score on disassembly vs GPT-4 Turbo 0.43. --- LINKS * [LLM Compiler research paper](https://ai.meta.com/research/publications/meta-large-language-model-compiler-foundation-models-of-compiler-optimization/) * Download the LLM Compiler and LLM Compiler FTD models: * [llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b) * [llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd) * [llm-compiler-13b](https://huggingface.co/facebook/llm-compiler-13b) * [llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd) --- We are excited to announce the release of LLM Compiler, a model targeted at code and compiler optimization tasks. LLM Compiler is built on top of our state-of-the-art large language model, Code Llama, adding capabilities to better understand compiler intermediate representations, assembly language and optimization. LLM Compiler is demonstrated on two difficult tasks: optimizing for code size and decompiling from assembly to the compiler’s intermediate representation. We release these foundation models to accelerate the application of LLMs for code optimization tasks and to enhance developer experience. We are releasing LLM Compiler under the [LLM Compiler License Agreement](LICENSE.pdf), which incorporates the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy) for Llama Materials. ## How LLM Compiler works LLM Compiler is a specialization of Code Llama. It is a cutting-edge tool designed to optimize code using deep learning. LLM Compiler has been pre-trained on a vast amount of LLVM assembly (IR), x86_64, ARM, and CUDA assembly codes. LLM Compiler can predict, given a piece of LLVM assembly and a sequence of optimization passes for `opt`, the LLVM optimizer, what the change in code size will be and what the output code will look like after applying these optimizations.
It has ‘understood’ the behavior of the optimizing compiler to such a degree that in many cases it can perfectly replicate its output. These capabilities make it ideally suited to compiler optimization tasks. ![Compiler emulation](readme/emulate.png) In addition to this core functionality and to demonstrate its ability to solve complex compiler optimization problems, LLM Compiler has been fine-tuned for two specific downstream tasks: 1. Predicting the best optimization passes for `opt` to use in order to minimize code size, given a piece of LLVM assembly code. \ ![Autotuning](readme/autotune.png) 2. Generating LLVM IR from a piece of x86_64 or ARM assembly code. \ ![Disassemble](readme/disassemble.png) We are releasing LLM Compiler models in two sizes: 7B and 13B parameters. The models have been trained with a context window of 16,000 tokens. The two models address different serving and latency requirements. The 7B model, for example, can be served on a single GPU and is more suitable for tasks that require low latency, like fine grained optimisation. The 13B model returns the best results. When using the LLM Compiler models, users must abide by our license and acceptable use policy. ![Training](readme/training.png) ## LLM Compiler performance We tested the performance of LLM Compiler models for emulating compiler transformations, predicting optimal pass lists and decompiling intermediate representation on hold out test sets and compared them to Code Llama and GPT-4. We compare LLM Compiler Foundation to Code Llama Base and LLM Compiler FTD to Code Llama Instruct. We evaluate LLM Compiler's ability to emulate compiler optimizations by giving it samples of unoptimized intermediate representation and a randomly generated list of optimizations. We then ask the model to generate the corresponding IR after the optimizations have been applied. In the table below we report the model's accuracy in reproducing the IR we would get from running _opt_. With very little knowledge of IR, Code Llama is unable to achieve high values while the LLM Compiler can generate character-by-character matches of expected assembly in 20% of the cases. <table> <tr> <td>Model </td> <td>Size </td> <td>Accuracy at emulating compiler optimizations </td> </tr> <tr> <td>Code Llama </td> <td>7B </td> <td>1.2% </td> </tr> <tr> <td>Code Llama </td> <td>13B </td> <td>0.8% </td> </tr> <tr> <td>LLM Compiler </td> <td>7B </td> <td>16% </td> </tr> <tr> <td>LLM Compiler </td> <td>13B </td> <td><strong>20%</strong> </td> </tr> </table> In a similar approach we evaluate our model's ability to optimize IR for code size. In this instance, however, we let the model generate the pass list that is to be used on a given unoptimized IR. We then use this pass list to optimize the particular program using _opt_ and record the binary size. The baseline is the binary size of the program when optimized using -Oz. Only LLM Compiler FTD models provide an improvement over -Oz, with the 13B parameter model marginally outperforming the smaller model, generating smaller object files than -Oz in 61% of cases. Lastly, we evaluate disassembly performance by giving the model x86 assembly code and ask it to generate the corresponding IR. We then round-trip the model-generated disassembled IR back down to assembly. This enables us to evaluate accuracy of the disassembly by comparing the BLEU score of the original assembly against the round-trip result. 
LLM Compiler FTD 13B has the highest accuracy of round-tripped assembly (_round trip BLEU_) and most frequently produces perfect disassembly. Code Llama Instruct and GPT-4 Turbo struggle with generating syntactically correct LLVM-IR. <table> <tr> <td>Model </td> <td>Size </td> <td>Code Size Improvement </td> <td>Round trip BLEU </td> </tr> <tr> <td>GPT-4 Turbo </td> <td> </td> <td>-0.01% </td> <td>0.43 </td> </tr> <tr> <td>Code Llama Inst </td> <td>7B </td> <td>-0.49% </td> <td>0.48 </td> </tr> <tr> <td>Code Llama Inst </td> <td>13B </td> <td>-0.42% </td> <td>0.62 </td> </tr> <tr> <td>LLM Compiler FTD </td> <td>7B </td> <td>4.77% </td> <td>0.95 </td> </tr> <tr> <td>LLM Compiler FTD </td> <td>13B </td> <td><strong>4.88%</strong> </td> <td><strong>0.96</strong> </td> </tr> </table> ## Releasing LLM Compiler LLMs are being used to make programming easier. They are beginning to be used to make programs more efficient. At Meta, our conviction is that AI models, especially those designed for coding, thrive best with an open strategy, fostering both innovation and security. Models that are accessible to the public can expedite the creation of novel compiler optimization technologies. In turn, this will allow programs to be more efficient and smaller, enhancing the quality of life for all. By making models such as LLM Compiler available, the whole community can explore their potential, pinpoint problems, and rectify any vulnerabilities. The model weights are available on Hugging Face. ## Responsible use Our research paper provides an in-depth look into the development process of the LLM Compiler, the methods we used for our benchmarking tests, and further insights into the model's limitations. It also discusses the issues faced, the steps we took to mitigate them. Developers are advised to assess their models using evaluation benchmarks specific to compilers. Given that compilers are not bug-free, any suggested compiler optimizations must be rigorously tested. When a model decompiles assembly code, its accuracy should be confirmed. ## The future of generative AI for optimisation LLM Compiler is designed to support compiler researchers and engineers. But there are still many more use cases to support than what our models can serve. We hope that LLM Compiler will inspire others to leverage LLMs to create new innovative tools for research and commercial products. ### Try LLM Compiler today * Download the LLM Compiler and LLM Compiler FTD models: * [llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b) * [llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd) * [llm-compiler-13b](https://huggingface.co/facebook/llm-compiler-13b) * [llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd) * Read the research paper * [LLM Compiler research paper](https://ai.meta.com/research/publications/meta-large-language-model-compiler-foundation-models-of-compiler-optimization/) # **Model Card** LLM Compiler is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 13 billion parameters. This is the repository for the 13 billion parameter foundation model version in the Hugging Face Transformers format. This model is designed for code optimization. Links to other models can be found in the index at the bottom. 
| Number of parameters | Base Model | Fine-tuned for code size and disassembly |
| -------------------- | ---------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ |
| 7B | [facebook/llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b) | [facebook/llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd) |
| 13B | [facebook/llm-compiler-13b](https://huggingface.co/facebook/llm-compiler-13b) | [facebook/llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd) |

## Model Use

To use this model, please make sure to install transformers:

```bash
pip install transformers accelerate
```

Example code using each of the model's compiler capabilities may be found in [llm_compiler_demo.py](llm_compiler_demo.py).

The code below demonstrates default capabilities. You may need to set the HuggingFace access token; see https://huggingface.co/docs/hub/security-tokens.

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "facebook/llm-compiler-13b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    '%3 = alloca i32, align 4',
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)

for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Model Details

*Note: Use of this model is governed by the Meta license. Meta developed and publicly released the LLM Compiler family of large language models (LLMs).*

**Model Developers** Meta

**Variations** LLM Compiler comes in two model sizes of 7B and 13B parameters, in two flavors: the foundation model and the version instruction fine-tuned for code size and disassembly. **This repository contains the 13 billion parameter foundation model.**

**Input** Models input text only.

**Example prompt** See `llm_compiler_demo.py` in the repo for examples of the different use cases.

**Output** Models generate text only.

**Model Architecture** LLM Compiler is an auto-regressive language model that uses an optimized transformer architecture.

**Model Dates** LLM Compiler has been trained between January 2024 and June 2024.

**Status** This is a static model trained on an offline dataset.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** More information can be found in the paper "[Meta Large Language Model Compiler: Foundation Models of Compiler Optimization](https://ai.meta.com/research/publications/meta-large-language-model-compiler-foundation-models-of-compiler-optimization/)".

## Intended Use

**Intended Use Cases** LLM Compiler is intended for commercial and research use in English, relevant programming languages, LLVM IR, x86_64 assembly and ARM assembly.

**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy) and Licensing Agreement for LLM Compiler and its variants.

## Hardware and Software

**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.

**Carbon Footprint** In aggregate, training all LLM Compiler models required 14K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W), not including the training of Code Llama. 100% of the estimated tCO2eq emissions were offset by Meta’s sustainability program.

## Training Data

All experiments reported here and the released models have been trained and fine-tuned using the same data as Code Llama with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/llm-compiler-foundation-models-for-compiler-optimization/) for details).

## Evaluation Results

See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

## Ethical Considerations and Limitations

LLM Compiler and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, LLM Compiler’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of LLM Compiler, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
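As an addendum specific to this GGUF repackaging: the sketch below shows one way to load a quantized file locally with `llama-cpp-python` instead of `transformers`. The file name, context size and sampling settings are assumptions that mirror the pipeline example above; check the repository's file list for the quant you actually downloaded.

```python
# Minimal sketch (assumptions noted above): running a quantized LLM Compiler
# GGUF with llama-cpp-python rather than transformers.
from llama_cpp import Llama

llm = Llama(
    model_path="llm-compiler-13b-ftd.Q4_K_M.gguf",  # hypothetical file name; use the quant you downloaded
    n_ctx=16384,  # the models were trained with a 16,000-token context window
)

output = llm(
    "%3 = alloca i32, align 4",  # same LLVM-IR prompt as the transformers example above
    max_tokens=200,
    temperature=0.1,
)
print(output["choices"][0]["text"])
```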
sshleifer/tiny-ctrl
sshleifer
"2020-05-13T23:21:48Z"
10,788
0
transformers
[ "transformers", "pytorch", "tf", "ctrl", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
Entry not found
apple/mobilevit-xx-small
apple
"2022-08-29T07:57:57Z"
10,788
9
transformers
[ "transformers", "pytorch", "tf", "coreml", "mobilevit", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2110.02178", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-05-30T12:46:35Z"
--- license: other tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # MobileViT (extra extra small-sized model) MobileViT model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE). Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import MobileViTFeatureExtractor, MobileViTForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/mobilevit-xx-small") model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-xx-small") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. ## Training data The MobileViT model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes. ## Training procedure ### Preprocessing Training requires only basic data augmentation, i.e. random resized cropping and horizontal flipping. To learn multi-scale representations without requiring fine-tuning, a multi-scale sampler was used during training, with image sizes randomly sampled from: (160, 160), (192, 192), (256, 256), (288, 288), (320, 320). At inference time, images are resized/rescaled to the same resolution (288x288), and center-cropped at 256x256. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB. 
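For illustration only, here is a rough sketch of those inference-time preprocessing steps in plain NumPy/PIL. The `MobileViTFeatureExtractor` already applies equivalent steps automatically, and the exact resize/crop implementation may differ slightly, so treat this as a description rather than a drop-in replacement.

```python
# Illustrative sketch of the preprocessing described above (handled
# automatically by MobileViTFeatureExtractor; the plain 288x288 resize
# here is an approximation of the actual resize logic).
import numpy as np
from PIL import Image

def preprocess(image: Image.Image) -> np.ndarray:
    image = image.convert("RGB").resize((288, 288))                   # resize to 288x288
    offset = (288 - 256) // 2
    image = image.crop((offset, offset, offset + 256, offset + 256))  # center-crop to 256x256
    pixels = np.asarray(image).astype(np.float32) / 255.0             # normalize pixels to [0, 1]
    pixels = pixels[:, :, ::-1]                                       # flip channel order from RGB to BGR
    return pixels.transpose(2, 0, 1)                                  # (H, W, C) -> (C, H, W)
```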
### Pretraining The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |-------------------|-------------------------|-------------------------|-----------|-------------------------------------------------| | **MobileViT-XXS** | **69.0** | **88.9** | **1.3 M** | https://huggingface.co/apple/mobilevit-xx-small | | MobileViT-XS | 74.8 | 92.3 | 2.3 M | https://huggingface.co/apple/mobilevit-x-small | | MobileViT-S | 78.4 | 94.1 | 5.6 M | https://huggingface.co/apple/mobilevit-small | ### BibTeX entry and citation info ```bibtex @inproceedings{vision-transformer, title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer}, author = {Sachin Mehta and Mohammad Rastegari}, year = {2022}, URL = {https://arxiv.org/abs/2110.02178} } ```
mradermacher/ChatWaifu-GGUF
mradermacher
"2024-06-21T12:57:22Z"
10,787
0
transformers
[ "transformers", "gguf", "nsfw", "Visual novel", "roleplay", "ja", "base_model:spow12/ChatWaifu", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-06-21T05:32:06Z"
--- base_model: spow12/ChatWaifu language: - ja library_name: transformers license: other quantized_by: mradermacher tags: - nsfw - Visual novel - roleplay --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/spow12/ChatWaifu <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/ChatWaifu-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/ChatWaifu-GGUF/resolve/main/ChatWaifu.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
diffusers/controlnet-depth-sdxl-1.0-small
diffusers
"2023-08-16T14:04:15Z"
10,786
15
diffusers
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "controlnet", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
"2023-08-15T19:22:12Z"
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: false
---

# SDXL-controlnet: Depth

These are controlnet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with depth conditioning. This checkpoint is 7x smaller than the original XL controlnet checkpoint. You can find some example images below.

prompt: donald trump, serious look, cigar in the mouth, 70mm, film still, head shot
![open](oppenheimer_small.png)

prompt: spiderman lecture, photorealistic
![images_0](./spiderman-small.png)

prompt: aerial view, a futuristic research complex in a bright foggy jungle, hard lighting
![images_1](./hf_logo_small.png)

prompt: megatron in an apocalyptic world ground, ruined city in the background, photorealistic
![images_2](./megatron_small.png)

## Usage

Make sure to first install the libraries:

```bash
pip install accelerate transformers safetensors diffusers
```

And then we're ready to go:

```python
import torch
import numpy as np
from PIL import Image

from transformers import DPTFeatureExtractor, DPTForDepthEstimation
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers.utils import load_image


depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda")
feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas")
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0-small",
    variant="fp16",
    use_safetensors=True,
    torch_dtype=torch.float16,
).to("cuda")
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16).to("cuda")
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    vae=vae,
    variant="fp16",
    use_safetensors=True,
    torch_dtype=torch.float16,
).to("cuda")
pipe.enable_model_cpu_offload()


def get_depth_map(image):
    image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda")
    with torch.no_grad(), torch.autocast("cuda"):
        depth_map = depth_estimator(image).predicted_depth

    depth_map = torch.nn.functional.interpolate(
        depth_map.unsqueeze(1),
        size=(1024, 1024),
        mode="bicubic",
        align_corners=False,
    )
    depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True)
    depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True)
    depth_map = (depth_map - depth_min) / (depth_max - depth_min)
    image = torch.cat([depth_map] * 3, dim=1)

    image = image.permute(0, 2, 3, 1).cpu().numpy()[0]
    image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8))
    return image


prompt = "stormtrooper lecture, photorealistic"
image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-depth/resolve/main/images/stormtrooper.png")
controlnet_conditioning_scale = 0.5  # recommended for good generalization

depth_image = get_depth_map(image)

images = pipe(
    prompt,
    image=depth_image,
    num_inference_steps=30,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
).images
images[0].save("stormtrooper_grid.png")
```

![](./stormtrooper_grid.png)

For more details, check out the official documentation of [`StableDiffusionXLControlNetPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/controlnet_sdxl).

🚨 Please note that this checkpoint is experimental and there's a lot of room for improvement. We encourage the community to build on top of it, improve it, and provide us with feedback. 🚨

### Training

Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sdxl.md). You can refer to [this script](https://github.com/huggingface/diffusers/blob/7b93c2a882d8e12209fbaeffa51ee2b599ab5349/examples/research_projects/controlnet/train_controlnet_webdataset.py) for full disclosure.

* This checkpoint does not perform distillation. We just use a smaller ControlNet initialized from the SDXL UNet. We encourage the community to try and conduct distillation too. This resource might be of help in [this regard](https://huggingface.co/blog/sd_distillation).
* To learn more about how the ControlNet was initialized, refer to [this code block](https://github.com/huggingface/diffusers/blob/7b93c2a882d8e12209fbaeffa51ee2b599ab5349/examples/research_projects/controlnet/train_controlnet_webdataset.py#L981C1-L999C36).
* It does not have any attention blocks.
* The model works well on most conditioning images, but for more complex conditionings, the bigger checkpoints might be better. We are still working on improving the quality of this checkpoint and looking for feedback from the community.
* We recommend playing around with the `controlnet_conditioning_scale` and `guidance_scale` arguments for potentially better image generation quality (a short sketch of such a sweep follows the training notes below).

#### Training data
The model was trained on 3M images from the LAION aesthetic 6 plus subset, with a batch size of 256 for 50k steps with a constant learning rate of 3e-5.

#### Compute
One 8xA100 machine

#### Mixed precision
FP16
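As referenced above, here is a small sketch, not part of the original card, of sweeping `controlnet_conditioning_scale` together with `guidance_scale`. It assumes `pipe`, `prompt`, and `depth_image` from the usage example earlier in this card are still in scope, and the specific values are only starting points.

```python
# Sweep the two knobs recommended above; reuses `pipe`, `prompt`, and
# `depth_image` from the usage example earlier in this card.
for cond_scale in (0.3, 0.5, 0.7):
    result = pipe(
        prompt,
        image=depth_image,
        num_inference_steps=30,
        controlnet_conditioning_scale=cond_scale,
        guidance_scale=7.5,  # assumed starting point; tune alongside cond_scale
    ).images[0]
    result.save(f"stormtrooper_cond_{cond_scale}.png")
```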
mradermacher/jackie-2.0-full-i1-GGUF
mradermacher
"2024-06-22T13:45:24Z"
10,783
0
transformers
[ "transformers", "gguf", "en", "base_model:satpalsr/jackie-2.0-full", "endpoints_compatible", "region:us" ]
null
"2024-06-22T05:25:16Z"
--- base_model: satpalsr/jackie-2.0-full language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/satpalsr/jackie-2.0-full <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/jackie-2.0-full-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/jackie-2.0-full-i1-GGUF/resolve/main/jackie-2.0-full.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF
mradermacher
"2024-07-02T00:22:33Z"
10,780
0
transformers
[ "transformers", "gguf", "en", "ja", "base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-07-01T20:08:15Z"
--- base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1 language: - en - ja library_name: transformers license: llama3 model_type: llama quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Swallow-8B-v0.1-i1-GGUF/resolve/main/Llama-3-Swallow-8B-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
QuantFactory/Qwen2-7b-Matter-0.1-Slim-A-GGUF
QuantFactory
"2024-06-28T13:14:10Z"
10,775
2
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen2", "trl", "sft", "en", "base_model:unsloth/Qwen2-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-26T07:18:44Z"
---
base_model: unsloth/Qwen2-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---

# QuantFactory/Qwen2-7b-Matter-0.1-Slim-A-GGUF
This is a quantized version of [munish0838/Qwen2-7b-Matter-0.1-Slim-A](https://huggingface.co/munish0838/Qwen2-7b-Matter-0.1-Slim-A) created using llama.cpp

# Model Description
- **Developed by:** munish0838
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-7b-bnb-4bit
- **Dataset Used:** [0-hero/Matter-0.1-Slim-7B-A](https://huggingface.co/0-hero/Matter-0.1-Slim-7B-A)

This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
SakuraLLM/Sakura-32B-Qwen2beta-v0.10pre1-GGUF
SakuraLLM
"2024-05-14T20:24:59Z"
10,774
3
null
[ "gguf", "license:cc-by-nc-sa-4.0", "region:us" ]
null
"2024-05-10T12:47:32Z"
--- license: cc-by-nc-sa-4.0 ---
mradermacher/EtherealRainbow-v0.3-8B-GGUF
mradermacher
"2024-06-20T11:09:15Z"
10,771
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "not-for-all-audiences", "en", "base_model:invisietch/EtherealRainbow-v0.3-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-20T02:24:16Z"
--- base_model: invisietch/EtherealRainbow-v0.3-8B language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - mergekit - merge - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.3-8B-GGUF/resolve/main/EtherealRainbow-v0.3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/AceGPT-v1.5-13B-Chat-GGUF
mradermacher
"2024-06-23T02:29:47Z"
10,769
0
transformers
[ "transformers", "gguf", "ar", "zh", "en", "base_model:FreedomIntelligence/AceGPT-v1.5-13B-Chat", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T01:42:12Z"
--- base_model: FreedomIntelligence/AceGPT-v1.5-13B-Chat language: - ar - zh - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/FreedomIntelligence/AceGPT-v1.5-13B-Chat <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.IQ3_XS.gguf) | IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.IQ3_M.gguf) | IQ3_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q3_K_L.gguf) | Q3_K_L | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.IQ4_XS.gguf) | IQ4_XS | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q4_K_M.gguf) | Q4_K_M | 8.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q5_K_S.gguf) | Q5_K_S | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q5_K_M.gguf) | Q5_K_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q6_K.gguf) | Q6_K | 10.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
joeddav/xlm-roberta-large-xnli
joeddav
"2023-03-22T18:23:34Z"
10,765
180
transformers
[ "transformers", "pytorch", "tf", "xlm-roberta", "text-classification", "tensorflow", "zero-shot-classification", "multilingual", "en", "fr", "es", "de", "el", "bg", "ru", "tr", "ar", "vi", "th", "zh", "hi", "sw", "ur", "dataset:multi_nli", "dataset:xnli", "arxiv:1911.02116", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
"2022-03-02T23:29:05Z"
---
language:
- multilingual
- en
- fr
- es
- de
- el
- bg
- ru
- tr
- ar
- vi
- th
- zh
- hi
- sw
- ur
tags:
- text-classification
- pytorch
- tensorflow
datasets:
- multi_nli
- xnli
license: mit
pipeline_tag: zero-shot-classification
widget:
- text: "За кого вы голосуете в 2020 году?"
  candidate_labels: "politique étrangère, Europe, élections, affaires, politique"
  multi_class: true
- text: "لمن تصوت في 2020؟"
  candidate_labels: "السياسة الخارجية, أوروبا, الانتخابات, الأعمال, السياسة"
  multi_class: true
- text: "2020'de kime oy vereceksiniz?"
  candidate_labels: "dış politika, Avrupa, seçimler, ticaret, siyaset"
  multi_class: true
---

# xlm-roberta-large-xnli

## Model Description

This model takes [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tunes it on a combination of NLI data in 15 languages. It is intended to be used for zero-shot text classification, such as with the Hugging Face [ZeroShotClassificationPipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline).

## Intended Usage

This model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on XNLI, which is a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:

- English
- French
- Spanish
- German
- Greek
- Bulgarian
- Russian
- Turkish
- Arabic
- Vietnamese
- Thai
- Chinese
- Hindi
- Swahili
- Urdu

Since the base model was pre-trained on 100 different languages, the model has shown some effectiveness in languages beyond those listed above as well. See the full list of pre-trained languages in appendix A of the [XLM RoBERTa paper](https://arxiv.org/abs/1911.02116).

For English-only classification, it is recommended to use [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or [a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).

#### With the zero-shot classification pipeline

The model can be loaded with the `zero-shot-classification` pipeline like so:

```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli")
```

You can then classify in any of the above languages. You can even pass the labels in one language and the sequence to classify in another:

```python
# we will classify the Russian translation of "Who are you voting for in 2020?"
sequence_to_classify = "За кого вы голосуете в 2020 году?"
# we can specify candidate labels in Russian or any other language above:
candidate_labels = ["Europe", "public health", "politics"]
classifier(sequence_to_classify, candidate_labels)
# {'labels': ['politics', 'Europe', 'public health'],
#  'scores': [0.9048484563827515, 0.05722189322113991, 0.03792969882488251],
#  'sequence': 'За кого вы голосуете в 2020 году?'}
```

The default hypothesis template is the English `This text is {}`. If you are working strictly within one language, it may be worthwhile to translate this to the language you are working with:

```python
sequence_to_classify = "¿A quién vas a votar en 2020?"
candidate_labels = ["Europa", "salud pública", "política"]
hypothesis_template = "Este ejemplo es {}."
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
# {'labels': ['política', 'Europa', 'salud pública'],
#  'scores': [0.9109585881233215, 0.05954807624220848, 0.029493311420083046],
#  'sequence': '¿A quién vas a votar en 2020?'}
```

#### With manual PyTorch

```python
# pose the sequence as an NLI premise and the label as a hypothesis
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
nli_model = AutoModelForSequenceClassification.from_pretrained('joeddav/xlm-roberta-large-xnli').to(device)
tokenizer = AutoTokenizer.from_pretrained('joeddav/xlm-roberta-large-xnli')

premise = "За кого вы голосуете в 2020 году?"  # the sequence to classify
label = "politics"                              # a candidate label
hypothesis = f'This example is {label}.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     truncation='only_first')
logits = nli_model(x.to(device))[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:,[0,2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:,1]
```

## Training

This model was pre-trained on a set of 100 languages, as described in [the original paper](https://arxiv.org/abs/1911.02116). It was then fine-tuned on the task of NLI on the concatenated MNLI train set and the XNLI validation and test sets. Finally, it was trained for one additional epoch on only XNLI data where the translations for the premise and hypothesis are shuffled such that the premise and hypothesis for each example come from the same original English example but the premise and hypothesis are of different languages.
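Building on the manual PyTorch approach above, the following is a small sketch added here for illustration (not part of the original card): it loops the same entailment-vs-contradiction trick over several candidate labels and ranks them, reusing the Spanish example and template from earlier.

```python
# Added sketch: score several candidate labels with the entailment-vs-
# contradiction trick from the manual PyTorch example above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
nli_model = AutoModelForSequenceClassification.from_pretrained("joeddav/xlm-roberta-large-xnli").to(device)
tokenizer = AutoTokenizer.from_pretrained("joeddav/xlm-roberta-large-xnli")

premise = "¿A quién vas a votar en 2020?"
candidate_labels = ["Europa", "salud pública", "política"]
hypothesis_template = "Este ejemplo es {}."  # same Spanish template as above

scores = []
for label in candidate_labels:
    x = tokenizer.encode(premise, hypothesis_template.format(label),
                         return_tensors="pt", truncation="only_first")
    with torch.no_grad():
        logits = nli_model(x.to(device))[0]
    # probability of entailment (index 2) vs contradiction (index 0)
    scores.append(logits[:, [0, 2]].softmax(dim=1)[0, 1].item())

for label, score in sorted(zip(candidate_labels, scores), key=lambda t: -t[1]):
    print(f"{label}: {score:.3f}")
```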
DeepChem/ChemBERTa-77M-MLM
DeepChem
"2022-01-20T18:02:38Z"
10,762
12
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:04Z"
Entry not found
neuralmagic/bge-small-en-v1.5-quant
neuralmagic
"2023-11-13T17:04:15Z"
10,760
9
transformers
[ "transformers", "onnx", "bert", "feature-extraction", "mteb", "sparse", "sparsity", "quantized", "embeddings", "int8", "deepsparse", "en", "license:mit", "model-index", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
"2023-09-27T23:33:48Z"
--- tags: - mteb - sparse - sparsity - quantized - onnx - embeddings - int8 - deepsparse model-index: - name: bge-small-en-v1.5-quant results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 74.19402985074626 - type: ap value: 37.562368912364036 - type: f1 value: 68.47046663470138 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.89432499999998 - type: ap value: 88.64572979375352 - type: f1 value: 91.87171177424113 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 46.71799999999999 - type: f1 value: 46.25791412217894 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 34.424 - type: map_at_10 value: 49.63 - type: map_at_100 value: 50.477000000000004 - type: map_at_1000 value: 50.483 - type: map_at_3 value: 45.389 - type: map_at_5 value: 47.888999999999996 - type: mrr_at_1 value: 34.78 - type: mrr_at_10 value: 49.793 - type: mrr_at_100 value: 50.632999999999996 - type: mrr_at_1000 value: 50.638000000000005 - type: mrr_at_3 value: 45.531 - type: mrr_at_5 value: 48.010000000000005 - type: ndcg_at_1 value: 34.424 - type: ndcg_at_10 value: 57.774 - type: ndcg_at_100 value: 61.248000000000005 - type: ndcg_at_1000 value: 61.378 - type: ndcg_at_3 value: 49.067 - type: ndcg_at_5 value: 53.561 - type: precision_at_1 value: 34.424 - type: precision_at_10 value: 8.364 - type: precision_at_100 value: 0.985 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 19.915 - type: precision_at_5 value: 14.124999999999998 - type: recall_at_1 value: 34.424 - type: recall_at_10 value: 83.64200000000001 - type: recall_at_100 value: 98.506 - type: recall_at_1000 value: 99.502 - type: recall_at_3 value: 59.744 - type: recall_at_5 value: 70.626 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 46.91874634333147 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 39.1201020016146 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 62.40334669601722 - type: mrr value: 75.33175042870333 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 88.00433892980047 - type: cos_sim_spearman value: 86.65558896421105 - type: euclidean_pearson value: 85.98927300398377 - type: euclidean_spearman value: 86.0905158476729 - type: manhattan_pearson value: 86.0272425017433 - type: manhattan_spearman value: 85.8929209838941 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification 
config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 85.1038961038961 - type: f1 value: 85.06851570045757 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 37.42637694389153 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 33.89440321125906 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.111000000000004 - type: map_at_10 value: 39.067 - type: map_at_100 value: 40.519 - type: map_at_1000 value: 40.652 - type: map_at_3 value: 35.571999999999996 - type: map_at_5 value: 37.708999999999996 - type: mrr_at_1 value: 34.335 - type: mrr_at_10 value: 44.868 - type: mrr_at_100 value: 45.607 - type: mrr_at_1000 value: 45.655 - type: mrr_at_3 value: 41.798 - type: mrr_at_5 value: 43.786 - type: ndcg_at_1 value: 34.335 - type: ndcg_at_10 value: 45.513 - type: ndcg_at_100 value: 51.037 - type: ndcg_at_1000 value: 53.171 - type: ndcg_at_3 value: 40.131 - type: ndcg_at_5 value: 43.027 - type: precision_at_1 value: 34.335 - type: precision_at_10 value: 8.784 - type: precision_at_100 value: 1.4460000000000002 - type: precision_at_1000 value: 0.193 - type: precision_at_3 value: 19.361 - type: precision_at_5 value: 14.249 - type: recall_at_1 value: 28.111000000000004 - type: recall_at_10 value: 58.372 - type: recall_at_100 value: 81.631 - type: recall_at_1000 value: 95.192 - type: recall_at_3 value: 42.863 - type: recall_at_5 value: 50.924 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.437 - type: map_at_10 value: 37.942 - type: map_at_100 value: 39.108 - type: map_at_1000 value: 39.242 - type: map_at_3 value: 35.419 - type: map_at_5 value: 36.825 - type: mrr_at_1 value: 35.35 - type: mrr_at_10 value: 43.855 - type: mrr_at_100 value: 44.543 - type: mrr_at_1000 value: 44.588 - type: mrr_at_3 value: 41.826 - type: mrr_at_5 value: 42.937 - type: ndcg_at_1 value: 35.35 - type: ndcg_at_10 value: 43.32 - type: ndcg_at_100 value: 47.769 - type: ndcg_at_1000 value: 49.979 - type: ndcg_at_3 value: 39.709 - type: ndcg_at_5 value: 41.316 - type: precision_at_1 value: 35.35 - type: precision_at_10 value: 7.994 - type: precision_at_100 value: 1.323 - type: precision_at_1000 value: 0.182 - type: precision_at_3 value: 18.96 - type: precision_at_5 value: 13.236 - type: recall_at_1 value: 28.437 - type: recall_at_10 value: 52.531000000000006 - type: recall_at_100 value: 71.79299999999999 - type: recall_at_1000 value: 85.675 - type: recall_at_3 value: 41.605 - type: recall_at_5 value: 46.32 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 37.364999999999995 - type: map_at_10 value: 49.324 - type: map_at_100 value: 50.458999999999996 - type: map_at_1000 value: 50.512 - type: map_at_3 value: 45.96 - type: map_at_5 value: 47.934 - type: mrr_at_1 value: 43.009 - type: mrr_at_10 value: 52.946000000000005 - type: mrr_at_100 value: 53.74100000000001 - type: mrr_at_1000 value: 
53.76800000000001 - type: mrr_at_3 value: 50.554 - type: mrr_at_5 value: 51.964 - type: ndcg_at_1 value: 43.009 - type: ndcg_at_10 value: 55.143 - type: ndcg_at_100 value: 59.653999999999996 - type: ndcg_at_1000 value: 60.805 - type: ndcg_at_3 value: 49.605 - type: ndcg_at_5 value: 52.437 - type: precision_at_1 value: 43.009 - type: precision_at_10 value: 8.984 - type: precision_at_100 value: 1.209 - type: precision_at_1000 value: 0.135 - type: precision_at_3 value: 22.09 - type: precision_at_5 value: 15.423 - type: recall_at_1 value: 37.364999999999995 - type: recall_at_10 value: 68.657 - type: recall_at_100 value: 88.155 - type: recall_at_1000 value: 96.48400000000001 - type: recall_at_3 value: 54.186 - type: recall_at_5 value: 60.848 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.827 - type: map_at_10 value: 31.721 - type: map_at_100 value: 32.812999999999995 - type: map_at_1000 value: 32.89 - type: map_at_3 value: 29.238999999999997 - type: map_at_5 value: 30.584 - type: mrr_at_1 value: 25.650000000000002 - type: mrr_at_10 value: 33.642 - type: mrr_at_100 value: 34.595 - type: mrr_at_1000 value: 34.650999999999996 - type: mrr_at_3 value: 31.205 - type: mrr_at_5 value: 32.499 - type: ndcg_at_1 value: 25.650000000000002 - type: ndcg_at_10 value: 36.366 - type: ndcg_at_100 value: 41.766 - type: ndcg_at_1000 value: 43.735 - type: ndcg_at_3 value: 31.447000000000003 - type: ndcg_at_5 value: 33.701 - type: precision_at_1 value: 25.650000000000002 - type: precision_at_10 value: 5.582 - type: precision_at_100 value: 0.872 - type: precision_at_1000 value: 0.108 - type: precision_at_3 value: 13.107 - type: precision_at_5 value: 9.198 - type: recall_at_1 value: 23.827 - type: recall_at_10 value: 48.9 - type: recall_at_100 value: 73.917 - type: recall_at_1000 value: 88.787 - type: recall_at_3 value: 35.498000000000005 - type: recall_at_5 value: 40.929 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.47 - type: map_at_10 value: 22.679 - type: map_at_100 value: 23.823 - type: map_at_1000 value: 23.94 - type: map_at_3 value: 20.535999999999998 - type: map_at_5 value: 21.61 - type: mrr_at_1 value: 18.781 - type: mrr_at_10 value: 26.979 - type: mrr_at_100 value: 27.945999999999998 - type: mrr_at_1000 value: 28.016000000000002 - type: mrr_at_3 value: 24.648 - type: mrr_at_5 value: 25.947 - type: ndcg_at_1 value: 18.781 - type: ndcg_at_10 value: 27.55 - type: ndcg_at_100 value: 33.176 - type: ndcg_at_1000 value: 36.150999999999996 - type: ndcg_at_3 value: 23.456 - type: ndcg_at_5 value: 25.16 - type: precision_at_1 value: 18.781 - type: precision_at_10 value: 5.050000000000001 - type: precision_at_100 value: 0.9039999999999999 - type: precision_at_1000 value: 0.129 - type: precision_at_3 value: 11.235000000000001 - type: precision_at_5 value: 8.01 - type: recall_at_1 value: 15.47 - type: recall_at_10 value: 38.446000000000005 - type: recall_at_100 value: 63.199000000000005 - type: recall_at_1000 value: 84.719 - type: recall_at_3 value: 26.687 - type: recall_at_5 value: 31.196 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.285999999999998 - type: map_at_10 value: 35.701 - type: map_at_100 value: 37.062 - type: map_at_1000 value: 
37.175999999999995 - type: map_at_3 value: 32.65 - type: map_at_5 value: 34.129 - type: mrr_at_1 value: 32.05 - type: mrr_at_10 value: 41.105000000000004 - type: mrr_at_100 value: 41.996 - type: mrr_at_1000 value: 42.047000000000004 - type: mrr_at_3 value: 38.466 - type: mrr_at_5 value: 39.766 - type: ndcg_at_1 value: 32.05 - type: ndcg_at_10 value: 41.516999999999996 - type: ndcg_at_100 value: 47.083999999999996 - type: ndcg_at_1000 value: 49.309 - type: ndcg_at_3 value: 36.254999999999995 - type: ndcg_at_5 value: 38.346999999999994 - type: precision_at_1 value: 32.05 - type: precision_at_10 value: 7.536 - type: precision_at_100 value: 1.202 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 17.004 - type: precision_at_5 value: 11.973 - type: recall_at_1 value: 26.285999999999998 - type: recall_at_10 value: 53.667 - type: recall_at_100 value: 76.97 - type: recall_at_1000 value: 91.691 - type: recall_at_3 value: 38.571 - type: recall_at_5 value: 44.131 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.595000000000002 - type: map_at_10 value: 31.352000000000004 - type: map_at_100 value: 32.652 - type: map_at_1000 value: 32.774 - type: map_at_3 value: 28.238000000000003 - type: map_at_5 value: 30.178 - type: mrr_at_1 value: 27.626 - type: mrr_at_10 value: 36.351 - type: mrr_at_100 value: 37.297000000000004 - type: mrr_at_1000 value: 37.362 - type: mrr_at_3 value: 33.885 - type: mrr_at_5 value: 35.358000000000004 - type: ndcg_at_1 value: 27.626 - type: ndcg_at_10 value: 36.795 - type: ndcg_at_100 value: 42.808 - type: ndcg_at_1000 value: 45.417 - type: ndcg_at_3 value: 31.744 - type: ndcg_at_5 value: 34.407 - type: precision_at_1 value: 27.626 - type: precision_at_10 value: 6.781 - type: precision_at_100 value: 1.159 - type: precision_at_1000 value: 0.155 - type: precision_at_3 value: 15.221000000000002 - type: precision_at_5 value: 11.279 - type: recall_at_1 value: 22.595000000000002 - type: recall_at_10 value: 48.126000000000005 - type: recall_at_100 value: 74.24300000000001 - type: recall_at_1000 value: 92.276 - type: recall_at_3 value: 34.346 - type: recall_at_5 value: 41.065000000000005 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.237000000000002 - type: map_at_10 value: 28.626 - type: map_at_100 value: 29.494999999999997 - type: map_at_1000 value: 29.587999999999997 - type: map_at_3 value: 26.747 - type: map_at_5 value: 27.903 - type: mrr_at_1 value: 24.847 - type: mrr_at_10 value: 31.091 - type: mrr_at_100 value: 31.91 - type: mrr_at_1000 value: 31.977 - type: mrr_at_3 value: 29.218 - type: mrr_at_5 value: 30.391000000000002 - type: ndcg_at_1 value: 24.847 - type: ndcg_at_10 value: 32.452999999999996 - type: ndcg_at_100 value: 37.009 - type: ndcg_at_1000 value: 39.425 - type: ndcg_at_3 value: 28.848000000000003 - type: ndcg_at_5 value: 30.752000000000002 - type: precision_at_1 value: 24.847 - type: precision_at_10 value: 4.968999999999999 - type: precision_at_100 value: 0.8009999999999999 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 12.321 - type: precision_at_5 value: 8.62 - type: recall_at_1 value: 22.237000000000002 - type: recall_at_10 value: 41.942 - type: recall_at_100 value: 62.907000000000004 - type: recall_at_1000 value: 81.035 - type: recall_at_3 value: 32.05 - type: recall_at_5 
value: 36.695 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 14.835 - type: map_at_10 value: 21.124000000000002 - type: map_at_100 value: 22.133 - type: map_at_1000 value: 22.258 - type: map_at_3 value: 19.076999999999998 - type: map_at_5 value: 20.18 - type: mrr_at_1 value: 17.791 - type: mrr_at_10 value: 24.438 - type: mrr_at_100 value: 25.332 - type: mrr_at_1000 value: 25.417 - type: mrr_at_3 value: 22.425 - type: mrr_at_5 value: 23.524 - type: ndcg_at_1 value: 17.791 - type: ndcg_at_10 value: 25.27 - type: ndcg_at_100 value: 30.362000000000002 - type: ndcg_at_1000 value: 33.494 - type: ndcg_at_3 value: 21.474 - type: ndcg_at_5 value: 23.189999999999998 - type: precision_at_1 value: 17.791 - type: precision_at_10 value: 4.58 - type: precision_at_100 value: 0.839 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 10.071 - type: precision_at_5 value: 7.337000000000001 - type: recall_at_1 value: 14.835 - type: recall_at_10 value: 34.534 - type: recall_at_100 value: 57.812 - type: recall_at_1000 value: 80.467 - type: recall_at_3 value: 23.938000000000002 - type: recall_at_5 value: 28.269 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.400000000000002 - type: map_at_10 value: 31.55 - type: map_at_100 value: 32.72 - type: map_at_1000 value: 32.830999999999996 - type: map_at_3 value: 28.942 - type: map_at_5 value: 30.403000000000002 - type: mrr_at_1 value: 27.705000000000002 - type: mrr_at_10 value: 35.778 - type: mrr_at_100 value: 36.705 - type: mrr_at_1000 value: 36.773 - type: mrr_at_3 value: 33.458 - type: mrr_at_5 value: 34.778 - type: ndcg_at_1 value: 27.705000000000002 - type: ndcg_at_10 value: 36.541000000000004 - type: ndcg_at_100 value: 42.016999999999996 - type: ndcg_at_1000 value: 44.571 - type: ndcg_at_3 value: 31.845000000000002 - type: ndcg_at_5 value: 34.056 - type: precision_at_1 value: 27.705000000000002 - type: precision_at_10 value: 6.166 - type: precision_at_100 value: 0.993 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 14.302999999999999 - type: precision_at_5 value: 10.187 - type: recall_at_1 value: 23.400000000000002 - type: recall_at_10 value: 47.61 - type: recall_at_100 value: 71.69200000000001 - type: recall_at_1000 value: 89.652 - type: recall_at_3 value: 35.026 - type: recall_at_5 value: 40.48 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.409 - type: map_at_10 value: 29.642000000000003 - type: map_at_100 value: 31.213 - type: map_at_1000 value: 31.418000000000003 - type: map_at_3 value: 26.811 - type: map_at_5 value: 28.433999999999997 - type: mrr_at_1 value: 25.494 - type: mrr_at_10 value: 33.735 - type: mrr_at_100 value: 34.791 - type: mrr_at_1000 value: 34.848 - type: mrr_at_3 value: 31.225 - type: mrr_at_5 value: 32.688 - type: ndcg_at_1 value: 25.494 - type: ndcg_at_10 value: 35.038000000000004 - type: ndcg_at_100 value: 41.499 - type: ndcg_at_1000 value: 44.183 - type: ndcg_at_3 value: 30.305 - type: ndcg_at_5 value: 32.607 - type: precision_at_1 value: 25.494 - type: precision_at_10 value: 6.739000000000001 - type: precision_at_100 value: 1.439 - type: precision_at_1000 value: 0.233 - type: precision_at_3 value: 14.163 - type: precision_at_5 value: 
10.474 - type: recall_at_1 value: 21.409 - type: recall_at_10 value: 46.033 - type: recall_at_100 value: 74.932 - type: recall_at_1000 value: 92.35600000000001 - type: recall_at_3 value: 32.858 - type: recall_at_5 value: 38.675 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.145 - type: map_at_10 value: 24.712 - type: map_at_100 value: 25.813000000000002 - type: map_at_1000 value: 25.935000000000002 - type: map_at_3 value: 22.33 - type: map_at_5 value: 23.524 - type: mrr_at_1 value: 19.224 - type: mrr_at_10 value: 26.194 - type: mrr_at_100 value: 27.208 - type: mrr_at_1000 value: 27.3 - type: mrr_at_3 value: 23.906 - type: mrr_at_5 value: 24.988 - type: ndcg_at_1 value: 19.224 - type: ndcg_at_10 value: 29.015 - type: ndcg_at_100 value: 34.224 - type: ndcg_at_1000 value: 37.235 - type: ndcg_at_3 value: 24.22 - type: ndcg_at_5 value: 26.176 - type: precision_at_1 value: 19.224 - type: precision_at_10 value: 4.713 - type: precision_at_100 value: 0.787 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 10.290000000000001 - type: precision_at_5 value: 7.32 - type: recall_at_1 value: 18.145 - type: recall_at_10 value: 40.875 - type: recall_at_100 value: 64.371 - type: recall_at_1000 value: 86.67399999999999 - type: recall_at_3 value: 27.717000000000002 - type: recall_at_5 value: 32.381 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.845 - type: f1 value: 41.70045120106269 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 89.3476 - type: ap value: 85.26891728027032 - type: f1 value: 89.33488973832894 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.67441860465115 - type: f1 value: 92.48821366022861 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.02872777017784 - type: f1 value: 57.28822860484337 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.01479488903833 - type: f1 value: 71.83716204573571 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.95897780766644 - type: f1 value: 77.80380046125542 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.897956840478948 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.71493744677591 - task: type: Reranking 
dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.279419910393734 - type: mrr value: 32.41989483774563 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 50.49612915002382 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 60.29912718965653 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.86793477948164 - type: cos_sim_spearman value: 79.43675709317894 - type: euclidean_pearson value: 81.42564463337872 - type: euclidean_spearman value: 79.39138648510273 - type: manhattan_pearson value: 81.31167449689285 - type: manhattan_spearman value: 79.28411420758785 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.43490408077298 - type: cos_sim_spearman value: 76.16878340109265 - type: euclidean_pearson value: 80.6016219080782 - type: euclidean_spearman value: 75.67063072565917 - type: manhattan_pearson value: 80.7238920179759 - type: manhattan_spearman value: 75.85631683403953 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 83.03882477767792 - type: cos_sim_spearman value: 84.15171505206217 - type: euclidean_pearson value: 84.11692506470922 - type: euclidean_spearman value: 84.78589046217311 - type: manhattan_pearson value: 83.98651139454486 - type: manhattan_spearman value: 84.64928563751276 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 83.11158600428418 - type: cos_sim_spearman value: 81.48561519933875 - type: euclidean_pearson value: 83.21025907155807 - type: euclidean_spearman value: 81.68699235487654 - type: manhattan_pearson value: 83.16704771658094 - type: manhattan_spearman value: 81.7133110412898 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.1514510686502 - type: cos_sim_spearman value: 88.11449450494452 - type: euclidean_pearson value: 87.75854949349939 - type: euclidean_spearman value: 88.4055148221637 - type: manhattan_pearson value: 87.71487828059706 - type: manhattan_spearman value: 88.35301381116254 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.36838640113687 - type: cos_sim_spearman value: 84.98776974283366 - type: euclidean_pearson value: 84.0617526427129 - type: euclidean_spearman value: 85.04234805662242 - type: manhattan_pearson value: 83.87433162971784 - type: manhattan_spearman value: 84.87174280390242 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: 
af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.72465270691285 - type: cos_sim_spearman value: 87.97672332532184 - type: euclidean_pearson value: 88.78764701492182 - type: euclidean_spearman value: 88.3509718074474 - type: manhattan_pearson value: 88.73024739256215 - type: manhattan_spearman value: 88.24149566970154 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.65195562203238 - type: cos_sim_spearman value: 65.0726777678982 - type: euclidean_pearson value: 65.84698245675273 - type: euclidean_spearman value: 65.13121502162804 - type: manhattan_pearson value: 65.96149904857049 - type: manhattan_spearman value: 65.39983948112955 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.2642818050049 - type: cos_sim_spearman value: 86.30633382439257 - type: euclidean_pearson value: 86.46510435905633 - type: euclidean_spearman value: 86.62650496446 - type: manhattan_pearson value: 86.2546330637872 - type: manhattan_spearman value: 86.46309860938591 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 85.009977767778 - type: mrr value: 95.59795143128476 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.84257425742574 - type: cos_sim_ap value: 96.25445889914926 - type: cos_sim_f1 value: 92.03805708562844 - type: cos_sim_precision value: 92.1765295887663 - type: cos_sim_recall value: 91.9 - type: dot_accuracy value: 99.83069306930693 - type: dot_ap value: 96.00517778550396 - type: dot_f1 value: 91.27995920448751 - type: dot_precision value: 93.1321540062435 - type: dot_recall value: 89.5 - type: euclidean_accuracy value: 99.84455445544555 - type: euclidean_ap value: 96.14761524546034 - type: euclidean_f1 value: 91.97751660705163 - type: euclidean_precision value: 94.04388714733543 - type: euclidean_recall value: 90 - type: manhattan_accuracy value: 99.84158415841584 - type: manhattan_ap value: 96.17014673429341 - type: manhattan_f1 value: 91.93790686029043 - type: manhattan_precision value: 92.07622868605817 - type: manhattan_recall value: 91.8 - type: max_accuracy value: 99.84455445544555 - type: max_ap value: 96.25445889914926 - type: max_f1 value: 92.03805708562844 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 59.26454683321409 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.75520575713765 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 52.74607778008495 - type: mrr value: 53.55101699770818 - task: type: 
Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 69.5008 - type: ap value: 13.64158304183089 - type: f1 value: 53.50073331072236 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.01980758347483 - type: f1 value: 60.35679678249753 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 45.09419243325077 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.68874053764081 - type: cos_sim_ap value: 73.26334732095694 - type: cos_sim_f1 value: 68.01558376272465 - type: cos_sim_precision value: 64.93880489560834 - type: cos_sim_recall value: 71.39841688654354 - type: dot_accuracy value: 84.71121177802945 - type: dot_ap value: 70.33606362522605 - type: dot_f1 value: 65.0887573964497 - type: dot_precision value: 63.50401606425703 - type: dot_recall value: 66.75461741424802 - type: euclidean_accuracy value: 85.80795136198367 - type: euclidean_ap value: 73.43201285001163 - type: euclidean_f1 value: 68.33166833166834 - type: euclidean_precision value: 64.86486486486487 - type: euclidean_recall value: 72.18997361477572 - type: manhattan_accuracy value: 85.62317458425225 - type: manhattan_ap value: 73.21212085536185 - type: manhattan_f1 value: 68.01681314482232 - type: manhattan_precision value: 65.74735286875153 - type: manhattan_recall value: 70.44854881266491 - type: max_accuracy value: 85.80795136198367 - type: max_ap value: 73.43201285001163 - type: max_f1 value: 68.33166833166834 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.81709162882757 - type: cos_sim_ap value: 85.63540257309367 - type: cos_sim_f1 value: 77.9091382258904 - type: cos_sim_precision value: 75.32710280373833 - type: cos_sim_recall value: 80.67446874037573 - type: dot_accuracy value: 88.04478596654636 - type: dot_ap value: 84.16371725220706 - type: dot_f1 value: 76.45949643213666 - type: dot_precision value: 73.54719396827655 - type: dot_recall value: 79.61194949183862 - type: euclidean_accuracy value: 88.9296386851399 - type: euclidean_ap value: 85.71894615274715 - type: euclidean_f1 value: 78.12952767313823 - type: euclidean_precision value: 73.7688098495212 - type: euclidean_recall value: 83.03818909762857 - type: manhattan_accuracy value: 88.89276982186519 - type: manhattan_ap value: 85.6838514059479 - type: manhattan_f1 value: 78.06861875184856 - type: manhattan_precision value: 75.09246088193457 - type: manhattan_recall value: 81.29042192793348 - type: max_accuracy value: 88.9296386851399 - type: max_ap value: 85.71894615274715 - type: max_f1 value: 78.12952767313823 license: mit language: - en --- # bge-small-en-v1.5-quant <div> <img src="https://huggingface.co/zeroshot/bge-small-en-v1.5-quant/resolve/main/latency.png" alt="latency" 
width="500" style="display:inline-block; margin-right:10px;"/> </div> [DeepSparse](https://github.com/neuralmagic/deepsparse) is able to improve latency performance on a 10 core laptop by 3X and up to 5X on a 16 core AWS instance. ## Usage This is the quantized (INT8) ONNX variant of the [bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) embeddings model accelerated with [Sparsify](https://github.com/neuralmagic/sparsify) for quantization and [DeepSparseSentenceTransformers](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/sentence_transformers) for inference. ```bash pip install -U deepsparse-nightly[sentence_transformers] ``` ```python from deepsparse.sentence_transformers import DeepSparseSentenceTransformer model = DeepSparseSentenceTransformer('neuralmagic/bge-small-en-v1.5-quant', export=False) # Our sentences we like to encode sentences = ['This framework generates embeddings for each input sentence', 'Sentences are passed as a list of string.', 'The quick brown fox jumps over the lazy dog.'] # Sentences are encoded by calling model.encode() embeddings = model.encode(sentences) # Print the embeddings for sentence, embedding in zip(sentences, embeddings): print("Sentence:", sentence) print("Embedding:", embedding.shape) print("") ``` For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
SanctumAI/granite-3b-code-instruct-GGUF
SanctumAI
"2024-05-31T12:43:47Z"
10,760
0
transformers
[ "transformers", "gguf", "ibm-granite-code", "code", "granite", "text-generation", "dataset:bigcode/commitpackft", "dataset:TIGER-Lab/MathInstruct", "dataset:meta-math/MetaMathQA", "dataset:glaiveai/glaive-code-assistant-v3", "dataset:glaive-function-calling-v2", "dataset:bugdaryan/sql-create-context-instruction", "dataset:garage-bAInd/Open-Platypus", "dataset:nvidia/HelpSteer", "arxiv:2405.04324", "base_model:ibm-granite/granite-3b-code-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-30T19:50:56Z"
--- pipeline_tag: text-generation base_model: ibm-granite/granite-3b-code-base license: apache-2.0 datasets: - bigcode/commitpackft - TIGER-Lab/MathInstruct - meta-math/MetaMathQA - glaiveai/glaive-code-assistant-v3 - glaive-function-calling-v2 - bugdaryan/sql-create-context-instruction - garage-bAInd/Open-Platypus - nvidia/HelpSteer metrics: - code_eval library_name: transformers tags: - code - granite model-index: - name: granite-3b-code-instruct results: - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(Python) metrics: - name: pass@1 type: pass@1 value: 51.2 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(JavaScript) metrics: - name: pass@1 type: pass@1 value: 43.9 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(Java) metrics: - name: pass@1 type: pass@1 value: 41.5 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(Go) metrics: - name: pass@1 type: pass@1 value: 31.7 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(C++) metrics: - name: pass@1 type: pass@1 value: 40.2 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(Rust) metrics: - name: pass@1 type: pass@1 value: 29.3 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(Python) metrics: - name: pass@1 type: pass@1 value: 39.6 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(JavaScript) metrics: - name: pass@1 type: pass@1 value: 26.8 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(Java) metrics: - name: pass@1 type: pass@1 value: 39 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(Go) metrics: - name: pass@1 type: pass@1 value: 14 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(C++) metrics: - name: pass@1 type: pass@1 value: 23.8 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(Rust) metrics: - name: pass@1 type: pass@1 value: 12.8 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(Python) metrics: - name: pass@1 type: pass@1 value: 26.8 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(JavaScript) metrics: - name: pass@1 type: pass@1 value: 28 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(Java) metrics: - name: pass@1 type: pass@1 value: 33.5 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(Go) metrics: - name: pass@1 type: pass@1 value: 27.4 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(C++) metrics: - name: pass@1 type: pass@1 value: 31.7 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(Rust) metrics: - name: pass@1 type: pass@1 value: 16.5 verified: false --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a28db2f1968b7d7f357182/kaudiTlvRQBA5NSeq4BbM.png) *This model was quantized by 
[SanctumAI](https://sanctum.ai). To leave feedback, join our community in [Discord](https://discord.gg/7ZNE78HJKh).* # Granite 3B Code Instruct GGUF **Model creator:** [ibm-granite](https://huggingface.co/ibm-granite)<br> **Original model**: [granite-3b-code-instruct](https://huggingface.co/ibm-granite/granite-3b-code-instruct)<br> ## Model Summary: **Granite-3B-Code-Instruct** is a 3B parameter model fine tuned from *Granite-3B-Code-Base* on a combination of **permissively licensed** instruction data to enhance instruction following capabilities including logical reasoning and problem-solving skills. - **Developers:** IBM Research - **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models) - **Paper:** [Granite Code Models: A Family of Open Foundation Models for Code Intelligence](https://arxiv.org/abs/2405.04324) - **Release Date**: May 6th, 2024 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0). ## Prompt Template: If you're using Sanctum app, simply use `IBM Granite Code` model preset. Prompt template: ``` System: {system_prompt} Question: {prompt} Answer: ``` ## Hardware Requirements Estimate | Name | Quant method | Size | Memory (RAM, vRAM) required (for full context of 32k tokens) | | ---- | ---- | ---- | ---- | | [granite-3b-code-instruct.Q2_K.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q2_K.gguf) | Q2_K | 1.34 GB | 4.68 GB | | [granite-3b-code-instruct.Q3_K_S.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q3_K_S.gguf) | Q3_K_S | 1.55 GB | ? | | [granite-3b-code-instruct.Q3_K_M.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q3_K_M.gguf) | Q3_K_M | 1.73 GB | ? | | [granite-3b-code-instruct.Q3_K_L.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q3_K_L.gguf) | Q3_K_L | 1.88 GB | ? | | [granite-3b-code-instruct.Q4_0.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q4_0.gguf) | Q4_0 | 2.00 GB | ? | | [granite-3b-code-instruct.Q4_K_S.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q4_K_S.gguf) | Q4_K_S | 2.01 GB | ? | | [granite-3b-code-instruct.Q4_K_M.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q4_K_M.gguf) | Q4_K_M | 2.13 GB | ? | | [granite-3b-code-instruct.Q4_K.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q4_K.gguf) | Q4_K | 2.13 GB | ? | | [granite-3b-code-instruct.Q4_1.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q4_1.gguf) | Q4_1 | 2.21 GB | ? | | [granite-3b-code-instruct.Q5_0.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q5_0.gguf) | Q5_0 | 2.42 GB | ? | | [granite-3b-code-instruct.Q5_K_S.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q5_K_S.gguf) | Q5_K_S | 2.42 GB | ? | | [granite-3b-code-instruct.Q5_K_M.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q5_K_M.gguf) | Q5_K_M | 2.49 GB | ? 
| | [granite-3b-code-instruct.Q5_K.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q5_K.gguf) | Q5_K | 2.49 GB | ? | | [granite-3b-code-instruct.Q5_1.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q5_1.gguf) | Q5_1 | 2.63 GB | ? | | [granite-3b-code-instruct.Q6_K.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q6_K.gguf) | Q6_K | 2.86 GB | ? | | [granite-3b-code-instruct.Q8_0.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.Q8_0.gguf) | Q8_0 | 3.71 GB | ? | | [granite-3b-code-instruct.f16.gguf](https://huggingface.co/SanctumAI/granite-3b-code-instruct-GGUF/blob/main/granite-3b-code-instruct.f16.gguf) | f16 | 6.97 GB | 4.68 GB | ## Disclaimer Sanctum is not the creator, originator, or owner of any Model featured in the Models section of the Sanctum application. Each Model is created and provided by third parties. Sanctum does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Model listed there. You understand that supported Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Model is the sole responsibility of the person or entity who originated such Model. Sanctum may not monitor or control the Models supported and cannot, and does not, take responsibility for any such Model. Sanctum disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Models. Sanctum further disclaims any warranty that the Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Models, your downloading of any Model, or use of any other Model provided by or through Sanctum.
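Returning to the prompt template documented above, here is a minimal sketch of filling it in and running one of the GGUF files from the table locally. The choice of `llama-cpp-python` as the runtime, the Q4_K_M file, and the generation parameters are assumptions made for illustration; the newlines simply spell out the `System:` / `Question:` / `Answer:` layout of the template.

```python
from llama_cpp import Llama

# Assumes granite-3b-code-instruct.Q4_K_M.gguf (from the table above) has been
# downloaded into the working directory.
llm = Llama(model_path="granite-3b-code-instruct.Q4_K_M.gguf", n_ctx=4096)

system_prompt = "You are a helpful coding assistant."
question = "Write a Python function that checks whether a string is a palindrome."

# Fill in the card's prompt template.
prompt = f"System:\n{system_prompt}\n\nQuestion:\n{question}\n\nAnswer:\n"

# Stop generation before the model starts inventing a follow-up question.
output = llm(prompt, max_tokens=256, stop=["Question:"])
print(output["choices"][0]["text"])
```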
mradermacher/FinChat_Mistral7B_DPO-GGUF
mradermacher
"2024-06-30T04:32:54Z"
10,760
0
transformers
[ "transformers", "gguf", "en", "base_model:gkMSDA/FinChat_Mistral7B_DPO", "endpoints_compatible", "region:us" ]
null
"2024-06-30T04:04:02Z"
--- base_model: gkMSDA/FinChat_Mistral7B_DPO language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/gkMSDA/FinChat_Mistral7B_DPO <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/FinChat_Mistral7B_DPO-GGUF/resolve/main/FinChat_Mistral7B_DPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
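As a concrete complement to the Usage pointer above, the sketch below fetches a single quant from the table and runs a short completion with it. The `huggingface_hub` and `llama-cpp-python` dependencies, the choice of the Q4_K_M file, and the example prompt are assumptions for illustration, not part of the original card.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from the repo (Q4_K_M picked from the table above).
gguf_path = hf_hub_download(
    repo_id="mradermacher/FinChat_Mistral7B_DPO-GGUF",
    filename="FinChat_Mistral7B_DPO.Q4_K_M.gguf",
)

# Load the GGUF file with the llama.cpp bindings and generate a reply.
llm = Llama(model_path=gguf_path, n_ctx=2048)
result = llm("List three common financial ratios and what they measure:", max_tokens=128)
print(result["choices"][0]["text"])
```

Smaller quants from the table trade some quality for lower memory use, so swapping the file name is the main knob to turn.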
mradermacher/Aura_Revived_Base-GGUF
mradermacher
"2024-06-24T15:13:52Z"
10,759
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:jeiku/Aura_Revived_Base", "endpoints_compatible", "region:us" ]
null
"2024-06-24T14:11:56Z"
--- base_model: jeiku/Aura_Revived_Base language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/jeiku/Aura_Revived_Base <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF/resolve/main/Aura_Revived_Base.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/vietrag-7b-v1.0-GGUF
mradermacher
"2024-06-20T22:38:09Z"
10,755
0
transformers
[ "transformers", "gguf", "vi", "base_model:llm4fun/vietrag-7b-v1.0", "endpoints_compatible", "region:us" ]
null
"2024-06-20T19:50:51Z"
--- base_model: llm4fun/vietrag-7b-v1.0 language: - vi library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/llm4fun/vietrag-7b-v1.0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.IQ3_XS.gguf) | IQ3_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.IQ3_M.gguf) | IQ3_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.IQ4_XS.gguf) | IQ4_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.Q5_K_M.gguf) | Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.Q6_K.gguf) | Q6_K | 5.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/vietrag-7b-v1.0-GGUF/resolve/main/vietrag-7b-v1.0.f16.gguf) | f16 | 13.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
lmms-lab/LLaVA-NeXT-Video-7B-32K
lmms-lab
"2024-04-19T07:39:07Z"
10,753
6
transformers
[ "transformers", "safetensors", "llava_mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
"2024-04-19T06:53:22Z"
--- inference: false license: apache-2.0 --- <br> # LLaVA-Next-Video Model Card ## Model details **Model type:** <br> LLaVA-Next-Video is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. <br> Base LLM: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) **Model date:** <br> LLaVA-Next-Video-7B-32K was trained in April 2024. **Paper or resources for more information:** <br> https://github.com/LLaVA-VL/LLaVA-NeXT ## License Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. ## Where to send questions or comments about the model https://github.com/LLaVA-VL/LLaVA-NeXT/issues ## Intended use **Primary intended uses:** <br> The primary use of LLaVA is research on large multimodal models and chatbots. **Primary intended users:** <br> The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Training dataset ### Image - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. - 158K GPT-generated multimodal instruction-following data. - 500K academic-task-oriented VQA data mixture. - 50K GPT-4V data mixture. - 40K ShareGPT data. ### Video - 100K VideoChatGPT-Instruct. ## Evaluation dataset A collection of 4 benchmarks, including 3 academic VQA benchmarks and 1 captioning benchmark.
01-ai/Yi-34B
01-ai
"2024-06-26T10:25:39Z"
10,752
1,265
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "arxiv:2403.04652", "arxiv:2311.16502", "arxiv:2401.11944", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-01T07:03:50Z"
--- license: apache-2.0 widget: - example_title: "Yi-34B-Chat" text: "hi" output: text: " Hello! How can I assist you today?" - example_title: "Yi-34B" text: "There's a place where time stands still. A place of breath taking wonder, but also" output: text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?" pipeline_tag: text-generation --- <div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px"> <img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg"> </picture> </br> </br> <div style="display: inline-block;"> <a href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml"> <img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg"> </a> </div> <div style="display: inline-block;"> <a href="mailto:[email protected]"> <img src="https://img.shields.io/badge/✉️[email protected]"> </a> </div> </div> <div align="center"> <h3 align="center">Building the Next Generation of Open-Source and Bilingual LLMs</h3> </div> <p align="center"> 🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a> </p> <p align="center"> 👩‍🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a> </p> <p align="center"> 👋 Join us on <a href="https://discord.gg/hYUwWddeAu" target="_blank"> 👾 Discord </a> or <a href="有官方的微信群嘛 · Issue #43 · 01-ai/Yi" target="_blank"> 💬 WeChat </a> </p> <p align="center"> 📝 Check out <a href="https://arxiv.org/abs/2403.04652"> Yi Tech Report </a> </p> <p align="center"> 📚 Grow at <a href="#learning-hub"> Yi Learning Hub </a> </p> <!-- DO NOT REMOVE ME --> <hr> <details open> <summary></b>📕 Table of Contents</b></summary> - [What is Yi?](#what-is-yi) - [Introduction](#introduction) - [Models](#models) - [Chat models](#chat-models) - [Base models](#base-models) - [Model info](#model-info) - [News](#news) - [How to use Yi?](#how-to-use-yi) - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - 
[llama.cpp](#quick-start---llamacpp) - [conda-lock](#quick-start---conda-lock) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) - [Why Yi?](#why-yi) - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Base model performance](#base-model-performance) - [Chat model performance](#chat-model-performance) - [Tech report](#tech-report) - [Citation](#citation) - [Who can use Yi?](#who-can-use-yi) - [Misc.](#misc) - [Acknowledgements](#acknowledgments) - [Disclaimer](#disclaimer) - [License](#license) </details> <hr> # What is Yi? ## Introduction - 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/). - 🙌 Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example, - Yi-34B-Chat model **landed in second place (following GPT-4 Turbo)**, outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024). - Yi-34B model **ranked first among all existing open-source models** (such as Falcon-180B, Llama-70B, Claude) in **both English and Chinese** on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023). - 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem. <details style="display: inline;"><summary> If you're interested in Yi's adoption of Llama architecture and license usage policy, see <span style="color: green;">Yi's relation with Llama.</span> ⬇️</summary> <ul> <br> > 💡 TL;DR > > The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama. - Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018. - Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi. - Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems. - However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights. - As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure. - Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. 
This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/). </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## News <details> <summary>🎯 <b>2024-05-13</b>: The <a href="https://github.com/01-ai/Yi-1.5">Yi-1.5 series models </a> are open-sourced, further improving coding, math, reasoning, and instruction-following abilities.</summary> </details> <details> <summary>🎯 <b>2024-03-16</b>: The <code>Yi-9B-200K</code> is open-sourced and available to the public.</summary> </details> <details> <summary>🎯 <b>2024-03-08</b>: <a href="https://arxiv.org/abs/2403.04652">Yi Tech Report</a> is published! </summary> </details> <details open> <summary>🔔 <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary> <br> In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance. </details> <details open> <summary>🎯 <b>2024-03-06</b>: The <code>Yi-9B</code> is open-sourced and available to the public.</summary> <br> <code>Yi-9B</code> stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. </details> <details open> <summary>🎯 <b>2024-01-23</b>: The Yi-VL models, <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> and <code><a href="https://huggingface.co/01-ai/Yi-VL-6B">Yi-VL-6B</a></code>, are open-sourced and available to the public.</summary> <br> <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> has ranked <strong>first</strong> among all existing open-source models in the latest benchmarks, including <a href="https://arxiv.org/abs/2311.16502">MMMU</a> and <a href="https://arxiv.org/abs/2401.11944">CMMMU</a> (based on data available up to January 2024).</li> </details> <details> <summary>🎯 <b>2023-11-23</b>: <a href="#chat-models">Chat models</a> are open-sourced and available to the public.</summary> <br>This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ. - `Yi-34B-Chat` - `Yi-34B-Chat-4bits` - `Yi-34B-Chat-8bits` - `Yi-6B-Chat` - `Yi-6B-Chat-4bits` - `Yi-6B-Chat-8bits` You can try some of them interactively at: - [Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Replicate](https://replicate.com/01-ai) </details> <details> <summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary> </details> <details> <summary>🔥 <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary> <br>Application form: - [English](https://cn.mikecrm.com/l91ODJf) - [Chinese](https://cn.mikecrm.com/gnEZjiQ) </details> <details> <summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary> <br>This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K. 
</details> <details> <summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary> <br>The first public release contains two bilingual (English/Chinese) base models with the parameter sizes of 6B and 34B. Both of them are trained with 4K sequence length and can be extended to 32K during inference time. </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Models Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements. If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment). ### Chat models | Model | Download | |---|---| |Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat) | |Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-4bits) | |Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-8bits) | |Yi-6B-Chat| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat) | |Yi-6B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-4bits) | |Yi-6B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). 
</sup></sub> ### Base models | Model | Download | |---|---| |Yi-34B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-34B-200K|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits)| |Yi-9B|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-9B)| |Yi-9B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B-200K) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B-200K) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | |Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 200k is roughly equivalent to 400,000 Chinese characters. <br> - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run `git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf` to download the weight. </sup></sub> ### Model info - For chat and base models <table> <thead> <tr> <th>Model</th> <th>Intro</th> <th>Default context window</th> <th>Pretrained tokens</th> <th>Training Data Date</th> </tr> </thead> <tbody><tr> <td>6B series models</td> <td>They are suitable for personal and academic use.</td> <td rowspan="3">4K</td> <td>3T</td> <td rowspan="3">Up to June 2023</td> </tr> <tr> <td>9B series models</td> <td>It is the best at coding and math in the Yi series models.</td> <td>Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.</td> </tr> <tr> <td>34B series models</td> <td>They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It&#39;s a cost-effective solution that&#39;s affordable and equipped with emergent ability.</td> <td>3T</td> </tr> </tbody></table> - For chat models <details style="display: inline;"><summary>For chat model limitations, see the explanations below. ⬇️</summary> <ul> <br>The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training. <br>However, this higher diversity might amplify certain existing issues, including: <li>Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning.</li> <li>Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. 
The increased diversity can lead to varying results even under similar input conditions.</li> <li>Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.</li> <li>To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top_p, or top_k. These adjustments can help in the balance between creativity and coherence in the model's outputs.</li> </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # How to use Yi? - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - [llama.cpp](#quick-start---llamacpp) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [FAQ](#faq) - [Learning hub](#learning-hub) ## Quick start Getting up and running with Yi models is simple with multiple choices available. ### Choose your path Select one of the following paths to begin your journey with Yi! ![Quick start - Choose your path](https://github.com/01-ai/Yi/blob/main/assets/img/quick_start_path.png?raw=true) #### 🎯 Deploy Yi locally If you prefer to deploy Yi models locally, - 🙋‍♀️ and you have **sufficient** resources (for example, NVIDIA A800 80GB), you can choose one of the following methods: - [pip](#quick-start---pip) - [Docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - 🙋‍♀️ and you have **limited** resources (for example, a MacBook Pro), you can use [llama.cpp](#quick-start---llamacpp). #### 🎯 Not to deploy Yi locally If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options. ##### 🙋‍♀️ Run Yi with APIs If you want to explore more features of Yi, you can adopt one of these methods: - Yi APIs (Yi official) - [Early access has been granted](https://x.com/01AI_Yi/status/1735728934560600536?s=20) to some applicants. Stay tuned for the next round of access! - [Yi APIs](https://replicate.com/01-ai/yi-34b-chat/api?tab=nodejs) (Replicate) ##### 🙋‍♀️ Run Yi in playground If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options: - [Yi-34B-Chat-Playground](https://platform.lingyiwanwu.com/prompt/playground) (Yi official) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). - [Yi-34B-Chat-Playground](https://replicate.com/01-ai/yi-34b-chat) (Replicate) ##### 🙋‍♀️ Chat with Yi If you want to chat with Yi, you can use one of these online services, which offer a similar user experience: - [Yi-34B-Chat](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) (Yi official on Hugging Face) - No registration is required. - [Yi-34B-Chat](https://platform.lingyiwanwu.com/) (Yi official beta) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). 
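If you would rather call Yi programmatically than through a chat UI, the Replicate deployment mentioned above can also be reached from code. The snippet below is a minimal, illustrative sketch using the `replicate` Python client; the input field names (`prompt`, `temperature`, `max_new_tokens`) are assumptions for illustration, so check the model's API page on Replicate for the authoritative schema.

```python
# A minimal sketch for calling the Yi-34B-Chat deployment on Replicate.
# Assumptions: the `replicate` package is installed, REPLICATE_API_TOKEN is set
# in your environment, and the input fields below match the model's published
# schema (verify them on https://replicate.com/01-ai/yi-34b-chat).
import replicate

output = replicate.run(
    "01-ai/yi-34b-chat",
    input={
        "prompt": "How do you feed your pet fox? Please answer in 6 simple steps.",
        "temperature": 0.3,     # lower values give more deterministic answers
        "max_new_tokens": 512,  # cap the length of the generated reply
    },
)

# The client returns the generation as chunks of text; join them for the full reply.
print("".join(output))
```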
<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quick start - pip This tutorial guides you through every step of running **Yi-34B-Chat locally on an A800 (80G)** and then performing inference. #### Step 0: Prerequisites - Make sure Python 3.10 or a later version is installed. - If you want to run other Yi models, see [software and hardware requirements](#deployment). #### Step 1: Prepare your environment To set up the environment and install the required packages, execute the following command. ```bash git clone https://github.com/01-ai/Yi.git cd yi pip install -r requirements.txt ``` #### Step 2: Download the Yi model You can download the weights and tokenizer of Yi models from the following sources: - [Hugging Face](https://huggingface.co/01-ai) - [ModelScope](https://www.modelscope.cn/organization/01ai/) - [WiseModel](https://wisemodel.cn/organization/01.AI) #### Step 3: Perform inference You can perform inference with Yi chat or base models as below. ##### Perform inference with Yi chat model 1. Create a file named `quick_start.py` and copy the following content to it. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = '<your-model-path>' tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) # Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM. model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ``` 2. Run `quick_start.py`. ```bash python quick_start.py ``` Then you can see an output similar to the one below. 🥳 ```bash Hello! How can I assist you today? ``` ##### Perform inference with Yi base model - Yi-34B The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model). You can use the existing file [`text_generation.py`](https://github.com/01-ai/Yi/tree/main/demo). ```bash python demo/text_generation.py --model <your-model-path> ``` Then you can see an output similar to the one below. 🥳 <details> <summary>Output. ⬇️ </summary> <br> **Prompt**: Let me tell you an interesting story about cat Tom and mouse Jerry, **Generation**: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up... 
</details>

- Yi-9B

  Input

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

  MODEL_DIR = "01-ai/Yi-9B"
  model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype="auto")
  tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False)

  input_text = "# write the quick sort algorithm"
  inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
  outputs = model.generate(**inputs, max_length=256)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```

  Output

  ```python
  # write the quick sort algorithm
  def quick_sort(arr):
      if len(arr) <= 1:
          return arr
      pivot = arr[len(arr) // 2]
      left = [x for x in arr if x < pivot]
      middle = [x for x in arr if x == pivot]
      right = [x for x in arr if x > pivot]
      return quick_sort(left) + middle + quick_sort(right)

  # test the quick sort algorithm
  print(quick_sort([3, 6, 8, 10, 1, 2, 1]))
  ```

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Quick start - Docker

<details>
<summary> Run Yi-34B-Chat locally with Docker: a step-by-step guide. ⬇️</summary>
<br>This tutorial guides you through every step of running <strong>Yi-34B-Chat on an A800 GPU</strong> or <strong>4*4090</strong> locally and then performing inference.
<h4>Step 0: Prerequisites</h4>
<p>Make sure you've installed <a href="https://docs.docker.com/engine/install/?open_in_browser=true">Docker</a> and <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">nvidia-container-toolkit</a>.</p>

<h4> Step 1: Start Docker </h4>
<pre><code>docker run -it --gpus all \
  -v &lt;your-model-path&gt;:/models \
  ghcr.io/01-ai/yi:latest
</code></pre>
<p>Alternatively, you can pull the Yi Docker image from <code>registry.lingyiwanwu.com/ci/01-ai/yi:latest</code>.</p>

<h4>Step 2: Perform inference</h4>
<p>You can perform inference with Yi chat or base models as below.</p>

<h5>Perform inference with Yi chat model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-chat-model">pip - Perform inference with Yi chat model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>model_path = '&lt;your-model-mount-path&gt;'</code> instead of <code>model_path = '&lt;your-model-path&gt;'</code>.</p>
<h5>Perform inference with Yi base model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-base-model">pip - Perform inference with Yi base model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>--model &lt;your-model-mount-path&gt;</code> instead of <code>--model &lt;your-model-path&gt;</code>.</p>
</details>

### Quick start - conda-lock

<details>
<summary>You can use <code><a href="https://github.com/conda/conda-lock">conda-lock</a></code> to generate fully reproducible lock files for conda environments. ⬇️</summary>
<br>
You can refer to <a href="https://github.com/01-ai/Yi/blob/ebba23451d780f35e74a780987ad377553134f68/conda-lock.yml">conda-lock.yml</a> for the exact versions of the dependencies. Additionally, you can utilize <code><a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html">micromamba</a></code> for installing these dependencies.
<br>
To install the dependencies, follow these steps:

1. Install micromamba by following the instructions available <a href="https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html">here</a>.

2. Execute <code>micromamba install -y -n yi -f conda-lock.yml</code> to create a conda environment named <code>yi</code> and install the necessary dependencies.
</details> ### Quick start - llama.cpp <a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">The following tutorial </a> will guide you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference. <details> <summary> Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️</summary> <br><a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">This tutorial</a> guides you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.</p> - [Step 0: Prerequisites](#step-0-prerequisites) - [Step 1: Download llama.cpp](#step-1-download-llamacpp) - [Step 2: Download Yi model](#step-2-download-yi-model) - [Step 3: Perform inference](#step-3-perform-inference) #### Step 0: Prerequisites - This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip. - Make sure [`git-lfs`](https://git-lfs.com/) is installed on your machine. #### Step 1: Download `llama.cpp` To clone the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) repository, run the following command. ```bash git clone [email protected]:ggerganov/llama.cpp.git ``` #### Step 2: Download Yi model 2.1 To clone [XeIaso/yi-chat-6B-GGUF](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main) with just pointers, run the following command. ```bash GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/XeIaso/yi-chat-6B-GGUF ``` 2.2 To download a quantized Yi model ([yi-chat-6b.Q2_K.gguf](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/blob/main/yi-chat-6b.Q2_K.gguf)), run the following command. ```bash git-lfs pull --include yi-chat-6b.Q2_K.gguf ``` #### Step 3: Perform inference To perform inference with the Yi model, you can use one of the following methods. - [Method 1: Perform inference in terminal](#method-1-perform-inference-in-terminal) - [Method 2: Perform inference in web](#method-2-perform-inference-in-web) ##### Method 1: Perform inference in terminal To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command. > ##### Tips > > - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model. > > - By default, the model operates in completion mode. > > - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage. ```bash make -j4 && ./main -m /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf -p "How do you feed your pet fox? Please answer this question in 6 simple steps:\nStep 1:" -n 384 -e ... How do you feed your pet fox? Please answer this question in 6 simple steps: Step 1: Select the appropriate food for your pet fox. You should choose high-quality, balanced prey items that are suitable for their unique dietary needs. These could include live or frozen mice, rats, pigeons, or other small mammals, as well as fresh fruits and vegetables. Step 2: Feed your pet fox once or twice a day, depending on the species and its individual preferences. Always ensure that they have access to fresh water throughout the day. Step 3: Provide an appropriate environment for your pet fox. Ensure it has a comfortable place to rest, plenty of space to move around, and opportunities to play and exercise. 
Step 4: Socialize your pet with other animals if possible. Interactions with other creatures can help them develop social skills and prevent boredom or stress. Step 5: Regularly check for signs of illness or discomfort in your fox. Be prepared to provide veterinary care as needed, especially for common issues such as parasites, dental health problems, or infections. Step 6: Educate yourself about the needs of your pet fox and be aware of any potential risks or concerns that could affect their well-being. Regularly consult with a veterinarian to ensure you are providing the best care. ... ``` Now you have successfully asked a question to the Yi model and got an answer! 🥳 ##### Method 2: Perform inference in web 1. To initialize a lightweight and swift chatbot, run the following command. ```bash cd llama.cpp ./server --ctx-size 2048 --host 0.0.0.0 --n-gpu-layers 64 --model /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf ``` Then you can get an output like this: ```bash ... llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: freq_base = 5000000.0 llama_new_context_with_model: freq_scale = 1 ggml_metal_init: allocating ggml_metal_init: found device: Apple M2 Pro ggml_metal_init: picking default device: Apple M2 Pro ggml_metal_init: ggml.metallib not found, loading from source ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil ggml_metal_init: loading '/Users/yu/llama.cpp/ggml-metal.metal' ggml_metal_init: GPU name: Apple M2 Pro ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008) ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB ggml_metal_init: maxTransferRate = built-in GPU ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 128.00 MiB, ( 2629.44 / 10922.67) llama_new_context_with_model: KV self size = 128.00 MiB, K (f16): 64.00 MiB, V (f16): 64.00 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 2629.45 / 10922.67) llama_build_graph: non-view tensors processed: 676/676 llama_new_context_with_model: compute buffer total size = 159.19 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 156.02 MiB, ( 2785.45 / 10922.67) Available slots: -> Slot 0 - max context: 2048 llama server listening at http://0.0.0.0:8080 ``` 2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar. ![Yi model chatbot interface - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp1.png?raw=true) 3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer. ![Ask a question to Yi model - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp2.png?raw=true) </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Web demo You can build a web UI demo for Yi **chat** models (note that Yi base models are not supported in this senario). [Step 1: Prepare your environment](#step-1-prepare-your-environment). [Step 2: Download the Yi model](#step-2-download-the-yi-model). Step 3. To start a web service locally, run the following command. ```bash python demo/web_demo.py -c <your-model-path> ``` You can access the web UI by entering the address provided in the console into your browser. 
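If you are curious about what such a web demo involves under the hood, the sketch below is a rough, illustrative approximation only, not the actual `demo/web_demo.py` implementation. It is built on Gradio's `ChatInterface` and assumes a recent Gradio release where the chat callback receives `(message, history)` with `history` as a list of (user, assistant) pairs.

```python
# A minimal, illustrative chat UI sketch for a Yi chat model (not the official demo).
import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "<your-model-path>"  # placeholder: point this at a local Yi chat model
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto"
).eval()

def chat(message, history):
    # Rebuild the conversation in the chat-template format expected by the tokenizer.
    messages = []
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": message})

    input_ids = tokenizer.apply_chat_template(
        conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids.to(model.device), max_new_tokens=512)
    return tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Serves a local web UI; open the printed URL in your browser.
gr.ChatInterface(chat).launch()
```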
![Quick start - web demo](https://github.com/01-ai/Yi/blob/main/assets/img/yi_34b_chat_web_demo.gif?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Fine-tuning ```bash bash finetune/scripts/run_sft_Yi_6b.sh ``` Once finished, you can compare the finetuned model and the base model with the following command: ```bash bash finetune/scripts/run_eval.sh ``` <details style="display: inline;"><summary>For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ </summary> <ul> ### Finetune code for Yi 6B and 34B #### Preparation ##### From Image By default, we use a small dataset from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) to finetune the base model. You can also prepare your customized dataset in the following `jsonl` format: ```json { "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." } ``` And then mount them in the container to replace the default ones: ```bash docker run -it \ -v /path/to/save/finetuned/model/:/finetuned-model \ -v /path/to/train.jsonl:/yi/finetune/data/train.json \ -v /path/to/eval.jsonl:/yi/finetune/data/eval.json \ ghcr.io/01-ai/yi:latest \ bash finetune/scripts/run_sft_Yi_6b.sh ``` ##### From Local Server Make sure you have conda. If not, use ```bash mkdir -p ~/miniconda3 wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3 rm -rf ~/miniconda3/miniconda.sh ~/miniconda3/bin/conda init bash source ~/.bashrc ``` Then, create a conda env: ```bash conda create -n dev_env python=3.10 -y conda activate dev_env pip install torch==2.0.1 deepspeed==0.10 tensorboard transformers datasets sentencepiece accelerate ray==2.7 ``` #### Hardware Setup For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended. For the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA_VISIBLE_DEVICES to limit the number of GPUs (as shown in scripts/run_sft_Yi_34b.sh). A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA_VISIBLE_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB. #### Quick Start Download a LLM-base model to MODEL_PATH (6B and 34B). A typical folder of models is like: ```bash |-- $MODEL_PATH | |-- config.json | |-- pytorch_model-00001-of-00002.bin | |-- pytorch_model-00002-of-00002.bin | |-- pytorch_model.bin.index.json | |-- tokenizer_config.json | |-- tokenizer.model | |-- ... ``` Download a dataset from huggingface to local storage DATA_PATH, e.g. Dahoas/rm-static. ```bash |-- $DATA_PATH | |-- data | | |-- train-00000-of-00001-2a1df75c6bce91ab.parquet | | |-- test-00000-of-00001-8c7c51afc6d45980.parquet | |-- dataset_infos.json | |-- README.md ``` `finetune/yi_example_dataset` has example datasets, which are modified from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) ```bash |-- $DATA_PATH |--data |-- train.jsonl |-- eval.jsonl ``` `cd` into the scripts folder, copy and paste the script, and run. For example: ```bash cd finetune/scripts bash run_sft_Yi_6b.sh ``` For the Yi-6B base model, setting training_debug_steps=20 and num_train_epochs=4 can output a chat model, which takes about 20 minutes. For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient. 
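If you want to generate the `train.jsonl` and `eval.jsonl` files for the custom-data workflow described above programmatically, a minimal sketch is shown below. The two example records are placeholders for your own data; only the field names (`prompt`, `chosen`) follow the format documented in this section.

```python
# A minimal sketch for writing a custom SFT dataset in the jsonl format shown above.
# The records here are placeholders; replace them with your own data.
import json
import random

records = [
    {"prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi."},
    {"prompt": "Human: What can you do? Assistant:", "chosen": "I can answer questions in English and Chinese."},
]

random.seed(0)
random.shuffle(records)
split = max(1, int(len(records) * 0.9))  # keep roughly 10% of records for evaluation

def write_jsonl(path, rows):
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

write_jsonl("train.jsonl", records[:split])
write_jsonl("eval.jsonl", records[split:])
```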
#### Evaluation

```bash
cd finetune/scripts
bash run_eval.sh
```

Then you'll see the answer from both the base model and the finetuned model.
</ul>
</details>

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Quantization

#### GPT-Q
```bash
python quantization/gptq/quant_autogptq.py \
  --model /base_model \
  --output_dir /quantized_model \
  --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/gptq/eval_quantized_model.py \
  --model /quantized_model \
  --trust_remote_code
```

<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>

#### GPT-Q quantization

[GPT-Q](https://github.com/IST-DASLab/gptq) is a PTQ (Post-Training Quantization) method. It saves memory and provides potential speedups while retaining the accuracy of the model.

Yi models can be GPT-Q quantized without much effort. We provide a step-by-step tutorial below.

To run GPT-Q, we will use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and [exllama](https://github.com/turboderp/exllama). In addition, Hugging Face Transformers has integrated Optimum and AutoGPTQ to perform GPTQ quantization on language models.

##### Do Quantization

The `quant_autogptq.py` script is provided for you to perform GPT-Q quantization:

```bash
python quant_autogptq.py --model /base_model \
    --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```

##### Run Quantized Model

You can run a quantized model using `eval_quantized_model.py`:

```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```

</ul>
</details>

#### AWQ
```bash
python quantization/awq/quant_autoawq.py \
  --model /base_model \
  --output_dir /quantized_model \
  --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/awq/eval_quantized_model.py \
  --model /quantized_model \
  --trust_remote_code
```

<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>

#### AWQ quantization

[AWQ](https://github.com/mit-han-lab/llm-awq) is a PTQ (Post-Training Quantization) method. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs.

Yi models can be AWQ quantized without much effort. We provide a step-by-step tutorial below.

To run AWQ, we will use [AutoAWQ](https://github.com/casper-hansen/AutoAWQ).

##### Do Quantization

The `quant_autoawq.py` script is provided for you to perform AWQ quantization:

```bash
python quant_autoawq.py --model /base_model \
    --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```

##### Run Quantized Model

You can run a quantized model using `eval_quantized_model.py`:

```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```

</ul>
</details>
<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### Deployment

If you want to deploy Yi models, make sure you meet the software and hardware requirements.

#### Software requirements

Before using Yi quantized models, make sure you've installed the correct software listed below.
| Model | Software |
|---|---|
| Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi) |
| Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation) |

#### Hardware requirements

Before deploying Yi in your environment, make sure your hardware meets the following requirements.

##### Chat models

| Model | Minimum VRAM | Recommended GPU Example |
|:----------------------|:--------------|:-------------------------------------:|
| Yi-6B-Chat | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-Chat-4bits | 4 GB | 1 x RTX 3060 (12 GB) <br> 1 x RTX 4060 (8 GB) |
| Yi-6B-Chat-8bits | 8 GB | 1 x RTX 3070 (8 GB) <br> 1 x RTX 4060 (8 GB) |
| Yi-34B-Chat | 72 GB | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) |
| Yi-34B-Chat-4bits | 20 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) <br> 1 x A100 (40 GB) |
| Yi-34B-Chat-8bits | 38 GB | 2 x RTX 3090 (24 GB) <br> 2 x RTX 4090 (24 GB) <br> 1 x A800 (40 GB) |

Below are detailed minimum VRAM requirements under different batch use cases.

| Model | batch=1 | batch=4 | batch=16 | batch=32 |
| ----------------------- | ------- | ------- | -------- | -------- |
| Yi-6B-Chat | 12 GB | 13 GB | 15 GB | 18 GB |
| Yi-6B-Chat-4bits | 4 GB | 5 GB | 7 GB | 10 GB |
| Yi-6B-Chat-8bits | 7 GB | 8 GB | 10 GB | 14 GB |
| Yi-34B-Chat | 65 GB | 68 GB | 76 GB | > 80 GB |
| Yi-34B-Chat-4bits | 19 GB | 20 GB | 30 GB | 40 GB |
| Yi-34B-Chat-8bits | 35 GB | 37 GB | 46 GB | 58 GB |

##### Base models

| Model | Minimum VRAM | Recommended GPU Example |
|----------------------|--------------|:-------------------------------------:|
| Yi-6B | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-200K | 50 GB | 1 x A800 (80 GB) |
| Yi-9B | 20 GB | 1 x RTX 4090 (24 GB) |
| Yi-34B | 72 GB | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) |
| Yi-34B-200K | 200 GB | 4 x A800 (80 GB) |

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

### FAQ
<details>
<summary> If you have any questions while using the Yi series models, the answers provided below could serve as a helpful reference for you. ⬇️</summary>
<br>

#### 💡Fine-tuning
- <strong>Base model or Chat model - which to fine-tune?</strong>
  <br>The choice of pre-trained language model for fine-tuning hinges on the computational resources you have at your disposal and the particular demands of your task.
    - If you are working with a substantial volume of fine-tuning data (say, over 10,000 samples), the Base model could be your go-to choice.
    - On the other hand, if your fine-tuning data is not quite as extensive, opting for the Chat model might be a more fitting choice.
    - It is generally advisable to fine-tune both the Base and Chat models, compare their performance, and then pick the model that best aligns with your specific requirements.
- <strong>Yi-34B versus Yi-34B-Chat for full-scale fine-tuning - what is the difference?</strong>
  <br>The key distinction between full-scale fine-tuning on `Yi-34B` and `Yi-34B-Chat` comes down to the fine-tuning approach and outcomes.
    - Yi-34B-Chat employs a Supervised Fine-Tuning (SFT) method, resulting in responses that mirror human conversation style more closely.
    - The Base model's fine-tuning is more versatile, with a relatively high performance potential.
- If you are confident in the quality of your data, fine-tuning with `Yi-34B` could be your go-to. - If you are aiming for model-generated responses that better mimic human conversational style, or if you have doubts about your data quality, `Yi-34B-Chat` might be your best bet. #### 💡Quantization - <strong>Quantized model versus original model - what is the performance gap?</strong> - The performance variance is largely contingent on the quantization method employed and the specific use cases of these models. For instance, when it comes to models provided by the AWQ official, from a Benchmark standpoint, quantization might result in a minor performance drop of a few percentage points. - Subjectively speaking, in situations like logical reasoning, even a 1% performance shift could impact the accuracy of the output results. #### 💡General - <strong>Where can I source fine-tuning question answering datasets?</strong> - You can find fine-tuning question answering datasets on platforms like Hugging Face, with datasets like [m-a-p/COIG-CQIA](https://huggingface.co/datasets/m-a-p/COIG-CQIA) readily available. - Additionally, Github offers fine-tuning frameworks, such as [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), which integrates pre-made datasets. - <strong>What is the GPU memory requirement for fine-tuning Yi-34B FP16?</strong> <br> The GPU memory needed for fine-tuning 34B FP16 hinges on the specific fine-tuning method employed. For full parameter fine-tuning, you'll need 8 GPUs each with 80 GB; however, more economical solutions like Lora require less. For more details, check out [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). Also, consider using BF16 instead of FP16 for fine-tuning to optimize performance. - <strong>Are there any third-party platforms that support chat functionality for the Yi-34b-200k model?</strong> <br> If you're looking for third-party Chats, options include [fireworks.ai](https://fireworks.ai/login?callbackURL=https://fireworks.ai/models/fireworks/yi-34b-chat). </details> ### Learning hub <details> <summary> If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️</summary> <br> Welcome to the Yi learning hub! Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more. The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions! At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below. With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! 
🥳 #### Tutorials ##### Blog tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [使用 Dify、Meilisearch、零一万物模型实现最简单的 RAG 应用(三):AI 电影推荐](https://mp.weixin.qq.com/s/Ri2ap9_5EMzdfiBhSSL_MQ) | 2024-05-20 | [苏洋](https://github.com/soulteary) | | [使用autodl服务器,在A40显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度18 words-s](https://blog.csdn.net/freewebsys/article/details/134698597?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-17-134698597-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-20 | [fly-iot](https://gitee.com/fly-iot) | | [Yi-VL 最佳实践](https://modelscope.cn/docs/yi-vl最佳实践) | 2024-05-20 | [ModelScope](https://github.com/modelscope) | | [一键运行零一万物新鲜出炉Yi-1.5-9B-Chat大模型](https://mp.weixin.qq.com/s/ntMs2G_XdWeM3I6RUOBJrA) | 2024-05-13 | [Second State](https://github.com/second-state) | | [零一万物开源Yi-1.5系列大模型](https://mp.weixin.qq.com/s/d-ogq4hcFbsuL348ExJxpA) | 2024-05-13 | [刘聪](https://github.com/liucongg) | | [零一万物Yi-1.5系列模型发布并开源! 34B-9B-6B 多尺寸,魔搭社区推理微调最佳实践教程来啦!](https://mp.weixin.qq.com/s/3wD-0dCgXB646r720o8JAg) | 2024-05-13 | [ModelScope](https://github.com/modelscope) | | [Yi-34B 本地部署简单测试](https://blog.csdn.net/arkohut/article/details/135331469?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135331469-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [漆妮妮](https://space.bilibili.com/1262370256) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(上)](https://blog.csdn.net/weixin_53443275/article/details/136091398?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-5-136091398-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(下篇)](https://blog.csdn.net/weixin_53443275/article/details/136096309) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [Ollama新增两个命令,开始支持零一万物Yi-1.5系列模型](https://mp.weixin.qq.com/s/bBgzGJvUqIohodcy9U-pFw) | 2024-05-13 | AI工程师笔记 | | [使用零一万物 200K 模型和 Dify 快速搭建模型应用](https://zhuanlan.zhihu.com/p/686774859) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [(持更) 零一万物模型折腾笔记:社区 Yi-34B 微调模型使用](https://zhuanlan.zhihu.com/p/671549900) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [Python+ERNIE-4.0-8K-Yi-34B-Chat大模型初探](https://mp.weixin.qq.com/s/WaygSfn5T8ZPB1mPdGADEQ) | 2024-05-11 | 江湖评谈 | | [技术布道 Vue及Python调用零一万物模型和Prompt模板(通过百度千帆大模型平台)](https://blog.csdn.net/ucloud2012/article/details/137187469) | 2024-05-11 | [MumuLab](https://blog.csdn.net/ucloud2012?type=blog) | | [多模态大模型Yi-VL-plus体验 效果很棒](https://zhuanlan.zhihu.com/p/694736111) | 2024-04-27 | [大家好我是爱因](https://www.zhihu.com/people/iamein) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度23 
words-s](https://blog.csdn.net/freewebsys/article/details/134725765?ops_request_misc=%7B%22request%5Fid%22%3A%22171636356716800211598950%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636356716800211598950&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-9-134725765-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-27 | [fly-iot](https://gitee.com/fly-iot) | | [Getting Started with Yi-1.5-9B-Chat](https://www.secondstate.io/articles/yi-1.5-9b-chat/) | 2024-04-27 | [Second State](https://github.com/second-state) | | [基于零一万物yi-vl-plus大模型简单几步就能批量生成Anki图片笔记](https://mp.weixin.qq.com/s/_ea6g0pzzeO4WyYtuWycWQ) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [【AI开发:语言】一、Yi-34B超大模型本地部署CPU和GPU版](https://blog.csdn.net/alarey/article/details/137769471?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-16-137769471-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-21 | [My的梦想已实现](https://blog.csdn.net/alarey?type=blog) | | [【Yi-34B-Chat-Int4】使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words-s,vllm要求算力在7以上的显卡就可以](https://blog.csdn.net/freewebsys/article/details/134754086) | 2024-03-22 | [fly-iot](https://gitee.com/fly-iot) | | [零一万物大模型部署+微调总结](https://blog.csdn.net/v_wus/article/details/135704126?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-18-135704126-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-22 | [v_wus](https://blog.csdn.net/v_wus?type=blog) | | [零一万物Yi大模型vllm推理时Yi-34B或Yi-6bchat重复输出的解决方案](https://blog.csdn.net/qq_39667443/article/details/136028776?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-6-136028776-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [郝铠锋](https://blog.csdn.net/qq_39667443?type=blog) | | [Yi-34B微调训练](https://blog.csdn.net/lsjlnd/article/details/135336984?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-12-135336984-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [lsjlnd](https://blog.csdn.net/lsjlnd?type=blog) | | [实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜”](https://mp.weixin.qq.com/s/fu4O9XvJ03JhimsEyI-SsQ) | 2024-02-02 | [苏洋](https://github.com/soulteary) | | [零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦!](https://zhuanlan.zhihu.com/p/680098411) | 2024-01-26 | [ModelScope](https://github.com/modelscope) | | [单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战](https://zhuanlan.zhihu.com/p/678989191) | 2024-01-22 | [郑耀威](https://github.com/hiyouga) | | [零一科技Yi-34B 
Chat大模型环境搭建&推理](https://blog.csdn.net/zzq1989_/article/details/135597181?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-8-135597181-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [要养家的程序员](https://blog.csdn.net/zzq1989_?type=blog) | | [基于LLaMA Factory,单卡3小时训练专属大模型 Agent](https://blog.csdn.net/m0_59596990/article/details/135760285?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135760285-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [机器学习社区](https://blog.csdn.net/m0_59596990?type=blog) | | [双卡 3080ti 部署 Yi-34B 大模型 - Gradio + vLLM 踩坑全记录](https://blog.csdn.net/arkohut/article/details/135321242?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135321242-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [漆妮妮](https://space.bilibili.com/1262370256) | | [【大模型部署实践-3】3个能在3090上跑起来的4bits量化Chat模型(baichuan2-13b、InternLM-20b、Yi-34b)](https://blog.csdn.net/qq_40302568/article/details/135040985?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-30-135040985-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [aq_Seabiscuit](https://blog.csdn.net/qq_40302568?type=blog) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://blog.csdn.net/arkohut/article/details/135274973) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [零一万物模型官方 Yi-34B 模型本地离线运行部署使用笔记(物理机和docker两种部署方式),200K 超长文本内容,34B 干翻一众 70B 模型,打榜分数那么高,这模型到底行不行?](https://blog.csdn.net/u014374009/article/details/136327696) | 2023-12-28 | [代码讲故事](https://blog.csdn.net/u014374009?type=blog) | | [LLM - 大模型速递之 Yi-34B 入门与 LoRA 微调](https://blog.csdn.net/BIT_666/article/details/134990402) | 2023-12-18 | [BIT_666](https://bitddd.blog.csdn.net/?type=blog) | | [通过vllm框架进行大模型推理](https://blog.csdn.net/weixin_45920955/article/details/135300561?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-13-135300561-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2023-12-18 | [土山炮](https://blog.csdn.net/weixin_45920955?type=blog) | | [CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案](https://zhuanlan.zhihu.com/p/671698216) | 2023-12-12 | [苏洋](https://github.com/soulteary) | | [零一万物模型折腾笔记:官方 Yi-34B 模型基础使用](https://zhuanlan.zhihu.com/p/671387298) | 2023-12-10 | [苏洋](https://github.com/soulteary) | | [Running Yi-34B-Chat locally using LlamaEdge](https://www.secondstate.io/articles/yi-34b/) | 2023-11-30 | [Second State](https://github.com/second-state) | | [本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 
显存](https://zhuanlan.zhihu.com/p/668921042) | 2023-11-26 | [苏洋](https://github.com/soulteary) | ##### GitHub Project | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------- | | [yi-openai-proxy](https://github.com/soulteary/yi-openai-proxy) | 2024-05-11 | [苏洋](https://github.com/soulteary) | | [基于零一万物 Yi 模型和 B 站构建大语言模型高质量训练数据集](https://github.com/zjrwtx/bilibiliQA_databuilder) | 2024-04-29 | [正经人王同学](https://github.com/zjrwtx) | | [基于视频网站和零一万物大模型构建大语言模型高质量训练数据集](https://github.com/zjrwtx/VideoQA_databuilder) | 2024-04-25 | [正经人王同学](https://github.com/zjrwtx) | | [基于零一万物yi-34b-chat-200k输入任意文章地址,点击按钮即可生成无广告或推广内容的简要笔记,并生成分享图给好友](https://github.com/zjrwtx/open_summary) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [Food-GPT-Yi-model](https://github.com/ThisisHubert/FoodGPT-Yi-model) | 2024-04-21 | [Hubert S](https://github.com/ThisisHubert) | ##### Video tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [Run dolphin-2.2-yi-34b on IoT Devices](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-30 | [Second State](https://github.com/second-state) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://www.bilibili.com/video/BV17t4y1f7Ee/) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [Install Yi 34B Locally - Chinese English Bilingual LLM](https://www.youtube.com/watch?v=CVQvj4Wrh4w&t=476s) | 2023-11-05 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Dolphin Yi 34b - Brand New Foundational Model TESTED](https://www.youtube.com/watch?v=On3Zuv27V3k&t=85s) | 2023-11-27 | [Matthew Berman](https://www.youtube.com/@matthew_berman) | | [Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来](https://www.bilibili.com/video/BV1Q5411y7AG/) | 2024-01-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [4060Ti 16G显卡安装零一万物最新开源的Yi-1.5版大语言模型](https://www.bilibili.com/video/BV16i421X7Jx/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-14 | [titan909](https://space.bilibili.com/526393761) | | [Yi-1.5: True Apache 2.0 Competitor to LLAMA-3](https://www.youtube.com/watch?v=KCDYrfWeTRc) | 2024-05-13 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Install Yi-1.5 Model Locally - Beats Llama 3 in Various Benchmarks](https://www.youtube.com/watch?v=Ba-G7Il0UkA) | 2024-05-13 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [how to install Ollama and run Yi 6B](https://www.youtube.com/watch?v=4Jnar7OUHqQ) | 2024-05-13 | [Ridaa Davids](https://www.youtube.com/@quantanovabusiness) | | [地表最强混合智能AI助手:llama3_70B+Yi_34B+Qwen1.5_110B](https://www.bilibili.com/video/BV1Xm411C7V1/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-04 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [ChatDoc学术论文辅助--基于Yi-34B和langchain进行PDF知识库问答](https://www.bilibili.com/video/BV11i421C7B5/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-03 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [基于Yi-34B的领域知识问答项目演示](https://www.bilibili.com/video/BV1zZ42177ZA/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-02 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [使用RTX4090+GaLore算法 
全参微调Yi-6B大模型](https://www.bilibili.com/video/BV1ax4y1U7Ep/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-24 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [无内容审查NSFW大语言模型Yi-34B-Chat蒸馏版测试,RolePlay,《天龙八部》马夫人康敏,本地GPU,CPU运行](https://www.youtube.com/watch?v=VL-W0TnLCns) | 2024-03-20 | [刘悦的技术博客](https://v3u.cn/) | | [无内容审查NSFW大语言模型整合包,Yi-34B-Chat,本地CPU运行,角色扮演潘金莲](https://www.youtube.com/watch?v=rBvbgwz3oHM) | 2024-03-16 | [刘悦的技术博客](https://v3u.cn/) | | [量化 Yi-34B-Chat 并在单卡 RTX 4090 使用 vLLM 部署](https://www.bilibili.com/video/BV1jx421y7xj/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-05 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [Yi-VL-34B(5):使用3个3090显卡24G版本,运行Yi-VL-34B模型,支持命令行和web界面方式,理解图片的内容转换成文字](https://www.bilibili.com/video/BV1BB421z7oA/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-27 | [fly-iot](https://gitee.com/fly-iot) | | [Win环境KoboldCpp本地部署大语言模型进行各种角色扮演游戏](https://www.bilibili.com/video/BV14J4m1e77f/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-25 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P2](https://www.bilibili.com/video/BV19v421677y/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-23 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [【wails】(2):使用go-llama.cpp 运行 yi-01-6b大模型,使用本地CPU运行,速度还可以,等待下一版本更新](https://www.bilibili.com/video/BV194421F7Fy/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-20 | [fly-iot](https://gitee.com/fly-iot) | | [【xinference】(6):在autodl上,使用xinference部署yi-vl-chat和qwen-vl-chat模型,可以使用openai调用成功](https://www.bilibili.com/video/BV19Z421z7cv/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-06 | [fly-iot](https://gitee.com/fly-iot) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P1](https://www.bilibili.com/video/BV1tU421o7Co/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-05 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [2080Ti部署YI-34B大模型 xinference-oneapi-fastGPT本地知识库使用指南](https://www.bilibili.com/video/BV1hC411z7xu/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-30 | [小饭护法要转码](https://space.bilibili.com/39486865?spm_id_from=333.788.0.0) | | [Best Story Writing AI Model - Install Yi 6B 200K Locally on Windows](https://www.youtube.com/watch?v=cZs2jRtl0bs) | 2024-01-22 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Mac 本地运行大语言模型方法与常见问题指南(Yi 34B 模型+32 GB 内存测试)](https://www.bilibili.com/video/BV1VT4y1b7Th/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [小吴苹果机器人](https://space.bilibili.com/1732749682?spm_id_from=333.788.0.0) | | [【Dify知识库】(11):Dify0.4.9改造支持MySQL,成功接入yi-6b 做对话,本地使用fastchat启动,占8G显存,完成知识库配置](https://www.bilibili.com/video/BV1ia4y1y7JH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [fly-iot](https://gitee.com/fly-iot) | | [这位LLM先生有点暴躁,用的是YI-6B的某个量化版,#LLM #大语言模型 #暴躁老哥](https://www.youtube.com/watch?v=eahXJrdtQuc) | 2024-01-20 | [晓漫吧](https://www.youtube.com/@xiaomanba) | | [大模型推理 NvLink 桥接器有用吗|双卡 A6000 
测试一下](https://www.bilibili.com/video/BV1AW4y1w7DC/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-17 | [漆妮妮](https://space.bilibili.com/1262370256) | | [大模型推理 A40 vs A6000 谁更强 - 对比 Yi-34B 的单、双卡推理性能](https://www.bilibili.com/video/BV1aK4y1z7GF/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-15 | [漆妮妮](https://space.bilibili.com/1262370256) | | [C-Eval 大语言模型评测基准- 用 LM Evaluation Harness + vLLM 跑起来](https://www.bilibili.com/video/BV1Yw411g7ZL/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-11 | [漆妮妮](https://space.bilibili.com/1262370256) | | [双显卡部署 Yi-34B 大模型 - vLLM + Gradio 踩坑记录](https://www.bilibili.com/video/BV1p94y1c7ak/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-01 | [漆妮妮](https://space.bilibili.com/1262370256) | | [手把手教学!使用 vLLM 快速部署 Yi-34B-Chat](https://www.bilibili.com/video/BV1ew41157Mk/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-26 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [如何训练企业自己的大语言模型?Yi-6B LORA微调演示 #小工蚁](https://www.bilibili.com/video/BV1uc41117zz/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-21 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [Yi-34B(4):使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words/s](https://www.bilibili.com/video/BV1nj41157L3/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-02 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,RTX 3090 * 3 显卡上运行, Yi-34B-Chat模型,显存占用60G](https://www.bilibili.com/video/BV1BM411R7ae/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,用vllm优化,增加 --num-gpu 2,速度23 words/s](https://www.bilibili.com/video/BV1Hu4y1L7BH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [Yi大模型一键本地部署 技术小白玩转AI](https://www.bilibili.com/video/BV16H4y117md/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [技术小白玩转AI](https://space.bilibili.com/3546586137234288?spm_id_from=333.788.0.0) | | [01.AI's Yi-6B: Overview and Fine-Tuning](https://www.youtube.com/watch?v=mye-UOkAliQ) | 2023-11-28 | [AI Makerspace](https://www.youtube.com/@AI-Makerspace) | | [Yi 34B Chat LLM outperforms Llama 70B](https://www.youtube.com/watch?v=RYtrF-R5jDc) | 2023-11-27 | [DLExplorer](https://www.youtube.com/@DLExplorers-lg7dt) | | [How to run open source models on mac Yi 34b on m3 Max](https://www.youtube.com/watch?v=GAo-dopkgjI) | 2023-11-26 | [TECHNO PREMIUM](https://www.youtube.com/@technopremium91) | | [Yi-34B - 200K - The BEST & NEW CONTEXT WINDOW KING ](https://www.youtube.com/watch?v=7WBojwwv5Qo) | 2023-11-24 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Yi 34B : The Rise of Powerful Mid-Sized Models - Base,200k & Chat](https://www.youtube.com/watch?v=bWCjwtu_tHs) | 2023-11-24 | [Sam Witteveen](https://www.youtube.com/@samwitteveenai) | | [在IoT设备运行破解版李开复大模型dolphin-2.2-yi-34b(还可作为私有OpenAI API服务器)](https://www.bilibili.com/video/BV1SQ4y18744/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-11-15 | [Second 
State](https://github.com/second-state) | | [Run dolphin-2.2-yi-34b on IoT Devices (Also works as a Private OpenAI API Server)](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-14 | [Second State](https://github.com/second-state) | | [How to Install Yi 34B 200K Llamafied on Windows Laptop](https://www.youtube.com/watch?v=enoha4K4HkQ) | 2023-11-11 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | </details> # Why Yi? - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) - [Yi-34B and Yi-34B-200K](#yi-34b-and-yi-34b-200k) - [Yi-9B](#yi-9b) ## Ecosystem Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity. - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) ### Upstream The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency. For example, the Yi series models are saved in the format of the Llama model. You can directly use `LlamaForCausalLM` and `LlamaTokenizer` to load the model. For more information, see [Use the chat model](#31-use-the-chat-model). ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34b", use_fast=False) model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34b", device_map="auto") ``` <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Downstream > 💡 Tip > > - Feel free to create a PR and share the fantastic work you've built using the Yi series models. > > - To help others quickly understand your work, it is recommended to use the format of `<model-name>: <model-intro> + <model-highlights>`. #### Serving If you want to get up with Yi in a few minutes, you can use the following services built upon Yi. - Yi-34B-Chat: you can chat with Yi using one of the following platforms: - [Yi-34B-Chat | Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Yi-34B-Chat | Yi Platform](https://platform.lingyiwanwu.com/): **Note** that currently it's available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)) and experience it firsthand! - [Yi-6B-Chat (Replicate)](https://replicate.com/01-ai): you can use this model with more options by setting additional parameters and calling APIs. - [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): you can use this service to run Yi models locally with added flexibility and customization. #### Quantization If you have limited computational capabilities, you can use Yi's quantized models as follows. These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage. 
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ) - [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF) - [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ) #### Fine-tuning If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below. - [TheBloke Models](https://huggingface.co/TheBloke): this site hosts numerous fine-tuned models derived from various LLMs including Yi. This is not an exhaustive list for Yi, but to name a few sorted on downloads: - [TheBloke/dolphin-2_2-yi-34b-AWQ](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ) - [TheBloke/Yi-34B-Chat-AWQ](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ) - [TheBloke/Yi-34B-Chat-GPTQ](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ) - [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). - [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama): this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the [OpenCompass LLM Leaderboard](https://opencompass.org.cn/leaderboard-llm). - [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B): this model is trained with 200K context length and 3 epochs on the Capybara dataset. #### API - [amazing-openai-api](https://github.com/soulteary/amazing-openai-api): this tool converts Yi model APIs into the OpenAI API format out of the box. - [LlamaEdge](https://www.secondstate.io/articles/yi-34b/#create-an-openai-compatible-api-service-for-the-yi-34b-chat-model): this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Tech report For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652). ### Citation ``` @misc{ai2024yi, title={Yi: Open Foundation Models by 01.AI}, author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai}, year={2024}, eprint={2403.04652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Benchmarks - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) ### Chat model performance Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more. ![Chat model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_chat.png?raw=true) <details> <summary> Evaluation methods and challenges. ⬇️ </summary> - **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA. - **Zero-shot vs. 
few-shot**: in chat models, the zero-shot approach is more commonly employed. - **Evaluation strategy**: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text. - **Challenges faced**: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results. <strong>*</strong>: C-Eval results are evaluated on the validation datasets </details> ### Base model performance #### Yi-34B and Yi-34B-200K The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more. ![Base model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_base.png?raw=true) <details> <summary> Evaluation methods. ⬇️</summary> - **Disparity in results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass. - **Investigation findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences. - **Uniform benchmarking process**: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content. - **Efforts to retrieve unreported scores**: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. - **Extensive model evaluation**: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. - **Special configurations**: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". - **Falcon-180B caveat**: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated. </details> #### Yi-9B Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. ![Yi-9B benchmark - details](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_details.png?raw=true) - In terms of **overall** ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - overall](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_overall.png?raw=true) - In terms of **coding** ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B. 
![Yi-9B benchmark - code](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_code.png?raw=true) - In terms of **math** ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B. ![Yi-9B benchmark - math](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_math.png?raw=true) - In terms of **common sense and reasoning** ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - text](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_text.png?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # Who can use Yi? Everyone! 🙌 ✅ The code and weights of the Yi series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE), which means the Yi series models are free for personal usage, academic purposes, and commercial use. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # Misc. ### Acknowledgments A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped make Yi not just a project, but a vibrant, growing home for innovation. [![yi contributors](https://contrib.rocks/image?repo=01-ai/yi&max=2000&columns=15)](https://github.com/01-ai/yi/graphs/contributors) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Disclaimer We use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### License The code and weights of the Yi-1.5 series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE). If you create derivative works based on this model, please include the following attribution in your derivative works: This work is a derivative of [The Yi Series Model You Base On] by 01.AI, used under the Apache 2.0 License. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>
mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF
mradermacher
"2024-06-23T18:43:48Z"
10,741
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "not-for-all-audiences", "nsfw", "rp", "roleplay", "role-play", "en", "base_model:Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL", "endpoints_compatible", "region:us" ]
null
"2024-06-23T18:15:20Z"
--- base_model: Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge - not-for-all-audiences - nsfw - rp - roleplay - role-play --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | 
[GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
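The usage note above defers to TheBloke's READMEs for how to run GGUF files. As a concrete, non-authoritative illustration, here is a minimal Python sketch that downloads one of the quants listed in the table and runs it with llama-cpp-python; the choice of quant, context size, GPU-offload setting, and prompt are assumptions, not part of the original card.

```python
# Minimal sketch (assumes `pip install huggingface_hub llama-cpp-python`).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one file from the table above; Q4_K_M is the "fast, recommended" pick.
model_path = hf_hub_download(
    repo_id="mradermacher/L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL-GGUF",
    filename="L3-Uncen-Merger-Omelette-RP-8B-EXPERIMENTAL.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)  # n_gpu_layers=-1 offloads all layers if a GPU is available
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]  # illustrative prompt
)
print(out["choices"][0]["message"]["content"])
```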
huggyllama/llama-13b
huggyllama
"2023-04-07T15:50:53Z"
10,734
137
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-03T23:37:51Z"
--- license: other --- This contains the weights for the LLaMA-13b model. This model is under a non-commercial license (see the LICENSE file). You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but have either lost your copy of the weights or run into trouble converting them to the Transformers format.
mradermacher/YamWizard28-7B-GGUF
mradermacher
"2024-06-22T17:30:11Z"
10,734
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "mistral", "en", "base_model:v000000/YamWizard28-7B", "endpoints_compatible", "region:us" ]
null
"2024-06-22T16:20:12Z"
--- base_model: v000000/YamWizard28-7B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge - mistral --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/v000000/YamWizard28-7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/YamWizard28-7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/YamWizard28-7B-GGUF/resolve/main/YamWizard28-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Yi-6B-i1-GGUF
mradermacher
"2024-06-26T19:45:47Z"
10,719
0
transformers
[ "transformers", "gguf", "en", "base_model:01-ai/Yi-6B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-26T16:45:05Z"
--- base_model: 01-ai/Yi-6B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/01-ai/Yi-6B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Yi-6B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-Q2_K.gguf) | i1-Q2_K | 2.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-IQ3_S.gguf) | i1-IQ3_S | 2.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-IQ3_M.gguf) | i1-IQ3_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-Q4_0.gguf) | i1-Q4_0 | 3.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 3.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-i1-GGUF/resolve/main/Yi-6B.i1-Q6_K.gguf) | i1-Q6_K | 5.1 | practically like static Q6_K | Here is a handy graph by 
ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
thodsapon/V2_test
thodsapon
"2024-07-01T16:07:06Z"
10,719
0
transformers
[ "transformers", "gguf", "llama", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
"2024-07-01T12:57:11Z"
typh1.5 8b, 4096, sample data from P'M (พี่เอ็ม) + schema, r128a24, epoch 1, learning rate 2e-4
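The note above is shorthand for a fine-tuning run; purely as an illustration of how those hyperparameters could be written down (reading "r128a24" as LoRA rank 128 / alpha 24, "epoch 1" and "learning rate 2e-4" as training settings, and "4096" as the sequence cutoff), a hedged PEFT sketch follows. The dropout, batch size, output directory, and everything else not in the note are assumptions, and this is not the author's actual training script.

```python
# Hypothetical reconstruction of the hyperparameters hinted at above — not the author's script.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=128,              # "r128" in the note (assumed to mean LoRA rank)
    lora_alpha=24,      # "a24" in the note (assumed to mean LoRA alpha)
    lora_dropout=0.05,  # assumption: not stated in the card
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="v2_test",           # placeholder
    num_train_epochs=1,             # "epoch 1"
    learning_rate=2e-4,             # "learning rate 2e-4"
    per_device_train_batch_size=2,  # assumption: not stated in the card
)
# The 4096 in the note would correspond to the tokenizer/trainer cutoff length.
```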
mrm8488/t5-base-finetuned-summarize-news
mrm8488
"2023-03-17T00:50:19Z"
10,713
34
transformers
[ "transformers", "pytorch", "jax", "safetensors", "t5", "text2text-generation", "news", "summary", "en", "arxiv:1910.10683", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2022-03-02T23:29:05Z"
--- language: en tags: - news - summary --- # T5-base fine-tuned for News Summarization 📖✏️🧾 All credits to [Abhishek Kumar Mishra](https://github.com/abhimishra91) [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) base fine-tuned on the [News Summary](https://www.kaggle.com/sunnysai12345/news-summary) dataset for the **summarization** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Summarization) - Dataset 📚 [News Summary](https://www.kaggle.com/sunnysai12345/news-summary) The dataset consists of **4515 examples** and contains Author_name, Headlines, Url of Article, Short text, Complete Article. I gathered the summarized news from Inshorts and only scraped the news articles from the Hindu, Indian Times, and Guardian. The time period ranges from February to August 2017. ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) created by [Abhishek Kumar Mishra](https://github.com/abhimishra91), so all credits to him! I also trained the model for more epochs (6). ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-summarize-news") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-summarize-news") def summarize(text, max_length=150): input_ids = tokenizer.encode(text, return_tensors="pt", add_special_tokens=True) generated_ids = model.generate(input_ids=input_ids, num_beams=2, max_length=max_length, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True) preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids] return preds[0] ``` Given the following article from **NYT** (2020/06/09) with title *George Floyd’s death energized a movement.
He will be buried in Houston today*: After the sound and the fury, weeks of demonstrations and anguished calls for racial justice, the man whose death gave rise to an international movement, and whose last words — “I can’t breathe” — have been a rallying cry, will be laid to rest on Tuesday at a private funeral in Houston.George Floyd, who was 46, will then be buried in a grave next to his mother’s.The service, scheduled to begin at 11 a.m. at the Fountain of Praise church, comes after five days of public memorials in Minneapolis, North Carolina and Houston and two weeks after a Minneapolis police officer was caught on video pressing his knee into Mr. Floyd’s neck for nearly nine minutes before Mr. Floyd died. That officer, Derek Chauvin, has been charged with second-degree murder and second-degree manslaughter. His bail was set at $1.25 million in a court appearance on Monday. The outpouring of anger and outrage after Mr. Floyd’s death — and the speed at which protests spread from tense, chaotic demonstrations in the city where he died to an international movement from Rome to Rio de Janeiro — has reflected the depth of frustration borne of years of watching black people die at the hands of the police or vigilantes while calls for change went unmet. ``` summarize('After the sound and the fury, weeks of demonstrations and anguished calls for racial justice, the man whose death gave rise to an international movement, and whose last words — “I can’t breathe” — have been a rallying cry, will be laid to rest on Tuesday at a private funeral in Houston.George Floyd, who was 46, will then be buried in a grave next to his mother’s.The service, scheduled to begin at 11 a.m. at the Fountain of Praise church, comes after five days of public memorials in Minneapolis, North Carolina and Houston and two weeks after a Minneapolis police officer was caught on video pressing his knee into Mr. Floyd’s neck for nearly nine minutes before Mr. Floyd died. That officer, Derek Chauvin, has been charged with second-degree murder and second-degree manslaughter. His bail was set at $1.25 million in a court appearance on Monday. The outpouring of anger and outrage after Mr. Floyd’s death — and the speed at which protests spread from tense, chaotic demonstrations in the city where he died to an international movement from Rome to Rio de Janeiro — has reflected the depth of frustration borne of years of watching black people die at the hands of the police or vigilantes while calls for change went unmet.', 80) ``` We would obtain: At a private funeral in Houston. Floyd, who was 46 years old when his death occurred, will be buried next to the grave of his mother. A Minnesota police officer was caught on video pressing his knee into Mr's neck for nearly nine minutes before his death. The officer has been charged with second-degree manslaughter and $1.2 million bail is set at > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mradermacher/OpenCAI-8B-V2-GGUF
mradermacher
"2024-06-20T13:03:36Z"
10,710
1
transformers
[ "transformers", "gguf", "art", "not-for-all-audiences", "en", "dataset:Norquinal/OpenCAI", "base_model:Norquinal/OpenCAI-8B-V2", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-19T12:47:38Z"
--- base_model: Norquinal/OpenCAI-8B-V2 datasets: Norquinal/OpenCAI language: en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - art - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Norquinal/OpenCAI-8B-V2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/OpenCAI-8B-V2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/OpenCAI-8B-V2-GGUF/resolve/main/OpenCAI-8B-V2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
hf-tiny-model-private/tiny-random-OwlViTModel
hf-tiny-model-private
"2023-03-29T19:16:14Z"
10,709
0
transformers
[ "transformers", "pytorch", "owlvit", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
"2023-03-29T19:16:09Z"
Entry not found
digiplay/Blazarot_blazaroshi
digiplay
"2024-04-07T21:07:58Z"
10,692
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-04-07T20:57:41Z"
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/255697?modelVersionId=389177
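The card only links to the Civitai page, but the repository is tagged as a Diffusers `StableDiffusionPipeline`, so a minimal, hedged loading sketch might look like the following; the prompt, dtype, and device are illustrative assumptions.

```python
# Minimal sketch: load the checkpoint through the StableDiffusionPipeline declared in the repo tags.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/Blazarot_blazaroshi",
    torch_dtype=torch.float16,  # assumption: fp16 on GPU; drop this argument for CPU
).to("cuda")

image = pipe("a cinematic photo of a lighthouse at sunset").images[0]  # illustrative prompt
image.save("blazarot_sample.png")
```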
fangyuan/hotpotqa_extractive_compressor
fangyuan
"2024-03-08T18:00:21Z"
10,686
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
"2024-03-08T17:06:43Z"
Entry not found
mradermacher/hyperchat_bilingual_v4-GGUF
mradermacher
"2024-07-01T12:09:08Z"
10,683
0
transformers
[ "transformers", "gguf", "en", "base_model:HFInternal/hyperchat_bilingual_v4", "endpoints_compatible", "region:us" ]
null
"2024-07-01T11:20:10Z"
--- base_model: HFInternal/hyperchat_bilingual_v4 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/HFInternal/hyperchat_bilingual_v4 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/hyperchat_bilingual_v4-GGUF/resolve/main/hyperchat_bilingual_v4.f16.gguf) | f16 | 15.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/L3-Inca-8B-v0.8-GGUF
mradermacher
"2024-06-23T02:46:35Z"
10,681
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Ppoyaa/L3-Inca-8B-v0.8", "endpoints_compatible", "region:us" ]
null
"2024-06-23T00:19:36Z"
--- base_model: Ppoyaa/L3-Inca-8B-v0.8 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Ppoyaa/L3-Inca-8B-v0.8 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF/resolve/main/L3-Inca-8B-v0.8.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
llamafactory/tiny-random-Llama-3
llamafactory
"2024-06-15T10:15:08Z"
10,669
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-07T17:30:09Z"
--- license: apache-2.0 library_name: transformers inference: false --- A tiny version of https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
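Since this checkpoint exists mainly for quick tests, a minimal, hedged smoke-test sketch is shown below; the prompt and generation length are assumptions, and the output is intentionally meaningless because the weights are random.

```python
# Minimal sketch: load the tiny random checkpoint and confirm generation runs end to end.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llamafactory/tiny-random-Llama-3")
model = AutoModelForCausalLM.from_pretrained("llamafactory/tiny-random-Llama-3")

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)  # tiny random weights -> gibberish, but the pipeline works
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```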
OpenGVLab/InternViT-6B-448px-V1-5
OpenGVLab
"2024-05-30T07:26:25Z"
10,665
41
transformers
[ "transformers", "safetensors", "intern_vit_6b", "feature-extraction", "image-feature-extraction", "custom_code", "dataset:laion/laion2B-en", "dataset:laion/laion-coco", "dataset:laion/laion2B-multi", "dataset:kakaobrain/coyo-700m", "dataset:conceptual_captions", "dataset:wanng/wukong100m", "arxiv:2312.14238", "arxiv:2404.16821", "license:mit", "region:us" ]
image-feature-extraction
"2024-04-17T09:11:24Z"
--- license: mit datasets: - laion/laion2B-en - laion/laion-coco - laion/laion2B-multi - kakaobrain/coyo-700m - conceptual_captions - wanng/wukong100m pipeline_tag: image-feature-extraction --- # Model Card for InternViT-6B-448px-V1-5 <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/AUE-3OBtfr9vDA7Elgkhd.webp" alt="Image Description" width="300" height="300"> </p> [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#model-usage) [\[🌐 Community-hosted API\]](https://rapidapi.com/adushar1320/api/internvl-chat) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/675877376) We develop InternViT-6B-448px-V1-5 based on the pre-training of the strong foundation of [InternViT-6B-448px-V1-2](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2). In this update, the resolution of training images is expanded from 448&times;448 to dynamic 448&times;448, where the basic tile size is 448&times;448 and the number of tiles ranges from 1 to 12. Additionally, we enhance the data scale, quality, and diversity of the pre-training dataset, resulting in the powerful robustness, OCR capability, and high-resolution processing capability of our 1.5 version model. ## Model Details - **Model Type:** vision foundation model, feature backbone - **Model Stats:** - Params (M): 5540 (the last 3 blocks are discarded) - Image size: 448 x 448, training with 1 - 12 tiles - **Pretrain Dataset:** LAION-en, LAION-zh, COYO, GRIT, COCO, TextCaps, Objects365, OpenImages, All-Seeing, Wukong-OCR, LaionCOCO-OCR, and other OCR-related datasets. To enhance the OCR capability of the model, we have incorporated additional OCR data alongside the general caption datasets. Specifically, we utilized PaddleOCR to perform Chinese OCR on images from Wukong and English OCR on images from LAION-COCO. - **Note:** InternViT-6B originally had 48 blocks, and we found that using the output after the fourth-to-last block worked best for MLLM. For ease of use and to save GPU memory, we simply discarded the last 3 blocks. Now, the model has only 45 blocks and the number of parameters has been reduced from 5.9B to 5.5B. 
Therefore, if you want to build a MLLM based on this model, **please make use of the features from the last layer.** ## Released Models ### Vision Foundation model | Model | Date | Download | Note | | ----------------------- | ---------- | ---------------------------------------------------------------------- | -------------------------------- | | InternViT-6B-448px-V1-5 | 2024.04.20 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | support dynamic resolution, super strong OCR (🔥new) | | InternViT-6B-448px-V1-2 | 2024.02.11 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2) | 448 resolution | | InternViT-6B-448px-V1-0 | 2024.01.30 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0) | 448 resolution | | InternViT-6B-224px | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-224px) | vision foundation model | | InternVL-14B-224px | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-14B-224px) | vision-language foundation model | ### Multimodal Large Language Model (MLLM) | Model | Date | Download | Note | | ----------------------- | ---------- | --------------------------------------------------------------------------- | ---------------------------------- | | InternVL-Chat-V1-5 | 2024.04.18 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5) | support 4K image; super strong OCR; Approaching the performance of GPT-4V and Gemini Pro on various benchmarks like MMMU, DocVQA, ChartQA, MathVista, etc. (🔥new)| | InternVL-Chat-V1-2-Plus | 2024.02.21 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2-Plus) | more SFT data and stronger | | InternVL-Chat-V1-2 | 2024.02.11 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2) | scaling up LLM to 34B | | InternVL-Chat-V1-1 | 2024.01.24 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-1) | support Chinese and stronger OCR | ## Model Usage (Image Embeddings) ```python import torch from PIL import Image from transformers import AutoModel, CLIPImageProcessor model = AutoModel.from_pretrained( 'OpenGVLab/InternViT-6B-448px-V1-5', torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, trust_remote_code=True).cuda().eval() image = Image.open('./examples/image1.jpg').convert('RGB') image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-5') pixel_values = image_processor(images=image, return_tensors='pt').pixel_values pixel_values = pixel_values.to(torch.bfloat16).cuda() outputs = model(pixel_values) ``` ## Citation If you find this project useful in your research, please consider citing: ```BibTeX @article{chen2023internvl, title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks}, author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng}, journal={arXiv preprint arXiv:2312.14238}, year={2023} } @article{chen2024far, title={How Far Are We to GPT-4V? 
Closing the Gap to Commercial Multimodal Models with Open-Source Suites}, author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others}, journal={arXiv preprint arXiv:2404.16821}, year={2024} } ``` ## Acknowledgement InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!
Yntec/FotoPhoto
Yntec
"2023-11-29T22:17:28Z"
10,648
4
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "Film", "artwork", "Real", "HDR photography", "photos", "Fenn", "Dunkindont", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-11-22T14:56:05Z"
--- license: creativeml-openrail-m language: - en tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - safetensors - diffusers - Film - artwork - Real - HDR photography - safetensors - photos - Fenn - Dunkindont inference: true --- # FotoPhoto A mix of Foto Assisted Diffusion and FennPhoto to bring my favorite things from both models together! Samples and prompts (scroll down to generate more examples in real time!*): ![We Start With The Bonus Samples!](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/WKFe9NLHxb6vhMPdOQGee.png) (Click for larger) Top left: young guy together with pretty ladies standing, he, photoreal, cute face, is on top of Closeup a of rocks on pile top of a next to the ocean moon. Top right: An intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, of fantasy by thomas kinkade Bottom left: a long pier, gloomy, cinematic, cold, landscape. chocolate Bottom right: young cowboy dad with pretty daughter ride wine, cute face, sunset, ocean ![Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/UcXrW_vd4rgrj4OOsxbez.png) (Click for larger) Top left: a lighthouse on top of a rocky outcropping with ships in the background. close up of pretty cute little Swedish girl Top right: city lights, reflections, water, shrimps Bottom left: vertical mountain peaks. movie still Bottom right: calm water in european city. veggies ![Many Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/nrAMvlyDVLR7Tle_GHua9.png) (Click for larger) Top left: spanakopita on a plate. green Top right: close up, berry cheescake on top of a cliff next to the ocean. Rainbow Bottom left: delicious plate of pepperoni pizza with pirate peppers Bottom right: anime, manga, digital art, trending on artstation, digital painting, a painting of a closeup of a beautiful cute girl standing behind a skyscraper bar ![Way Too Many Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/LHeydyzUXW1OioMRFQ_A3.png) Top left: digital painting, anime, trending on artstation close up of pretty cute asian girl, tattoos, centered, (messy bun), blue eyes, pale skin, behind trees, (high detailed skin:1.2), beach, Fujifilm XT3, (high detailed face:1.3) Top right: digital painting, trending on snow, of a lighthouse on top of a rocky outcropping with the ocean and mountains in the background Bottom left: Mystery village landscape with a blue portal to another dimension, concept art, low angle, high detail, warm lighting, volumetric, godrays, vivid, beautiful, Bottom right: (digital painting:1.3), cartoon, trending on artstation, close up of pretty cute Swedish girl, centered, (messy bun), blue eyes, pale skin, behind teal mountains, snow, (high detailed skin:1.2), film grain, Fujifilm XT3, (high detailed face:1.3) ![Even More Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/z8LKdrsz9FibZv_qq7Qwb.png) (Click for larger) Top left: Romanticism In Photography The Beauty Grandeur And behind trees Of Nature The Suggestion Of The Divine In The Light And Nature Photos Nature Photography Nature, wallpaper hd, stunning photorealistic painting, photoshop, divine night sky,1920x1080 Top right: studio medium of glacial Temple candid, detailed portrait, film, studio lighting, detailed iris, symmetrical circular eyes Bottom left: beach, city, romantic sillhouettes Bottom right: intricate alligators ship under 
a vast magical starry sky with eclipse, detailed, wallpaper, 1920x1080, hd, desktop background, vivid, Blue Night Star Dream Backdrop Original pages: https://civitai.com/models/153869/fenn-photo https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/ ![Going Crazy With The Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/F8uMQI7UzVKstdQZ3nMNK.png) (Click for larger) Top left: a pretty cute indian girl wearing an apron. sunset Top right: a PEACEFUL of a beautiful young girl with cleavage. Skirt Bottom left: astronaut girl walking with gorilla, centered, (messy bun), pale skin, behind glacial mountains, (high detailed skin:1.2), film grain, Fujifilm XT3, (high detailed face:1.3) Bottom right: a Cooking of a beautiful young cute girl ![Too Many Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/C7NxnOrB85rP-w0HM_ZJN.png) (Click for larger) Top left: healthy beet juice cherries smoothie Top right: full grill full of meat and artstation. fire Bottom left: magic sushi, behind the mountains Bottom right: chocolate popsicle surrounded by Shirley sprinkles ![Just Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/KjsWGsQgkr3cHRhHQyIgZ.png) (Click for larger) Top left: centered, (messy bun), pale skin, behind glacial mountains, a cute red, (high detailed skin:1.2), film grain, Fujifilm XT3, (high detailed face:1.3) Top right: close up pretty cute girl ballerina from the nutcracker dancing in a magical fantasy winter. ocean Bottom left: a pretty cute girl with long curly blonde hair, detailed face, holding her hand up, northern sky, walking by the ocean, blue sky, vast clouds Bottom right: a pretty cute girl with eyes closed, riding her bike down the city streets of japan, panda hour ![More Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/AASO215KR6R_I9pZFUkxv.png) Top left: ladies as close Catwoman and Harley Quinn from the 2004 movie. up, medieval in cool armor, action scene, in a wonderland land Top right: digital painting of a neoclassical painting with a golden sunset Bottom left: an amazing close up photo of a detailed Afrikaan porsche 911 on a curvy, asphalt road, mountain Bottom right: close up of two pretty cute young girls, indian wearing a red dress, centered, little sunset friend with long hair, behind busy street, (high detailed skin:1.2), film grain, Fujifilm XT3, (high detailed face:1.3) * - *Examples weren't really generated in real time, I already did this joke, but what if you missed the other time? # Recipe: - SuperMerger Weight sum Train Difference Use MBW 1,0,0,0,1,1,1,0,1,0,0,1,1,0,1,1,0,0,1,0,1,1,1,0,0,0 Model A: FennPhoto Model B: FotoAssistedDiffusion Output Model: FotoPhoto
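As a purely illustrative reading of the recipe above (and explicitly not the SuperMerger implementation — it ignores the Train Difference step, and the local file names are hypothetical), the 26 MBW values can be thought of as per-block interpolation weights between Model A and Model B: BASE, IN00–IN11, MID, OUT00–OUT11.

```python
# Illustrative sketch only: a per-block weighted sum of two SD1.x checkpoints driven by the MBW list.
from safetensors.torch import load_file, save_file

mbw = [1,0,0,0,1,1,1,0,1,0,0,1,1,0,1,1,0,0,1,0,1,1,1,0,0,0]  # block weights from the recipe

def block_index(key: str) -> int:
    # Crude key-to-block mapping for SD1.x UNets; anything unmatched is treated as BASE.
    if "model.diffusion_model.input_blocks." in key:
        return 1 + int(key.split("input_blocks.")[1].split(".")[0])    # IN00-IN11 -> 1..12
    if "model.diffusion_model.middle_block." in key:
        return 13                                                      # MID
    if "model.diffusion_model.output_blocks." in key:
        return 14 + int(key.split("output_blocks.")[1].split(".")[0])  # OUT00-OUT11 -> 14..25
    return 0                                                           # BASE

a = load_file("FennPhoto.safetensors")              # Model A (hypothetical local file name)
b = load_file("FotoAssistedDiffusion.safetensors")  # Model B (hypothetical local file name)

merged = {}
for key, tensor_a in a.items():
    alpha = mbw[block_index(key)]
    merged[key] = tensor_a if key not in b else (1 - alpha) * tensor_a + alpha * b[key]

save_file(merged, "FotoPhoto_sketch.safetensors")
```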
Hello-SimpleAI/chatgpt-detector-roberta
Hello-SimpleAI
"2023-01-19T11:03:04Z"
10,633
46
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "chatgpt", "en", "dataset:Hello-SimpleAI/HC3", "arxiv:2301.07597", "doi:10.57967/hf/1203", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-01-18T16:38:53Z"
--- datasets: - Hello-SimpleAI/HC3 language: - en pipeline_tag: text-classification tags: - chatgpt --- # Model Card for `Hello-SimpleAI/chatgpt-detector-roberta` This model is trained on **a mix of full-text and split sentences** of `answer`s from [Hello-SimpleAI/HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3). For more details, refer to [arxiv: 2301.07597](https://arxiv.org/abs/2301.07597) and the GitHub project [Hello-SimpleAI/chatgpt-comparison-detection](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection). The base checkpoint is [roberta-base](https://huggingface.co/roberta-base). We train it with all [Hello-SimpleAI/HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) data (without held-out) for 1 epoch. (1 epoch is consistent with the experiments in [our paper](https://arxiv.org/abs/2301.07597).) ## Citation Check out this paper [arxiv: 2301.07597](https://arxiv.org/abs/2301.07597) ``` @article{guo-etal-2023-hc3, title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection", author = "Guo, Biyang and Zhang, Xin and Wang, Ziyuan and Jiang, Minqi and Nie, Jinran and Ding, Yuxuan and Yue, Jianwei and Wu, Yupeng", journal = "arXiv preprint arXiv:2301.07597", year = "2023", } ```
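The card does not include a usage snippet; since the repository exposes a standard text-classification head, a minimal sketch with the `transformers` pipeline could look like the following. The input sentence is made up, and the returned label names come from the model's own config.

```python
from transformers import pipeline

# Load the detector as a plain text-classification pipeline
detector = pipeline("text-classification", model="Hello-SimpleAI/chatgpt-detector-roberta")

# Returns the predicted label and a confidence score for the given text
print(detector("I wrote this answer myself, based on my own experience with the library."))
```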
RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf
RichardErkhov
"2024-06-25T16:55:10Z"
10,633
0
null
[ "gguf", "region:us" ]
null
"2024-06-25T12:38:18Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3-8B-Instruct-v0.8 - GGUF - Model creator: https://huggingface.co/MaziyarPanahi/ - Original model: https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.8/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-3-8B-Instruct-v0.8.Q2_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q2_K.gguf) | Q2_K | 2.96GB | | [Llama-3-8B-Instruct-v0.8.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Llama-3-8B-Instruct-v0.8.IQ3_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Llama-3-8B-Instruct-v0.8.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Llama-3-8B-Instruct-v0.8.IQ3_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Llama-3-8B-Instruct-v0.8.Q3_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q3_K.gguf) | Q3_K | 3.74GB | | [Llama-3-8B-Instruct-v0.8.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Llama-3-8B-Instruct-v0.8.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Llama-3-8B-Instruct-v0.8.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Llama-3-8B-Instruct-v0.8.Q4_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q4_0.gguf) | Q4_0 | 4.34GB | | [Llama-3-8B-Instruct-v0.8.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Llama-3-8B-Instruct-v0.8.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Llama-3-8B-Instruct-v0.8.Q4_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q4_K.gguf) | Q4_K | 4.58GB | | [Llama-3-8B-Instruct-v0.8.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Llama-3-8B-Instruct-v0.8.Q4_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q4_1.gguf) | Q4_1 | 4.78GB | | [Llama-3-8B-Instruct-v0.8.Q5_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q5_0.gguf) | Q5_0 | 5.21GB | | 
[Llama-3-8B-Instruct-v0.8.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Llama-3-8B-Instruct-v0.8.Q5_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q5_K.gguf) | Q5_K | 5.34GB | | [Llama-3-8B-Instruct-v0.8.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Llama-3-8B-Instruct-v0.8.Q5_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q5_1.gguf) | Q5_1 | 5.65GB | | [Llama-3-8B-Instruct-v0.8.Q6_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q6_K.gguf) | Q6_K | 6.14GB | | [Llama-3-8B-Instruct-v0.8.Q8_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf/blob/main/Llama-3-8B-Instruct-v0.8.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- language: - en license: other library_name: transformers tags: - axolotl - finetune - facebook - meta - pytorch - llama - llama-3 base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.4 model_name: Llama-3-8B-Instruct-v0.8 pipeline_tag: text-generation license_name: llama3 license_link: LICENSE inference: false model_creator: MaziyarPanahi quantized_by: MaziyarPanahi model-index: - name: Llama-3-8B-Instruct-v0.8 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.67 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-8B-Instruct-v0.8 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.77 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-8B-Instruct-v0.8 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 68.3 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-8B-Instruct-v0.8 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 63.9 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-8B-Instruct-v0.8 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.08 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-8B-Instruct-v0.8 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 
68.46 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-8B-Instruct-v0.8 name: Open LLM Leaderboard --- <img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Llama-3-8B-Instruct-v0.8 This model was developed based on `MaziyarPanahi/Llama-3-8B-Instruct-v0.4` model. # ⚡ Quantized GGUF All GGUF models are available here: [MaziyarPanahi/Llama-3-8B-Instruct-v0.8-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.8-GGUF) # 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__Llama-3-8B-Instruct-v0.8) | Metric |Value| |---------------------------------|----:| |Avg. |73.20| |AI2 Reasoning Challenge (25-Shot)|71.67| |HellaSwag (10-Shot) |87.77| |MMLU (5-Shot) |68.30| |TruthfulQA (0-shot) |63.90| |Winogrande (5-shot) |79.08| |GSM8k (5-shot) |68.46| `MaziyarPanahi/Llama-3-8B-Instruct-v0.8` is the 5th best-performing 8B model on the Open LLM Leaderboard. (03/06/2024). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fd5e18a90b6dc4633f6d292/ExIVXtyzYIYgilY_MxAPY.png) # Prompt Template This model uses `ChatML` prompt template: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ```` # How to use You can use this model by using `MaziyarPanahi/Llama-3-8B-Instruct-v0.8` as the model name in Hugging Face's transformers library. ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer from transformers import pipeline import torch model_id = "MaziyarPanahi/Llama-3-8B-Instruct-v0.8" model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True, # attn_implementation="flash_attention_2" ) tokenizer = AutoTokenizer.from_pretrained( model_id, trust_remote_code=True ) streamer = TextStreamer(tokenizer) pipeline = pipeline( "text-generation", model=model, tokenizer=tokenizer, model_kwargs={"torch_dtype": torch.bfloat16}, streamer=streamer ) # Then you can use the pipeline to generate text. messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=512, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.95, ) print(outputs[0]["generated_text"][len(prompt):]) ```
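The transformers example above targets the original fp16 weights; for the GGUF files quantized in this repo, a minimal `llama-cpp-python` sketch might look like the following. The filename is taken from the table above, while the context size and GPU offload settings are assumptions to adjust for your hardware.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed above (Q4_K_M as a middle-ground choice)
model_path = hf_hub_download(
    repo_id="RichardErkhov/MaziyarPanahi_-_Llama-3-8B-Instruct-v0.8-gguf",
    filename="Llama-3-8B-Instruct-v0.8.Q4_K_M.gguf",
)

# Recent llama-cpp-python builds read the chat template from the GGUF metadata
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```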
facebook/deit-base-patch16-384
facebook
"2022-07-13T11:41:03Z"
10,632
1
transformers
[ "transformers", "pytorch", "tf", "vit", "image-classification", "dataset:imagenet-1k", "arxiv:2012.12877", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - image-classification datasets: - imagenet-1k --- # Data-efficient Image Transformer (base-sized model) Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 384x384. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This model is actually a more efficiently trained Vision Transformer (ViT). The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained at resolution 224 and fine-tuned at resolution 384 on a large collection of images in a supervised fashion, namely ImageNet-1k. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-patch16-384') model = ViTForImageClassification.from_pretrained('facebook/deit-base-patch16-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. ## Training data The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. 
## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78). At inference time, images are resized/rescaled to the same resolution (438x438), center-cropped at 384x384 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Pre-training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------| | DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 | | DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 | | DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 | | DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 | | DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 | | DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 | | **DeiT-base 384** | **82.9** | **96.2** | **87M** | **https://huggingface.co/facebook/deit-base-patch16-384** | | DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 | Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{touvron2021training, title={Training data-efficient image transformers & distillation through attention}, author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou}, year={2021}, eprint={2012.12877}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
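For reference, the inference-time preprocessing described under "Training procedure" above corresponds roughly to the torchvision transform sketched below. `AutoFeatureExtractor`/`DeiTFeatureExtractor` already applies these steps for you, so this is only illustrative; the mean/std values are the standard ImageNet statistics.

```python
from torchvision import transforms

# Resize to 438x438, center-crop to 384x384, then normalize with ImageNet statistics
eval_transform = transforms.Compose([
    transforms.Resize((438, 438)),
    transforms.CenterCrop(384),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```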
TheBloke/Phind-CodeLlama-34B-v2-GGUF
TheBloke
"2023-09-27T12:46:32Z"
10,623
160
transformers
[ "transformers", "gguf", "llama", "code llama", "base_model:Phind/Phind-CodeLlama-34B-v2", "license:llama2", "model-index", "text-generation-inference", "region:us" ]
null
"2023-08-29T06:53:42Z"
--- license: llama2 tags: - code llama base_model: Phind/Phind-CodeLlama-34B-v2 inference: false model_creator: Phind model_type: llama prompt_template: '### System Prompt {system_message} ### User Message {prompt} ### Assistant ' quantized_by: TheBloke model-index: - name: Phind-CodeLlama-34B-v1 results: - task: type: text-generation dataset: name: HumanEval type: openai_humaneval metrics: - type: pass@1 value: 73.8% name: pass@1 verified: false --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # CodeLlama 34B v2 - GGUF - Model creator: [Phind](https://huggingface.co/Phind) - Original model: [CodeLlama 34B v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2) <!-- description start --> ## Description This repo contains GGUF format model files for [Phind's CodeLlama 34B v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF) * [Phind's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Phind ``` ### System Prompt {system_message} ### User Message {prompt} ### Assistant ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [phind-codellama-34b-v2.Q2_K.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q2_K.gguf) | Q2_K | 2 | 14.21 GB| 16.71 GB | smallest, significant quality loss - not recommended for most purposes | | [phind-codellama-34b-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB| 17.11 GB | very small, high quality loss | | [phind-codellama-34b-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB| 18.78 GB | very small, high quality loss | | [phind-codellama-34b-v2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB| 20.27 GB | small, substantial quality loss | | [phind-codellama-34b-v2.Q4_0.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q4_0.gguf) | Q4_0 | 4 | 19.05 GB| 21.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [phind-codellama-34b-v2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB| 21.65 GB | small, greater quality loss | | [phind-codellama-34b-v2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB| 22.72 GB | medium, balanced quality - recommended | | [phind-codellama-34b-v2.Q5_0.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q5_0.gguf) | Q5_0 | 5 | 23.24 GB| 25.74 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [phind-codellama-34b-v2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB| 25.74 GB | large, low quality loss - recommended | | [phind-codellama-34b-v2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB| 26.34 GB | large, very low quality loss - recommended | | [phind-codellama-34b-v2.Q6_K.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q6_K.gguf) | Q6_K | 6 | 27.68 GB| 30.18 GB | very large, extremely low quality loss | | [phind-codellama-34b-v2.Q8_0.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB| 38.36 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Phind-CodeLlama-34B-v2-GGUF and below it, a specific filename to download, such as: phind-codellama-34b-v2.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Phind-CodeLlama-34B-v2-GGUF phind-codellama-34b-v2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Phind-CodeLlama-34B-v2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Phind-CodeLlama-34B-v2-GGUF phind-codellama-34b-v2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m phind-codellama-34b-v2.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System Prompt\n{system_message}\n\n### User Message\n{prompt}\n\n### Assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
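### How to load this model from Python using llama-cpp-python

A roughly equivalent sketch with `llama-cpp-python` (the ctransformers walkthrough follows below). The filename comes from the Provided files table above; `n_ctx` and `n_gpu_layers` are assumptions to tune for your hardware.

```python
from llama_cpp import Llama

# Assumes the GGUF file has already been downloaded as shown above
llm = Llama(
    model_path="phind-codellama-34b-v2.Q4_K_M.gguf",
    n_ctx=4096,        # sequence length
    n_gpu_layers=35,   # set to 0 for CPU-only inference
)

# Build a prompt in the Phind format shown earlier in this README
prompt = (
    "### System Prompt\nYou are an intelligent programming assistant.\n\n"
    "### User Message\nImplement a linked list in C++\n\n"
    "### Assistant\n"
)

output = llm(prompt, max_tokens=512, temperature=0.1, stop=["### User Message"])
print(output["choices"][0]["text"])
```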
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Phind-CodeLlama-34B-v2-GGUF", model_file="phind-codellama-34b-v2.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Phind's CodeLlama 34B v2 # **Phind-CodeLlama-34B-v2** We've fine-tuned Phind-CodeLlama-34B-v1 on an additional 1.5B tokens high-quality programming-related data, achieving **73.8% pass@1** on HumanEval. It's the current state-of-the-art amongst open-source models. Furthermore, this model is **instruction-tuned** on the Alpaca/Vicuna format to be steerable and easy-to-use. More details can be found on our [blog post](https://www.phind.com/blog/code-llama-beats-gpt4). ## Model Details This model is fine-tuned from Phind-CodeLlama-34B-v1 and achieves **73.8% pass@1** on HumanEval. Phind-CodeLlama-34B-v2 is **multi-lingual** and is proficient in Python, C/C++, TypeScript, Java, and more. ## Dataset Details We fined-tuned on a proprietary dataset of 1.5B tokens of high quality programming problems and solutions. This dataset consists of instruction-answer pairs instead of code completion examples, making it structurally different from HumanEval. LoRA was not used -- both models are a native finetune. We used DeepSpeed ZeRO 3 and Flash Attention 2 to train these models in 15 hours on 32 A100-80GB GPUs. We used a sequence length of 4096 tokens. ## How to Get Started with the Model Make sure to install Transformers from the main git branch: ```bash pip install git+https://github.com/huggingface/transformers.git ``` ## How to Prompt the Model This model accepts the Alpaca/Vicuna instruction format. For example: ``` ### System Prompt You are an intelligent programming assistant. ### User Message Implement a linked list in C++ ### Assistant ... 
``` ## How to reproduce HumanEval Results To reproduce our results: ```python from transformers import AutoTokenizer, LlamaForCausalLM from human_eval.data import write_jsonl, read_problems from tqdm import tqdm # initialize the model model_path = "Phind/Phind-CodeLlama-34B-v2" model = LlamaForCausalLM.from_pretrained(model_path, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_path) # HumanEval helper def generate_one_completion(prompt: str): tokenizer.pad_token = tokenizer.eos_token inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096) # Generate generate_ids = model.generate(inputs.input_ids.to("cuda"), max_new_tokens=384, do_sample=True, top_p=0.75, top_k=40, temperature=0.1) completion = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] completion = completion.replace(prompt, "").split("\n\n\n")[0] return completion # perform HumanEval problems = read_problems() num_samples_per_task = 1 samples = [ dict(task_id=task_id, completion=generate_one_completion(problems[task_id]["prompt"])) for task_id in tqdm(problems) for _ in range(num_samples_per_task) ] write_jsonl("samples.jsonl", samples) # run `evaluate_functional_correctness samples.jsonl` in your HumanEval code sandbox ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> This model has undergone very limited testing. Additional safety testing should be performed before any real-world deployments. ## Training details <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> - **Hardware Type:** 32x A100-80GB - **Hours used:** 480 GPU-hours - **Cloud Provider:** AWS - **Compute Region:** us-east-1 <!-- original-model-card end -->
cahya/bert-base-indonesian-NER
cahya
"2023-11-03T16:02:19Z"
10,620
10
transformers
[ "transformers", "pytorch", "jax", "bert", "token-classification", "id", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- license: mit language: - id pipeline_tag: token-classification ---
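The card itself only carries metadata, so here is a minimal, hedged usage sketch with the `transformers` pipeline. The example sentence is made up ("Joko Widodo visited Jakarta on Monday."), and the entity label set comes from the model's own config.

```python
from transformers import pipeline

# Indonesian named-entity recognition with grouped (word-level) entities
ner = pipeline(
    "token-classification",
    model="cahya/bert-base-indonesian-NER",
    aggregation_strategy="simple",
)

print(ner("Joko Widodo mengunjungi Jakarta pada hari Senin."))
```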
unsloth/tinyllama-chat-bnb-4bit
unsloth
"2024-03-22T15:18:57Z"
10,620
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "tinyllama", "bnb", "chat", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-02-14T15:15:09Z"
--- language: - en license: apache-2.0 library_name: transformers tags: - unsloth - transformers - tinyllama - bnb - chat --- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth! [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less | | **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
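Outside the Colab notebooks linked above, loading this pre-quantised 4-bit checkpoint locally with Unsloth might look like the sketch below. The sequence length and LoRA hyperparameters are illustrative assumptions, not values taken from this card.

```python
from unsloth import FastLanguageModel

# Load the pre-quantised TinyLlama chat checkpoint in 4-bit
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-chat-bnb-4bit",
    max_seq_length=2048,   # assumption: match your finetuning data
    dtype=None,            # auto-detect
    load_in_4bit=True,
)

# Attach LoRA adapters for finetuning (hyperparameters are illustrative)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
```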
mradermacher/Code-Llama-Bagel-8B-i1-GGUF
mradermacher
"2024-06-22T10:31:40Z"
10,620
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "theprint/Code-Llama-Bagel-8B", "ajibawa-2023/Code-Llama-3-8B", "jondurbin/bagel-8b-v1.0", "en", "base_model:theprint/Code-Llama-Bagel-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-22T04:51:56Z"
--- base_model: theprint/Code-Llama-Bagel-8B language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - theprint/Code-Llama-Bagel-8B - ajibawa-2023/Code-Llama-3-8B - jondurbin/bagel-8b-v1.0 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/theprint/Code-Llama-Bagel-8B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-Bagel-8B-i1-GGUF/resolve/main/Code-Llama-Bagel-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
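The Usage note above defers to TheBloke's READMEs for general GGUF handling; as a concrete, hedged sketch, one of the single-file quants in the table can be fetched and run from Python like this (quant choice, context size and prompt are assumptions):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the "fast, recommended" i1-Q4_K_M quant listed above
path = hf_hub_download(
    repo_id="mradermacher/Code-Llama-Bagel-8B-i1-GGUF",
    filename="Code-Llama-Bagel-8B.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)  # context size is an assumption
print(llm("def quicksort(arr):", max_tokens=128)["choices"][0]["text"])
```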
google/tapas-tiny-finetuned-wtq
google
"2021-11-29T10:45:11Z"
10,613
1
transformers
[ "transformers", "pytorch", "tf", "tapas", "table-question-answering", "en", "dataset:wtq", "arxiv:2004.02349", "arxiv:2010.00571", "arxiv:1508.00305", "license:apache-2.0", "endpoints_compatible", "region:us" ]
table-question-answering
"2022-03-02T23:29:05Z"
--- language: en tags: - tapas - table-question-answering license: apache-2.0 datasets: - wtq --- # TAPAS tiny model fine-tuned on WikiTable Questions (WTQ) This model has 2 versions which can be used. The default version corresponds to the `tapas_wtq_wikisql_sqa_inter_masklm_tiny_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions). It uses relative position embeddings (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is: - `no_reset`, which corresponds to `tapas_wtq_wikisql_sqa_inter_masklm_tiny` (intermediate pre-training, absolute position embeddings). Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Results Size | Reset | Dev Accuracy | Link -------- | --------| -------- | ---- LARGE | noreset | 0.5062 | [tapas-large-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/no_reset) LARGE | reset | 0.5097 | [tapas-large-finetuned-wtq](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/main) BASE | noreset | 0.4525 | [tapas-base-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/no_reset) BASE | reset | 0.4638 | [tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/main) MEDIUM | noreset | 0.4324 | [tapas-medium-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/no_reset) MEDIUM | reset | 0.4324 | [tapas-medium-finetuned-wtq](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/main) SMALL | noreset | 0.3681 | [tapas-small-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/no_reset) SMALL | reset | 0.3762 | [tapas-small-finetuned-wtq](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/main) MINI | noreset | 0.2783 | [tapas-mini-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/no_reset) MINI | reset | 0.2854 | [tapas-mini-finetuned-wtq](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/main) **TINY** | **noreset** | **0.0823** | [tapas-tiny-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/no_reset) **TINY** | **reset** | **0.1039** | [tapas-tiny-finetuned-wtq](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/main) ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. 
More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and an aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA, WikiSQL and finally WTQ. ## Intended uses & limitations You can use this model for answering questions related to a table. For code examples, we refer to the documentation of TAPAS on the HuggingFace website. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Question [SEP] Flattened table [SEP] ``` The authors first converted the WTQ dataset into the SQA format using automatic conversion scripts. ### Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512. In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 1.93581e-5, and a warmup ratio of 0.128960. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the `select_one_column` parameter of `TapasConfig`. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and 12).
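The card defers to the TAPAS documentation for code examples; a minimal sketch with the `table-question-answering` pipeline is shown below. The table and question are made-up examples, and older `transformers` versions additionally need the `torch-scatter` dependency for TAPAS.

```python
import pandas as pd
from transformers import pipeline

# Table QA with this WTQ-finetuned checkpoint
tqa = pipeline("table-question-answering", model="google/tapas-tiny-finetuned-wtq")

# All cell values are strings, as the pipeline expects
table = pd.DataFrame({
    "City": ["Paris", "Berlin", "Madrid"],
    "Inhabitants": ["2.1 million", "3.5 million", "3.3 million"],
})
print(tqa(table=table, query="How many inhabitants does Berlin have?"))
```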
### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @article{DBLP:journals/corr/PasupatL15, author = {Panupong Pasupat and Percy Liang}, title = {Compositional Semantic Parsing on Semi-Structured Tables}, journal = {CoRR}, volume = {abs/1508.00305}, year = {2015}, url = {http://arxiv.org/abs/1508.00305}, archivePrefix = {arXiv}, eprint = {1508.00305}, timestamp = {Mon, 13 Aug 2018 16:47:37 +0200}, biburl = {https://dblp.org/rec/journals/corr/PasupatL15.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
cross-encoder/ms-marco-TinyBERT-L-6
cross-encoder
"2021-08-05T08:40:06Z"
10,608
1
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-6') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-6') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-6', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF
mradermacher
"2024-06-25T12:28:19Z"
10,605
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "not-for-all-audiences", "nsfw", "rp", "roleplay", "role-play", "en", "base_model:Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-25T03:27:45Z"
--- base_model: Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - mergekit - merge - not-for-all-audiences - nsfw - rp - roleplay - role-play --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | 
[GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
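As a concrete starting point, here is a minimal sketch that loads one of the quants above with llama-cpp-python; this is only one of several GGUF-capable runtimes (llama.cpp, ollama, LM Studio, ...), and the local file path and settings below are illustrative assumptions.

```python
# Minimal llama-cpp-python sketch; assumes the Q4_K_M file above has already been downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-Uncen-Merger-Omelette-RP-v0.1.1-8B.Q4_K_M.gguf",  # path to the downloaded quant
    n_ctx=4096,  # context window; adjust to your RAM/VRAM
)

out = llm("Write a one-sentence greeting from a tavern keeper.", max_tokens=64)
print(out["choices"][0]["text"])
```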
flaubert/flaubert_small_cased
flaubert
"2024-05-14T12:38:09Z"
10,595
2
transformers
[ "transformers", "pytorch", "safetensors", "flaubert", "fill-mask", "bert", "language-model", "flue", "french", "flaubert-small", "cased", "fr", "dataset:flaubert", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
--- language: fr license: mit datasets: - flaubert metrics: - flue tags: - bert - language-model - flaubert - flue - french - flaubert-small - cased ---

# FlauBERT: Unsupervised Language Model Pre-training for French

**FlauBERT** is a French BERT trained on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/) supercomputer.

Along with FlauBERT comes [**FLUE**](https://github.com/getalp/Flaubert/tree/master/flue): an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language. For more details, please refer to the [official website](https://github.com/getalp/Flaubert).

## FlauBERT models

| Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters |
| :------: | :---: | :---: | :---: | :---: |
| `flaubert-small-cased` | 6 | 8 | 512 | 54 M |
| `flaubert-base-uncased` | 12 | 12 | 768 | 137 M |
| `flaubert-base-cased` | 12 | 12 | 768 | 138 M |
| `flaubert-large-cased` | 24 | 16 | 1024 | 373 M |

**Note:** `flaubert-small-cased` is partially trained, so performance is not guaranteed. Consider using it for debugging purposes only.

## Using FlauBERT with Hugging Face's Transformers

```python
import torch
from transformers import FlaubertModel, FlaubertTokenizer

# Choose among ['flaubert/flaubert_small_cased', 'flaubert/flaubert_base_uncased',
#               'flaubert/flaubert_base_cased', 'flaubert/flaubert_large_cased']
modelname = 'flaubert/flaubert_base_cased'

# Load pretrained model and tokenizer
flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True)
flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False)
# do_lowercase=False if using cased models, True if using uncased ones

sentence = "Le chat mange une pomme."
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)]) last_layer = flaubert(token_ids)[0] print(last_layer.shape) # torch.Size([1, 8, 768]) -> (batch size x number of tokens x embedding dimension) # The BERT [CLS] token correspond to the first hidden state of the last layer cls_embedding = last_layer[:, 0, :] ``` **Notes:** if your `transformers` version is <=2.10.0, `modelname` should take one of the following values: ``` ['flaubert-small-cased', 'flaubert-base-uncased', 'flaubert-base-cased', 'flaubert-large-cased'] ``` ## References If you use FlauBERT or the FLUE Benchmark for your scientific publication, or if you find the resources in this repository useful, please cite one of the following papers: [LREC paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.302.pdf) ``` @InProceedings{le2020flaubert, author = {Le, Hang and Vial, Lo\"{i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb\'{e}, Beno\^{i}t and Besacier, Laurent and Schwab, Didier}, title = {FlauBERT: Unsupervised Language Model Pre-training for French}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {2479--2490}, url = {https://www.aclweb.org/anthology/2020.lrec-1.302} } ``` [TALN paper](https://hal.archives-ouvertes.fr/hal-02784776/) ``` @inproceedings{le2020flaubert, title = {FlauBERT: des mod{\`e}les de langue contextualis{\'e}s pr{\'e}-entra{\^\i}n{\'e}s pour le fran{\c{c}}ais}, author = {Le, Hang and Vial, Lo{\"\i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb{\'e}, Beno{\^\i}t and Besacier, Laurent and Schwab, Didier}, booktitle = {Actes de la 6e conf{\'e}rence conjointe Journ{\'e}es d'{\'E}tudes sur la Parole (JEP, 31e {\'e}dition), Traitement Automatique des Langues Naturelles (TALN, 27e {\'e}dition), Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R{\'E}CITAL, 22e {\'e}dition). Volume 2: Traitement Automatique des Langues Naturelles}, pages = {268--278}, year = {2020}, organization = {ATALA} } ```
NousResearch/Nous-Hermes-Llama2-13b
NousResearch
"2024-04-23T23:18:53Z"
10,594
300
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "llama-2", "self-instruct", "distillation", "synthetic instruction", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-20T23:25:25Z"
--- language: - en tags: - llama-2 - self-instruct - distillation - synthetic instruction license: - mit ---

# Model Card: Nous-Hermes-Llama2-13b

Compute provided by our project sponsor Redmond AI, thank you! Follow RedmondAI on Twitter @RedmondAI.

## Model Description

Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine-tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.

This Hermes model uses the exact same dataset as Hermes on Llama-1. This keeps the new model consistent with the original Hermes, for anyone who wants a model similar to the old one, just more capable.

This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 4096 sequence length on an 8x A100 80GB DGX machine.

## Example Outputs:
![Example4](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/example5.png "Example 4")
![Example1](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/Example1.png "Example 1")
![Example2](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/example2.png "Example 2")
![Example3](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/example3.png "Example 3")

## Model Training

The model was trained almost entirely on synthetic GPT-4 outputs. Curating high-quality GPT-4 datasets enables incredibly high quality in knowledge, task completion, and style. This includes data from diverse sources such as GPTeacher (the general, roleplay v1&2, and code instruct datasets), Nous Instruct & PDACTL (unpublished), and several others, detailed further below.

## Collaborators

The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Emozilla, Huemin Art, and Redmond AI. Special mention goes to @winglian for assisting with some of the training issues.

A huge shoutout and acknowledgement are deserved for all the dataset creators who generously share their datasets openly. Among the contributors of datasets:

- GPTeacher was made available by Teknium
- Wizard LM by nlpxucan
- Nous Research Instruct Dataset was provided by Karan4D and HueminArt.
- GPT4-LLM and Unnatural Instructions were provided by Microsoft
- Airoboros dataset by jondurbin
- Camel-AI's domain expert datasets are from Camel-AI
- CodeAlpaca dataset by Sahil 2801.

If anyone was left out, please open a thread in the community tab.
## Prompt Format The model follows the Alpaca prompt format: ``` ### Instruction: <prompt> ### Response: <leave a newline blank for model to respond> ``` or ``` ### Instruction: <prompt> ### Input: <additional context> ### Response: <leave a newline blank for model to respond> ``` ## Benchmark Results AGI-Eval ``` | Task |Version| Metric |Value | |Stderr| |agieval_aqua_rat | 0|acc |0.2362|± |0.0267| | | |acc_norm|0.2480|± |0.0272| |agieval_logiqa_en | 0|acc |0.3425|± |0.0186| | | |acc_norm|0.3472|± |0.0187| |agieval_lsat_ar | 0|acc |0.2522|± |0.0287| | | |acc_norm|0.2087|± |0.0269| |agieval_lsat_lr | 0|acc |0.3510|± |0.0212| | | |acc_norm|0.3627|± |0.0213| |agieval_lsat_rc | 0|acc |0.4647|± |0.0305| | | |acc_norm|0.4424|± |0.0303| |agieval_sat_en | 0|acc |0.6602|± |0.0331| | | |acc_norm|0.6165|± |0.0340| |agieval_sat_en_without_passage| 0|acc |0.4320|± |0.0346| | | |acc_norm|0.4272|± |0.0345| |agieval_sat_math | 0|acc |0.2909|± |0.0307| | | |acc_norm|0.2727|± |0.0301| ``` GPT-4All Benchmark Set ``` | Task |Version| Metric |Value | |Stderr| |arc_challenge| 0|acc |0.5102|± |0.0146| | | |acc_norm|0.5213|± |0.0146| |arc_easy | 0|acc |0.7959|± |0.0083| | | |acc_norm|0.7567|± |0.0088| |boolq | 1|acc |0.8394|± |0.0064| |hellaswag | 0|acc |0.6164|± |0.0049| | | |acc_norm|0.8009|± |0.0040| |openbookqa | 0|acc |0.3580|± |0.0215| | | |acc_norm|0.4620|± |0.0223| |piqa | 0|acc |0.7992|± |0.0093| | | |acc_norm|0.8069|± |0.0092| |winogrande | 0|acc |0.7127|± |0.0127| ``` BigBench Reasoning Test ``` | Task |Version| Metric |Value | |Stderr| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5526|± |0.0362| |bigbench_date_understanding | 0|multiple_choice_grade|0.7344|± |0.0230| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.2636|± |0.0275| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.0195|± |0.0073| | | |exact_str_match |0.0000|± |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2760|± |0.0200| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2100|± |0.0154| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4400|± |0.0287| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.2440|± |0.0192| |bigbench_navigate | 0|multiple_choice_grade|0.4950|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5570|± |0.0111| |bigbench_ruin_names | 0|multiple_choice_grade|0.3728|± |0.0229| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1854|± |0.0123| |bigbench_snarks | 0|multiple_choice_grade|0.6298|± |0.0360| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6156|± |0.0155| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.3140|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2032|± |0.0114| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1406|± |0.0083| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4400|± |0.0287| ``` These are the highest benchmarks Hermes has seen on every metric, achieving the following average scores: - GPT4All benchmark average is now 70.0 - from 68.8 in Hermes-Llama1 - 0.3657 on BigBench, up from 0.328 on hermes-llama1 - 0.372 on AGIEval, up from 0.354 on Hermes-llama1 These benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, comparing to GPT4all's benchmarking list, supplanting Hermes 1 for the new top position. 
## Resources for Applied Use Cases:

Check out LM Studio for a nice ChatGPT-style interface here: https://lmstudio.ai/

For an example of a back-and-forth chatbot using Hugging Face Transformers and Discord, check out: https://github.com/teknium1/alpaca-discord

For an example of a role-playing Discord chatbot, check out: https://github.com/teknium1/alpaca-roleplay-discordbot

## Future Plans

We plan to keep iterating on both higher-quality data and new data-filtering techniques to eliminate lower-quality data going forward.

## Model Usage

The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions. A minimal generation sketch using the Alpaca prompt format is shown below.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
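The following is a minimal generation sketch with Hugging Face Transformers using the Alpaca prompt format described above. The instruction text, dtype and generation settings are illustrative; a 13B model in fp16 needs roughly 26 GB of GPU memory, so use quantization or offloading on smaller hardware.

```python
# Illustrative settings; device_map="auto" requires the accelerate package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Nous-Hermes-Llama2-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Build a prompt in the Alpaca format shown in the "Prompt Format" section.
prompt = "### Instruction:\nWrite a haiku about autumn rain.\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Strip the prompt tokens and decode only the generated continuation.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```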
mradermacher/Symbol-LLM-13B-Instruct-GGUF
mradermacher
"2024-06-23T20:56:37Z"
10,593
0
transformers
[ "transformers", "gguf", "en", "base_model:Symbol-LLM/Symbol-LLM-13B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T15:14:46Z"
--- base_model: Symbol-LLM/Symbol-LLM-13B-Instruct language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Symbol-LLM/Symbol-LLM-13B-Instruct <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.IQ3_XS.gguf) | IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.IQ3_M.gguf) | IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
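As an illustration, one way to fetch a single quant from this repo and run it locally is via `huggingface_hub` plus llama-cpp-python (a sketch; the chosen file and settings are examples, not recommendations):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo into the local Hugging Face cache.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Symbol-LLM-13B-Instruct-GGUF",
    filename="Symbol-LLM-13B-Instruct.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Explain in one sentence what a symbolic solver does.", max_tokens=64)
print(out["choices"][0]["text"])
```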
zhiqiulin/clip-flant5-xl
zhiqiulin
"2023-12-14T07:37:50Z"
10,590
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2023-12-13T07:48:50Z"
Entry not found
DTAI-KULeuven/robbert-2022-dutch-base
DTAI-KULeuven
"2023-11-29T10:55:44Z"
10,587
9
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "Dutch", "Flemish", "RoBERTa", "RobBERT", "nl", "dataset:oscar", "dataset:dbrd", "dataset:lassy-ud", "dataset:europarl-mono", "dataset:conll2002", "arxiv:2211.08192", "arxiv:2001.06286", "arxiv:1907.11692", "arxiv:2001.02943", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-08-15T09:48:36Z"
--- language: "nl" thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_2022_logo.png" tags: - Dutch - Flemish - RoBERTa - RobBERT license: mit datasets: - oscar - dbrd - lassy-ud - europarl-mono - conll2002 widget: - text: "Hallo, ik ben RobBERT-2022, het nieuwe <mask> taalmodel van de KU Leuven." --- <p align="center"> <img src="https://github.com/iPieter/RobBERT/raw/master/res/robbert_2022_logo_with_name.png" alt="RobBERT-2022: Updating a Dutch Language Model to Account for Evolving Language Use" width="75%"> </p> # RobBERT-2022: Updating a Dutch Language Model to Account for Evolving Language Use. RobBERT-2022 is the latest release of the [Dutch RobBERT model](https://pieter.ai/robbert/). It further pretrained the original [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) model on the 2022 version of the OSCAR version. Thanks to this more recent dataset, this [DTAI-KULeuven/robbert-2022-dutch-base](https://huggingface.co/DTAI-KULeuven/robbert-2022-dutch-base) model shows increased performance on several tasks related to recent events, e.g. COVID-19-related tasks. We also found that for some tasks that do not contain more recent information than 2019, the original [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) RobBERT model can still outperform this newer one. The original RobBERT model was released in January 2020. Dutch has evolved a lot since then, for example the COVID-19 pandemic introduced a wide range of new words that were suddenly used daily. Also, many other world facts that the original model considered true have also changed. To account for this and other changes in usage, we release a new Dutch BERT model trained on data from 2022: RobBERT 2022. More in-depth information about RobBERT-2022 can be found in our [blog post](https://pieter.ai/robbert-2022/), [our paper](http://arxiv.org/abs/2211.08192), [the original RobBERT paper](https://arxiv.org/abs/2001.06286) and [the RobBERT Github repository](https://github.com/iPieter/RobBERT). ## How to use RobBERT-2022 and RobBERT both use the [RoBERTa](https://arxiv.org/abs/1907.11692) architecture and pre-training but with a Dutch tokenizer and training data. RoBERTa is the robustly optimized English BERT model, making it even more powerful than the original BERT model. Given this same architecture, RobBERT can easily be finetuned and inferenced using [code to finetune RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html) models and most code used for BERT models, e.g. as provided by [HuggingFace Transformers](https://huggingface.co/transformers/) library. By default, RobBERT-2022 has the masked language model head used in training. This can be used as a zero-shot way to fill masks in sentences. It can be tested out for free on [RobBERT's Hosted infererence API of Huggingface](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=De+hoofdstad+van+Belgi%C3%AB+is+%3Cmask%3E.). You can also create a new prediction head for your own task by using any of HuggingFace's [RoBERTa-runners](https://huggingface.co/transformers/v2.7.0/examples.html#language-model-training), [their fine-tuning notebooks](https://huggingface.co/transformers/v4.1.1/notebooks.html) by changing the model name to `DTAI-KULeuven/robbert-2022-dutch-base`. 
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("DTAI-KULeuven/robbert-2022-dutch-base")
model = AutoModelForSequenceClassification.from_pretrained("DTAI-KULeuven/robbert-2022-dutch-base")
```

You can then use most of [HuggingFace's BERT-based notebooks](https://huggingface.co/transformers/v4.1.1/notebooks.html) for finetuning RobBERT-2022 on your type of Dutch language dataset.

## Comparison of Available Dutch BERT models

There is a wide variety of Dutch BERT-based models available for fine-tuning on your tasks. Here's a quick summary to find the one that suits your needs:

- [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base): The RobBERT model has for years been the best performing BERT-like model for most language tasks. It is trained on a large Dutch webcrawled dataset (OSCAR) and uses the superior [RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta) architecture, which robustly optimized the original [BERT model](https://huggingface.co/docs/transformers/model_doc/bert).
- [DTAI-KULeuven/robbertje-1-gb-merged](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-merged): The RobBERTje model is a distilled version of RobBERT, about half the size and four times faster to perform inference on. This can help deploy more scalable language models for your language task.
- [DTAI-KULeuven/robbert-2022-dutch-base](https://huggingface.co/DTAI-KULeuven/robbert-2022-dutch-base): RobBERT-2022 is a RobBERT model further pre-trained on the OSCAR2022 dataset. It is helpful for tasks that rely on words and/or information about more recent events.

There's also the [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) "BERTje" model. This model uses the outdated basic BERT model, and is trained on a smaller corpus of clean Dutch texts. Thanks to RobBERT's more recent architecture as well as its larger and more real-world-like training corpus, most researchers and practitioners seem to achieve higher performance on their language tasks with the RobBERT model.

## Technical Details From The Paper

### Our Performance Evaluation Results

All experiments are described in more detail in our [paper](https://arxiv.org/abs/2001.06286), with the code in [our GitHub repository](https://github.com/iPieter/RobBERT).

### Sentiment analysis

Predicting whether a review is positive or negative using the [Dutch Book Reviews Dataset](https://github.com/benjaminvdb/110kDBRD).

| Model | Accuracy [%] |
|-------------------|--------------------------|
| ULMFiT | 93.8 |
| BERTje | 93.0 |
| RobBERT v2 | 94.4 |
| RobBERT 2022 | **95.1** |

### Die/Dat (coreference resolution)

We measured how well the models are able to do coreference resolution by predicting whether "die" or "dat" should be filled into a sentence. For this, we used the [EuroParl corpus](https://www.statmt.org/europarl/).

#### Finetuning on whole dataset

| Model | Accuracy [%] | F1 [%] |
|-------------------|--------------------------|--------------|
| [Baseline](https://arxiv.org/abs/2001.02943) (LSTM) | | 75.03 |
| mBERT | 98.285 | 98.033 |
| BERTje | 98.268 | 98.014 |
| RobBERT v2 | **99.232** | **99.121** |
| RobBERT 2022 | 97.8 | |

#### Finetuning on 10K examples

We also measured the performance using only 10K training examples. This experiment clearly illustrates that RobBERT outperforms other models when there is little data available.
| Model | Accuracy [%] | F1 [%] | |-------------------|--------------------------|--------------| | mBERT | 92.157 | 90.898 | | BERTje | 93.096 | 91.279 | | RobBERT v2 | **97.816** | **97.514** | #### Using zero-shot word masking task Since BERT models are pre-trained using the word masking task, we can use this to predict whether "die" or "dat" is more likely. This experiment shows that RobBERT has internalised more information about Dutch than other models. | Model | Accuracy [%] | |-------------------|--------------------------| | ZeroR | 66.70 | | mBERT | 90.21 | | BERTje | 94.94 | | RobBERT v2 | **98.75** | ### Part-of-Speech Tagging. Using the [Lassy UD dataset](https://universaldependencies.org/treebanks/nl_lassysmall/index.html). | Model | Accuracy [%] | |-------------------|--------------------------| | Frog | 91.7 | | mBERT | **96.5** | | BERTje | 96.3 | | RobBERT v2 | 96.4 | | RobBERT 2022 | 96.1 | ## Credits and citation This project is created by [Pieter Delobelle](https://people.cs.kuleuven.be/~pieter.delobelle), [Thomas Winters](https://thomaswinters.be) and [Bettina Berendt](https://people.cs.kuleuven.be/~bettina.berendt/). If you would like to cite our paper or model, you can use the following BibTeX: ``` @inproceedings{delobelle2022robbert2022, doi = {10.48550/ARXIV.2211.08192}, url = {https://arxiv.org/abs/2211.08192}, author = {Delobelle, Pieter and Winters, Thomas and Berendt, Bettina}, keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {RobBERT-2022: Updating a Dutch Language Model to Account for Evolving Language Use}, venue = {arXiv}, year = {2022}, } @inproceedings{delobelle2020robbert, title = "{R}ob{BERT}: a {D}utch {R}o{BERT}a-based {L}anguage {M}odel", author = "Delobelle, Pieter and Winters, Thomas and Berendt, Bettina", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.292", doi = "10.18653/v1/2020.findings-emnlp.292", pages = "3255--3265" } ```
tk93/V-Express
tk93
"2024-06-15T12:36:06Z"
10,585
77
diffusers
[ "diffusers", "onnx", "text-to-image", "stable-diffusion", "audio-to-video", "en", "arxiv:2406.02511", "license:apache-2.0", "region:us" ]
text-to-image
"2024-05-23T07:02:07Z"
--- tags: - text-to-image - stable-diffusion - audio-to-video license: apache-2.0 language: - en library_name: diffusers ---

# V-Express Model Card

<div align="center">

[**Project Page**](https://tenvence.github.io/p/v-express/) **|** [**Paper**](https://arxiv.org/abs/2406.02511) **|** [**Code**](https://github.com/tencent-ailab/V-Express)

</div>

---

## Introduction

## Models

### Audio Encoder

- [model_ckpts/wav2vec2-base-960h](https://huggingface.co/tk93/V-Express/tree/main/model_ckpts/wav2vec2-base-960h). (It is also available from the original model card [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h))

### Face Analysis

- [model_ckpts/insightface_models/models/buffalo_l](https://huggingface.co/tk93/V-Express/tree/main/model_ckpts/insightface_models/models/buffalo_l). (It is also available from the original repository [insightface/buffalo_l](https://github.com/deepinsight/insightface/releases/download/v0.7/buffalo_l.zip))

### V-Express

- [model_ckpts/sd-vae-ft-mse](https://huggingface.co/tk93/V-Express/tree/main/model_ckpts/sd-vae-ft-mse). VAE encoder. (original model card [stabilityai/sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse))
- [model_ckpts/stable-diffusion-v1-5](https://huggingface.co/tk93/V-Express/tree/main/model_ckpts/stable-diffusion-v1-5). Only the model configuration file for the UNet is needed here. (original model card [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5))
- [model_ckpts/v-express](https://huggingface.co/tk93/V-Express/tree/main/model_ckpts/v-express). The video generation model, conditioned on audio and V-kps, that we call V-Express.
- You should download all the `.bin` model files and put them in the `model_ckpts/v-express` directory; these include `audio_projection.bin`, `denoising_unet.bin`, `motion_module.bin`, `reference_net.bin`, and `v_kps_guider.bin` (see the download sketch after this list).
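One way to fetch the checkpoints listed above in a single call is `snapshot_download` from `huggingface_hub` (a sketch; the target directory and pattern filter are illustrative):

```python
from huggingface_hub import snapshot_download

# Download the model_ckpts/ tree of this repo into the current working directory,
# preserving the folder layout expected above.
snapshot_download(
    repo_id="tk93/V-Express",
    local_dir=".",
    allow_patterns=["model_ckpts/*"],
)
```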
mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF
mradermacher
"2024-06-20T18:47:54Z"
10,576
0
transformers
[ "transformers", "gguf", "en", "base_model:yhzhang3/NeuralHermes-2.5-Mistral-7B", "endpoints_compatible", "region:us" ]
null
"2024-06-20T18:21:32Z"
--- base_model: yhzhang3/NeuralHermes-2.5-Mistral-7B language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/yhzhang3/NeuralHermes-2.5-Mistral-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/NeuralHermes-2.5-Mistral-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
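For a quick start, recent llama-cpp-python releases can pull a quant straight from the Hub; the helper below and its availability are assumptions about your installed version (it relies on `huggingface_hub` under the hood), and the chosen quant is only an example.

```python
from llama_cpp import Llama

# Download-and-load one of the quants above in a single step (assumes a recent
# llama-cpp-python build with Hugging Face Hub support installed).
llm = Llama.from_pretrained(
    repo_id="mradermacher/NeuralHermes-2.5-Mistral-7B-GGUF",
    filename="NeuralHermes-2.5-Mistral-7B.Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm("Summarize in one sentence what a GGUF quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```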