Dataset column schema (type and observed range per column):

| Column | Type | Range / Cardinality |
|:-------------|:---------------------|:--------------------|
| modelId | string | lengths 5–122 |
| author | string | lengths 2–42 |
| last_modified | unknown | |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string (categorical) | 245 values |
| tags | sequence | lengths 1–4.05k |
| pipeline_tag | string (categorical) | 48 values |
| createdAt | unknown | |
| card | string | lengths 1–901k |
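The columns above describe one record per Hugging Face model repository, with the card body stored as a single flattened string. As a minimal sketch of querying such a dump with the `datasets` library (the dataset ID below is a hypothetical placeholder, not the actual repository name):

```python
from datasets import load_dataset

# Hypothetical dataset ID -- substitute the real repository name of this dump.
ds = load_dataset("your-username/hf-model-cards-dump", split="train")

# Example query: GGUF-tagged text-generation repos with a non-trivial download count.
popular_gguf = ds.filter(
    lambda row: row["pipeline_tag"] == "text-generation"
    and "gguf" in (row["tags"] or [])
    and row["downloads"] > 1_000
)

for row in popular_gguf.select(range(min(5, len(popular_gguf)))):
    print(row["modelId"], row["downloads"], row["likes"])
```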
QuantFactory/deepseek-coder-6.7b-instruct-GGUF
QuantFactory
"2024-06-09T13:19:24Z"
2,915
0
null
[ "gguf", "text-generation", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "license:other", "region:us" ]
text-generation
"2024-06-09T05:35:48Z"
--- license: other license_name: deepseek license_link: LICENSE base_model: deepseek-ai/deepseek-coder-6.7b-instruct pipeline_tag: text-generation --- # QuantFactory/deepseek-coder-6.7b-instruct-GGUF This is a quantized version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) created using llama.cpp # Model Description <p align="center"> <img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true"> </p> <p align="center"><a href="https://www.deepseek.com/">[🏠 Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[WeChat(微信)]</a> </p> <hr> ### 1. Introduction of Deepseek Coder Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks. - **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages. - **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements. - **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks. - **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks. ### 2. Model Summary deepseek-coder-6.7b-instruct is a 6.7B parameter model initialized from deepseek-coder-6.7b-base and fine-tuned on 2B tokens of instruction data. - **Home Page:** [DeepSeek](https://deepseek.com/) - **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder) - **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/) ### 3. How to Use Here are some examples of how to use our model. #### Chat Model Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda() messages=[ { 'role': 'user', 'content': "write a quick sort algorithm in python."} ] inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device) # tokenizer.eos_token_id is the id of <|EOT|> token outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ### 4. License This code repository is licensed under the MIT License.
The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use. See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details. ### 5. Contact If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
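The inference snippet in the card above targets the original fp16 checkpoint; the files in this repository are GGUF quantizations, which are normally run through a llama.cpp-based runtime instead. Below is a minimal sketch using llama-cpp-python; the filename glob is an assumption, so check the repository's file list for the exact quant names.

```python
from llama_cpp import Llama

# The filename pattern is assumed -- verify the actual GGUF names in the repo.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/deepseek-coder-6.7b-instruct-GGUF",
    filename="*Q4_K_M.gguf",  # glob; downloads the matching quant via huggingface_hub
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a quick sort algorithm in Python."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```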
ielabgroup/vec2text_gtr-base-st_inversion
ielabgroup
"2024-01-28T23:35:41Z"
2,914
0
transformers
[ "transformers", "safetensors", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-01-28T03:46:53Z"
--- license: apache-2.0 --- To use the model, check the vec2text repo [https://github.com/jxmorris12/vec2text](https://github.com/jxmorris12/vec2text) # Example: ```python from sentence_transformers import SentenceTransformer import vec2text import transformers inversion_model = vec2text.models.InversionModel.from_pretrained( "ielabgroup/vec2text_gtr-base-st_inversion" ) model = vec2text.models.CorrectorEncoderModel.from_pretrained( "ielabgroup/vec2text_gtr-base-st_corrector" ) inversion_trainer = vec2text.trainers.InversionTrainer( model=inversion_model, train_dataset=None, eval_dataset=None, data_collator=transformers.DataCollatorForSeq2Seq( inversion_model.tokenizer, label_pad_token_id=-100, ), ) model.config.dispatch_batches = None corrector = vec2text.trainers.Corrector( model=model, inversion_trainer=inversion_trainer, args=None, data_collator=vec2text.collator.DataCollatorForCorrection( tokenizer=inversion_trainer.model.tokenizer ), ) model = SentenceTransformer('sentence-transformers/gtr-t5-base') embeddings = model.encode([ "Jack Morris is a PhD student at Cornell Tech in New York City", "It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity" ], convert_to_tensor=True,).to('mps') vec2text.invert_embeddings( embeddings=embeddings, corrector=corrector, num_steps=20, ) [' Jack Morris is a PhD student at Cornell Tech in New York', 'It was the best of times, it was the worst of times, it was the epoch of incredulity, it was age of'] ```
mradermacher/Falcon2-8B-Czech-GGUF
mradermacher
"2024-06-05T17:10:34Z"
2,913
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "cs", "base_model:ssmits/Falcon2-8B-Czech", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-05T16:41:15Z"
--- base_model: ssmits/Falcon2-8B-Czech language: - cs library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ssmits/Falcon2-8B-Czech <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.IQ3_XS.gguf) | IQ3_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.Q5_K_S.gguf) | Q5_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.Q5_K_M.gguf) | Q5_K_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.Q6_K.gguf) | Q6_K | 6.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.Q8_0.gguf) | Q8_0 | 8.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Czech-GGUF/resolve/main/Falcon2-8B-Czech.f16.gguf) | f16 | 16.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
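For readers who do not want to dig through TheBloke's READMEs, here is a minimal, hedged sketch of the usual single-file GGUF workflow with huggingface_hub and llama-cpp-python; the Q4_K_M filename is taken from the table above, and GPU offload only applies if llama.cpp was built with GPU support.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is one of the "fast, recommended" quants listed in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Falcon2-8B-Czech-GGUF",
    filename="Falcon2-8B-Czech.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048, n_gpu_layers=-1)  # -1 offloads all layers when a GPU build is available
print(llm("Dobrý den! ", max_tokens=64)["choices"][0]["text"])  # short Czech prompt for this Czech model
```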
oh-yeontaek/llama-2-13B-LoRA-assemble
oh-yeontaek
"2023-09-13T21:39:41Z"
2,912
8
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-13T20:53:27Z"
Entry not found
mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF
mradermacher
"2024-06-11T06:17:24Z"
2,912
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:TroyDoesAI/Mermaid-Flow-MoE-Expert2", "endpoints_compatible", "region:us" ]
null
"2024-06-10T20:27:38Z"
--- base_model: TroyDoesAI/Mermaid-Flow-MoE-Expert2 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/TroyDoesAI/Mermaid-Flow-MoE-Expert2 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-Q4_0.gguf) | i1-Q4_0 | 6.2 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.i1-Q6_K.gguf) | i1-Q6_K | 8.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf
RichardErkhov
"2024-06-29T14:25:42Z"
2,912
0
null
[ "gguf", "arxiv:1910.09700", "region:us" ]
null
"2024-06-29T13:57:10Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Cinder-1.3B-Test - GGUF - Model creator: https://huggingface.co/Josephgflowers/ - Original model: https://huggingface.co/Josephgflowers/Cinder-1.3B-Test/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Cinder-1.3B-Test.Q2_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q2_K.gguf) | Q2_K | 0.46GB | | [Cinder-1.3B-Test.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.IQ3_XS.gguf) | IQ3_XS | 0.51GB | | [Cinder-1.3B-Test.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.IQ3_S.gguf) | IQ3_S | 0.54GB | | [Cinder-1.3B-Test.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q3_K_S.gguf) | Q3_K_S | 0.54GB | | [Cinder-1.3B-Test.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.IQ3_M.gguf) | IQ3_M | 0.56GB | | [Cinder-1.3B-Test.Q3_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q3_K.gguf) | Q3_K | 0.59GB | | [Cinder-1.3B-Test.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q3_K_M.gguf) | Q3_K_M | 0.59GB | | [Cinder-1.3B-Test.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q3_K_L.gguf) | Q3_K_L | 0.64GB | | [Cinder-1.3B-Test.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.IQ4_XS.gguf) | IQ4_XS | 0.66GB | | [Cinder-1.3B-Test.Q4_0.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q4_0.gguf) | Q4_0 | 0.69GB | | [Cinder-1.3B-Test.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.IQ4_NL.gguf) | IQ4_NL | 0.69GB | | [Cinder-1.3B-Test.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q4_K_S.gguf) | Q4_K_S | 0.69GB | | [Cinder-1.3B-Test.Q4_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q4_K.gguf) | Q4_K | 0.72GB | | [Cinder-1.3B-Test.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q4_K_M.gguf) | Q4_K_M | 0.72GB | | [Cinder-1.3B-Test.Q4_1.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q4_1.gguf) | Q4_1 | 0.76GB | | [Cinder-1.3B-Test.Q5_0.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q5_0.gguf) | Q5_0 | 0.83GB | | [Cinder-1.3B-Test.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q5_K_S.gguf) | Q5_K_S | 0.83GB | | [Cinder-1.3B-Test.Q5_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q5_K.gguf) | Q5_K | 0.85GB | | [Cinder-1.3B-Test.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q5_K_M.gguf) | Q5_K_M | 
0.85GB | | [Cinder-1.3B-Test.Q5_1.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q5_1.gguf) | Q5_1 | 0.9GB | | [Cinder-1.3B-Test.Q6_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q6_K.gguf) | Q6_K | 0.98GB | | [Cinder-1.3B-Test.Q8_0.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Cinder-1.3B-Test-gguf/blob/main/Cinder-1.3B-Test.Q8_0.gguf) | Q8_0 | 1.26GB | Original model description: --- license: mit library_name: transformers base_model: Josephgflowers/TinyLlama-3T-Cinder-v1.2 pipeline_tag: text-generation widget: - text: '<|system|> You are a helpful assistant</s> <|user|> Tell me about yourself, what is your name?.</s> <|assistant|>' model-index: - name: Cinder-1.3B-Test results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 33.19 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Cinder-1.3B-Test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 55.48 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Cinder-1.3B-Test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 26.37 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Cinder-1.3B-Test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 36.62 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Cinder-1.3B-Test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 58.96 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Cinder-1.3B-Test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 2.81 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Cinder-1.3B-Test name: Open LLM Leaderboard --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. 
--> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__Cinder-1.3B-Test) | Metric |Value| |---------------------------------|----:| |Avg. |35.57| |AI2 Reasoning Challenge (25-Shot)|33.19| |HellaSwag (10-Shot) |55.48| |MMLU (5-Shot) |26.37| |TruthfulQA (0-shot) |36.62| |Winogrande (5-shot) |58.96| |GSM8k (5-shot) | 2.81|
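The original card's "How to Get Started" section is only a placeholder, so the following is a hedged sketch rather than the author's documented usage: it loads the unquantized base repository and uses the prompt format shown in the card's widget metadata (<|system|>/<|user|>/<|assistant|> with </s> separators).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Josephgflowers/Cinder-1.3B-Test"  # original model; this repo holds its GGUF quants
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt format copied from the widget text in the card metadata above.
prompt = (
    "<|system|>\nYou are a helpful assistant</s>\n"
    "<|user|>\nTell me about yourself, what is your name?</s>\n"
    "<|assistant|>\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```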
yeen214/test_llama2_7b
yeen214
"2023-10-01T02:38:15Z"
2,911
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-30T08:45:19Z"
Entry not found
TheBloke/orca_mini_v3_7B-GPTQ
TheBloke
"2023-09-27T12:45:35Z"
2,910
11
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:psmathur/orca_mini_v1_dataset", "dataset:ehartford/dolphin", "arxiv:2306.02707", "base_model:psmathur/orca_mini_v3_7b", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2023-08-10T10:12:19Z"
--- language: - en license: other library_name: transformers datasets: - psmathur/orca_mini_v1_dataset - ehartford/dolphin model_name: Orca Mini v3 7B base_model: psmathur/orca_mini_v3_7b inference: false model_creator: Pankaj Mathur model_type: llama pipeline_tag: text-generation prompt_template: '### System: You are an AI assistant that follows instruction extremely well. Help as much as you can. ### User: {prompt} ### Input: {input} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Orca Mini v3 7B - GPTQ - Model creator: [Pankaj Mathur](https://huggingface.co/psmathur) - Original model: [Orca Mini v3 7B](https://huggingface.co/psmathur/orca_mini_v3_7b) <!-- description start --> ## Description This repo contains GPTQ model files for [Pankaj Mathur's Orca Mini v3 7B](https://huggingface.co/psmathur/orca_mini_v3_7b). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/orca_mini_v3_7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/orca_mini_v3_7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_v3_7B-GGUF) * [Pankaj Mathur's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_v3_7b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: orca_mini ``` ### System: You are an AI assistant that follows instruction extremely well. Help as much as you can. ### User: {prompt} ### Input: {input} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. 
Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Pankaj Mathur's Orca Mini v3 7B](https://huggingface.co/psmathur/orca_mini_v3_7b). <!-- licensing end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/orca_mini_v3_7B-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, without Act Order and group size 128g. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/orca_mini_v3_7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/orca_mini_v3_7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. 
| | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/orca_mini_v3_7B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/orca_mini_v3_7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/orca_mini_v3_7B-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/orca_mini_v3_7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/orca_mini_v3_7B-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.31 GB | No | 8-bit, with group size 64g and Act Order for even higher inference quality. Poor AutoGPTQ CUDA speed. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download from branches - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/orca_mini_v3_7B-GPTQ:main` - With Git, you can clone a branch with: ``` git clone --single-branch --branch main https://huggingface.co/TheBloke/orca_mini_v3_7B-GPTQ ``` - In Python Transformers code, the branch is the `revision` parameter; see below. <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/orca_mini_v3_7B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/orca_mini_v3_7B-GPTQ:main` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `orca_mini_v3_7B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. 
These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers>=4.32.0 optimum>=1.12.0 pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ pip3 install . ``` ### For CodeLlama models only: you must use Transformers 4.33.0 or later. If 4.33.0 is not yet released when you read this, you will need to install Transformers from source: ```shell pip3 uninstall -y transformers pip3 install git+https://github.com/huggingface/transformers.git ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/orca_mini_v3_7B-GPTQ" # To use a different branch, change revision # For example: revision="main" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''### System: You are an AI assistant that follows instruction extremely well. Help as much as you can. ### User: {prompt} ### Input: {input} ### Response: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. 
If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Pankaj Mathur's Orca Mini v3 7B # orca_mini_v3_7b A Llama2-7b model trained on Orca Style datasets. <br> ![orca-mini](https://huggingface.co/psmathur/orca_mini_v3_7b/resolve/main/orca_minis_small.jpeg) <br> 🤔 How good is orca-mini-v3-7b? Do the evaluation results from the HuggingFace Open LLM leaderboard translate to real-world use cases? 🔍 Now you can figure it out for yourself! Introducing the orca-mini chatbot powered by the orca-mini-v3-7b model. Dive in and see how the open source 7b model stacks up in the world of massive language models. 🌍 ⏰ Hurry up before I run out of GPU credits! 😉 Check it out here 👉 [https://huggingface.co/spaces/psmathur/psmathur-orca_mini_v3_7b](https://huggingface.co/spaces/psmathur/psmathur-orca_mini_v3_7b) <br> **P.S. If you're interested in collaborating, please connect with me at www.linkedin.com/in/pankajam.** <br> ### quantized versions Big thanks to [@TheBloke](https://huggingface.co/TheBloke) 1) https://huggingface.co/TheBloke/orca_mini_v3_7B-GGML 2) https://huggingface.co/TheBloke/orca_mini_v3_7B-GPTQ <br> #### license disclaimer: This model is bound by the license & usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.
<br> ## evaluation We evaluated orca_mini_v3_7b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI. Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) ||||| |:------:|:--------:|:-------:|:--------:| |**Task**|**Metric**|**Value**|**Stderr**| |*arc_challenge*|acc_norm|0.5717|0.0145| |*hellaswag*|acc_norm|0.7966|0.0043| |*mmlu*|acc_norm|0.5234|0.035| |*truthfulqa_mc*|mc2|0.5029|0.0156| |**Total Average**|-|**0.59865**|| <br> ## example usage Here is the prompt format ``` ### System: You are an AI assistant that follows instruction extremely well. Help as much as you can. ### User: Tell me about Orcas. ### Assistant: ``` Below is a code example showing how to use this model ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline tokenizer = AutoTokenizer.from_pretrained("psmathur/orca_mini_v3_7b", use_fast=False) model = AutoModelForCausalLM.from_pretrained( "psmathur/orca_mini_v3_7b", torch_dtype=torch.float16, load_in_8bit=True, low_cpu_mem_usage=True, device_map="auto" ) system_prompt = "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n" #generate text steps instruction = "Tell me about Orcas." prompt = f"{system_prompt}### User: {instruction}\n\n### Assistant:\n" inputs = tokenizer(prompt, return_tensors="pt").to("cuda") output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` <br> #### limitations & biases: While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. <br> ### citation: Please kindly cite using the following BibTeX: ``` @misc{orca_mini_v3_7b, author = {Pankaj Mathur}, title = {orca_mini_v3_7b: An explain tuned Llama2-7b model}, year = {2023}, publisher = {GitHub, HuggingFace}, journal = {GitHub repository, HuggingFace repository}, howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v3_7b}}, } ``` ``` @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @software{touvron2023llama, title={LLaMA2: Open and Efficient Foundation Language Models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } ```
yeen214/llama2_7b_small_tuning_v1
yeen214
"2023-10-03T06:53:33Z"
2,910
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-02T06:48:50Z"
Entry not found
yeen214/test_llama2_ko_7b
yeen214
"2023-10-02T20:05:55Z"
2,909
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-02T03:55:19Z"
Entry not found
RichardErkhov/allenai_-_OLMo-1B-hf-gguf
RichardErkhov
"2024-06-22T19:03:55Z"
2,909
0
null
[ "gguf", "arxiv:2402.00838", "arxiv:2302.13971", "region:us" ]
null
"2024-06-22T18:47:45Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) OLMo-1B-hf - GGUF - Model creator: https://huggingface.co/allenai/ - Original model: https://huggingface.co/allenai/OLMo-1B-hf/ | Name | Quant method | Size | | ---- | ---- | ---- | | [OLMo-1B-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q2_K.gguf) | Q2_K | 0.45GB | | [OLMo-1B-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.IQ3_XS.gguf) | IQ3_XS | 0.49GB | | [OLMo-1B-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.IQ3_S.gguf) | IQ3_S | 0.51GB | | [OLMo-1B-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q3_K_S.gguf) | Q3_K_S | 0.51GB | | [OLMo-1B-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.IQ3_M.gguf) | IQ3_M | 0.53GB | | [OLMo-1B-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q3_K.gguf) | Q3_K | 0.56GB | | [OLMo-1B-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q3_K_M.gguf) | Q3_K_M | 0.56GB | | [OLMo-1B-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q3_K_L.gguf) | Q3_K_L | 0.61GB | | [OLMo-1B-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.IQ4_XS.gguf) | IQ4_XS | 0.62GB | | [OLMo-1B-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q4_0.gguf) | Q4_0 | 0.64GB | | [OLMo-1B-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.IQ4_NL.gguf) | IQ4_NL | 0.65GB | | [OLMo-1B-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q4_K_S.gguf) | Q4_K_S | 0.65GB | | [OLMo-1B-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q4_K.gguf) | Q4_K | 0.68GB | | [OLMo-1B-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q4_K_M.gguf) | Q4_K_M | 0.68GB | | [OLMo-1B-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q4_1.gguf) | Q4_1 | 0.71GB | | [OLMo-1B-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q5_0.gguf) | Q5_0 | 0.77GB | | [OLMo-1B-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q5_K_S.gguf) | Q5_K_S | 0.77GB | | [OLMo-1B-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q5_K.gguf) | Q5_K | 0.79GB | | [OLMo-1B-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q5_K_M.gguf) | Q5_K_M | 0.79GB | | [OLMo-1B-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q5_1.gguf) | Q5_1 | 0.83GB | | [OLMo-1B-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q6_K.gguf) | Q6_K | 0.9GB | | [OLMo-1B-hf.Q8_0.gguf](https://huggingface.co/RichardErkhov/allenai_-_OLMo-1B-hf-gguf/blob/main/OLMo-1B-hf.Q8_0.gguf) | Q8_0 | 1.17GB | Original model description: --- license: apache-2.0 datasets: - allenai/dolma 
language: - en --- <img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for OLMo 1B <!-- Provide a quick summary of what the model is/does. --> OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models. The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset. We release all code, checkpoints, logs (coming soon), and details involved in training these models. This model has been converted from [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) for the Hugging Face Transformers format. ## Model Details The core models released in this batch are the following: | Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length | |------|--------|---------|-------------|-----------------|----------------| | [OLMo 1B](https://huggingface.co/allenai/OLMo-1B-hf) | 3 Trillion |16 | 2048 | 16 | 2048 | | [OLMo 7B](https://huggingface.co/allenai/OLMo-7B-hf) | 2.5 Trillion | 32 | 4096 | 32 | 2048 | | [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T-hf) | 2 Trillion | 32 | 4096 | 32 | 2048 | We are releasing many checkpoints for these models, for every 1000 training steps. These have not yet been converted into Hugging Face Transformers format, but are available in [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B). ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Allen Institute for AI (AI2) - **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW - **Model type:** a Transformer style autoregressive language model. - **Language(s) (NLP):** English - **License:** The code and model are released under Apache 2.0. - **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org` - **Date cutoff:** Feb./March 2023 based on Dolma dataset version. ### Model Sources <!-- Provide the basic links for the model. --> - **Project Page:** https://allenai.org/olmo - **Repositories:** - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo - Evaluation code: https://github.com/allenai/OLMo-Eval - Further fine-tuning code: https://github.com/allenai/open-instruct - **Paper:** [Link](https://arxiv.org/abs/2402.00838) - **Technical blog post:** https://blog.allenai.org/olmo-open-language-model-87ccfc95f580 - **W&B Logs:** https://wandb.ai/ai2-llm/OLMo-1B/reports/OLMo-1B--Vmlldzo2NzY1Njk1 <!-- - **Press release:** TODO --> ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. 
--> ### Inference Quickly get inference running with the following: ```python from transformers import AutoModelForCausalLM, AutoTokenizer olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf") tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf") message = ["Language modeling is "] inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False) # optional verifying cuda # inputs = {k: v.to('cuda') for k,v in inputs.items()} # olmo = olmo.to('cuda') response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95) print(tokenizer.batch_decode(response, skip_special_tokens=True)[0]) >> 'Language modeling is the first step to build natural language generation...' ``` Alternatively, with the pipeline abstraction: ```python from transformers import pipeline olmo_pipe = pipeline("text-generation", model="allenai/OLMo-1B-hf") print(olmo_pipe("Language modeling is ")) >> 'Language modeling is a branch of natural language processing that aims to...' ``` Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`). The quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues. ### Fine-tuning This model does not directly support our fine-tuning processes. Model fine-tuning can be done from the final checkpoint or many intermediate checkpoints of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B). ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> Core model results for the 7B model are found below. | | [Llama 7B](https://arxiv.org/abs/2302.13971) | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | [MPT 7B](https://huggingface.co/mosaicml/mpt-7b) | **OLMo 7B** (ours) | | --------------------------------- | -------- | ---------- | --------- | ------ | ------- | | arc_challenge | 44.5 | 39.8 | 47.5 | 46.5 | 48.5 | | arc_easy | 57.0 | 57.7 | 70.4 | 70.5 | 65.4 | | boolq | 73.1 | 73.5 | 74.6 | 74.2 | 73.4 | | copa | 85.0 | 87.0 | 86.0 | 85.0 | 90 | | hellaswag | 74.5 | 74.5 | 75.9 | 77.6 | 76.4 | | openbookqa | 49.8 | 48.4 | 53.0 | 48.6 | 50.2 | | piqa | 76.3 | 76.4 | 78.5 | 77.3 | 78.4 | | sciq | 89.5 | 90.8 | 93.9 | 93.7 | 93.8 | | winogrande | 68.2 | 67.3 | 68.9 | 69.9 | 67.9 | | **Core tasks average** | 68.7 | 68.4 | 72.1 | 71.5 | 71.6 | | truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33 | 36.0 | | MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 | | GSM8k (mixed eval.) 
| 10.0 (8shot CoT) | 12.0 (8shot CoT) | 4.0 (5 shot) | 4.5 (5 shot) | 8.5 (8shot CoT) | | **Full average** | 57.8 | 59.3 | 59.2 | 59.3 | 59.8 | And for the 1B model: | task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | **OLMo 1B** (ours) | | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ | ----------------- | --------- | -------------------------------------- | ------- | | arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 | | arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 | | boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 | | copa | 50 | 84 | 72 | 78 | 79 | | hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 | | openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 | | piqa | 50 | 74 | 69.1 | 71.1 | 73.7 | | sciq | 25 | 94.7 | 86 | 90.5 | 88.1 | | winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 | | Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 | \*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging. ## Model Details ### Data For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation. ### Architecture OLMo 7B architecture with peer models for comparison. | | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B | |------------------------|-------------------|---------------------|--------------------|--------------------|------------------| | d_model | 4096 | 4096 | 4096 | 4544 | 4096 | | num heads | 32 | 32 | 32 | 71 | 16 | | num layers | 32 | 32 | 32 | 32 | 32 | | MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 | | LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN | | pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE | | attention variant | full | GQA | full | MQA | MQA | | biases | none | none | in LN only | in LN only | none | | block type | sequential | sequential | sequential | parallel | parallel | | activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU | | sequence length | 2048 | 4096 | 2048 | 2048 | 2048 | | batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 | | batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M | | weight tying | no | no | no | no | yes | ### Hyperparameters AdamW optimizer parameters are shown below. | Size | Peak LR | Betas | Epsilon | Weight Decay | |------|------------|-----------------|-------------|--------------| | 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 | | 7B | 3.0E-4 | (0.9, 0.99) | 1.0E-5 | 0.1 | Optimizer settings comparison with peer models. 
| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | |-----------------------|------------------|---------------------|--------------------|--------------------| | warmup steps | 5000 | 2000 | 2000 | 1000 | | peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 | | minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 | | weight decay | 0.1 | 0.1 | 0.1 | 0.1 | | beta1 | 0.9 | 0.9 | 0.9 | 0.99 | | beta2 | 0.95 | 0.95 | 0.95 | 0.999 | | epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 | | LR schedule | linear | cosine | cosine | cosine | | gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 | | gradient reduce dtype | FP32 | FP32 | FP32 | BF16 | | optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 | ## Environmental Impact OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML. A summary of the environmental impact. Further details are available in the paper. | | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg COโ‚‚e/KWh) | Carbon Emissions (tCOโ‚‚eq) | |-----------|------------|-----------------------------|--------------------------------|---------------------------| | OLMo 7B Twin | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu)) | 135 MWh | 0* | 0* | | OLMo 7B | A100-40GB ([MosaicML](https://www.mosaicml.com)) | 104 MWh | 0.656 | 75.05 | ## Bias, Risks, and Limitations Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content. Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology. Otherwise, many facts from OLMo or any LLM will often not be true, so they should be checked. ## Citation **BibTeX:** ``` @article{Groeneveld2023OLMo, title={OLMo: Accelerating the Science of Language Models}, author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh}, journal={Preprint}, year={2024} } ``` **APA:** Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint. 
## Model Card Contact For errors in this model card, contact Nathan, Akshita or Shane, `{nathanl, akshitab, shanea} at allenai dot org`.
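Expanding on the quantisation note in the Inference section above, here is a minimal 8-bit loading sketch. It assumes `bitsandbytes` is installed and a CUDA device is available, and simply combines the arguments already suggested in that section; it is a sketch, not an officially tested recipe:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 8-bit quantised load of OLMo 1B (requires the bitsandbytes package and a CUDA GPU)
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-1B-hf",
    torch_dtype=torch.float16,
    load_in_8bit=True,
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")

inputs = tokenizer("Language modeling is ", return_tensors="pt", return_token_type_ids=False)
# As noted in the Inference section, the quantised model is sensitive to input
# dtype/device, so pass the input ids to CUDA explicitly.
response = olmo.generate(inputs.input_ids.to("cuda"), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```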
Yntec/GimmeDatDing
Yntec
"2023-12-09T19:01:46Z"
2,908
5
diffusers
[ "diffusers", "safetensors", "Anime", "Cartoons", "Cute", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-12-08T07:31:09Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - Cartoons - Cute - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # GimmeDatDing Samples and prompt: ![Sample 768](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/O-bCEgNxkE0jkza3-pnWF.png) ![Sample 512](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/woMBJfeqlIfy2PKnPhRlu.png) highquality, masterpiece, 1girl, Chi-Chi, :D, close up, smile, arms up, pink helmet, black hair, black eyes, blush, white teeth, bikini armor, aqua cape, pink gloves, pink boots, cleavage. cave, rock, mountain. blue collar
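The repository is in diffusers format (`StableDiffusionPipeline`), so a minimal text-to-image sketch along these lines should work; the prompt, step count, and guidance scale below are illustrative choices, not settings recommended by the author:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the model from the Hub in diffusers format
pipe = StableDiffusionPipeline.from_pretrained("Yntec/GimmeDatDing", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "highquality, masterpiece, 1girl, Chi-Chi, :D, close up, smile, pink helmet, black hair"
image = pipe(prompt, num_inference_steps=25, guidance_scale=7.0).images[0]
image.save("gimmedatding_sample.png")
```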
TheBloke/Llama-2-13B-GGUF
TheBloke
"2023-09-27T12:47:13Z"
2,907
60
transformers
[ "transformers", "gguf", "llama", "facebook", "meta", "pytorch", "llama-2", "text-generation", "en", "arxiv:2307.09288", "base_model:meta-llama/Llama-2-13b-hf", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
"2023-09-04T17:32:17Z"
--- language: - en license: llama2 tags: - facebook - meta - pytorch - llama - llama-2 model_name: Llama 2 13B base_model: meta-llama/Llama-2-13b-hf inference: false model_creator: Meta model_type: llama pipeline_tag: text-generation prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama 2 13B - GGUF - Model creator: [Meta](https://huggingface.co/meta-llama) - Original model: [Llama 2 13B](https://huggingface.co/meta-llama/Llama-2-13b-hf) <!-- description start --> ## Description This repo contains GGUF format model files for [Meta's Llama 2 13B](https://huggingface.co/meta-llama/Llama-2-13b-hf). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-13B-GGUF) * [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-13b-hf) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: None ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [llama-2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-13B-GGUF/blob/main/llama-2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [llama-2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-GGUF/blob/main/llama-2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [llama-2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-GGUF/blob/main/llama-2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [llama-2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-13B-GGUF/blob/main/llama-2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [llama-2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-GGUF/blob/main/llama-2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [llama-2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-GGUF/blob/main/llama-2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [llama-2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-GGUF/blob/main/llama-2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [llama-2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-GGUF/blob/main/llama-2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [llama-2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-GGUF/blob/main/llama-2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [llama-2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-GGUF/blob/main/llama-2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [llama-2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Llama-2-13B-GGUF/blob/main/llama-2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [llama-2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-GGUF/blob/main/llama-2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Llama-2-13B-GGUF and below it, a specific filename to download, such as: llama-2-13b.q4_K_M.gguf. Then click Download. 
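Alternatively, you can fetch a single file directly from Python with the `huggingface_hub` library. A minimal sketch is shown below; the filename must exactly match one of the entries in the Provided Files table above, including capitalisation:

```python
from huggingface_hub import hf_hub_download

# Download a single quant file from the repo into the current directory
model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-13B-GGUF",
    filename="llama-2-13b.Q4_K_M.gguf",
    local_dir=".",
)
print(model_path)
```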
### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install "huggingface-hub>=0.17.1"
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/Llama-2-13B-GGUF llama-2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/Llama-2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-13B-GGUF llama-2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m llama-2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model from Python using ctransformers

#### First install the package

```bash
# Base ctransformers with no GPU acceleration
pip install "ctransformers>=0.2.24"
# Or with CUDA GPU acceleration
pip install "ctransformers[cuda]>=0.2.24"
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
```

#### Simple example code to load one of these GGUF models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
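# model_file must exactly match one of the GGUF filenames in the Provided Files table above
# (for example "llama-2-13b.Q4_K_M.gguf", note the capitalisation); gpu_layers=50 is enough
# to offload every layer of a 13B model when sufficient VRAM is available.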
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-13B-GGUF", model_file="llama-2-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค€๊ต ๊น€, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้˜ฟๆ˜Ž, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Meta's Llama 2 13B # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. 
This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.

## Model Details

*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*

Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.

**Model Developers** Meta

**Variations** Llama 2 comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.

||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>|

*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Dates** Llama 2 was trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288)

## Intended Use

**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

To get the expected features and performance for the chat versions, specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).

**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Metaโ€™s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). 
For ToxiGen, we present the percentage of toxic generations (the smaller the better).

|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|

**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.

## Ethical Considerations and Limitations

Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/)

## Reporting Issues

Please report any software "bug", or other problems with the models, through one of the following means:

- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)

## Llama Model Index

|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|

<!-- original-model-card end -->
TioPanda/pandev-complete
TioPanda
"2024-06-26T07:02:12Z"
2,906
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-26T06:33:47Z"
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

# Uploaded model

- **Developed by:** TioPanda
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
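The repository ships both safetensors and GGUF weights. Assuming the safetensors weights are a full merged model rather than a standalone adapter (an assumption this card does not confirm), a minimal transformers loading sketch would look like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes the repo holds merged full-model weights; adjust if it only contains LoRA adapters
model_id = "TioPanda/pandev-complete"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```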
AIARTCHAN/MIX-Pro-V4
AIARTCHAN
"2023-04-06T02:07:28Z"
2,905
39
diffusers
[ "diffusers", "stable-diffusion", "aiartchan", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-04-06T01:49:01Z"
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- aiartchan
---

# MIX-Pro-V4

[Original post (Korean)](https://arca.live/b/aiart/73277342)

[huggingface](https://huggingface.co/GIMG/AIChan_Model/tree/main/Blend/MIX-Pro/V4)

[civitai](https://civitai.com/models/7241)

## Download

- [original 4.27GB](https://huggingface.co/GIMG/AIChan_Model/resolve/main/Blend/MIX-Pro/V4/MIX-Pro-V4.safetensors)
- [fp16](https://huggingface.co/AIARTCHAN/MIX-Pro-V4/resolve/main/MIX-Pro-V4-fp16.safetensors)

## License

creativeml-openrail-m, plus:

- No selling images
- No generation services
- No selling models

## Parameters

https://huggingface.co/GIMG/AIChan_Model/tree/main/Blend/MIX-Pro/V4/Parameters

## Source

https://huggingface.co/andite/mikapikazo-diffusion/blob/main/mikapikazo-40000.ckpt
https://huggingface.co/andite/cutesexyrobutts-diffusion/blob/main/csrb-diffusion.ckpt
https://huggingface.co/andite/piromizu-diffusion/blob/main/piromizu-20000.ckpt
https://huggingface.co/andite/yohan-diffusion/blob/main/yohan-diffusion.safetensors
https://huggingface.co/nuigurumi/basil_mix/blob/main/Basil%20mix.safetensors
https://civitai.com/models/22607/loconlora-airconditioner-style
https://civitai.com/models/14393/thick-coat-cg-style
https://huggingface.co/closertodeath/mouseymix/blob/main/mouseymix.safetensors
https://huggingface.co/andite/pastel-mix/blob/main/pastelmix-fp16.safetensors

![img](https://huggingface.co/GIMG/AIChan_Model/resolve/main/Blend/MIX-Pro/V4/sample/00077-59778861.png)
![img](https://huggingface.co/GIMG/AIChan_Model/resolve/main/Blend/MIX-Pro/V4/sample/00043-3045626320.png)
![img](https://huggingface.co/GIMG/AIChan_Model/resolve/main/Blend/MIX-Pro/V4/sample/00029-3470116285.png)
![img](https://huggingface.co/GIMG/AIChan_Model/resolve/main/Blend/MIX-Pro/V4/sample/00104-1755501804.png)
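The repository also carries a diffusers pipeline layout (see the tags), so a minimal sketch along these lines should work; the DPM++ scheduler swap, prompt, and sampling settings are illustrative choices, not recommendations from the author:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Load the repo's diffusers weights and switch to a DPM++ multistep scheduler
pipe = StableDiffusionPipeline.from_pretrained("AIARTCHAN/MIX-Pro-V4", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe(
    "1girl, masterpiece, best quality, detailed face",
    negative_prompt="lowres, bad anatomy, bad hands",
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("mix_pro_v4_sample.png")
```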
TheBloke/Mistral-7B-Claude-Chat-GGUF
TheBloke
"2023-10-28T23:02:14Z"
2,905
18
transformers
[ "transformers", "gguf", "mistral", "dataset:Norquinal/claude_multiround_chat_1k", "base_model:Norquinal/Mistral-7B-claude-chat", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
"2023-10-28T22:57:56Z"
--- base_model: Norquinal/Mistral-7B-claude-chat datasets: - Norquinal/claude_multiround_chat_1k inference: false license: cc-by-nc-4.0 model_creator: Andy B. Norquinal model_name: Mistral 7B Claude Chat model_type: mistral prompt_template: 'A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user''s questions. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mistral 7B Claude Chat - GGUF - Model creator: [Andy B. Norquinal](https://huggingface.co/Norquinal) - Original model: [Mistral 7B Claude Chat](https://huggingface.co/Norquinal/Mistral-7B-claude-chat) <!-- description start --> ## Description This repo contains GGUF format model files for [Andy B. Norquinal's Mistral 7B Claude Chat](https://huggingface.co/Norquinal/Mistral-7B-claude-chat). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GGUF) * [Andy B. Norquinal's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Norquinal/Mistral-7B-claude-chat) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Vicuna ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mistral-7b-claude-chat.Q2_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GGUF/blob/main/mistral-7b-claude-chat.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [mistral-7b-claude-chat.Q3_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GGUF/blob/main/mistral-7b-claude-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [mistral-7b-claude-chat.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GGUF/blob/main/mistral-7b-claude-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [mistral-7b-claude-chat.Q3_K_L.gguf](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GGUF/blob/main/mistral-7b-claude-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [mistral-7b-claude-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GGUF/blob/main/mistral-7b-claude-chat.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mistral-7b-claude-chat.Q4_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GGUF/blob/main/mistral-7b-claude-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [mistral-7b-claude-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GGUF/blob/main/mistral-7b-claude-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [mistral-7b-claude-chat.Q5_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GGUF/blob/main/mistral-7b-claude-chat.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mistral-7b-claude-chat.Q5_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GGUF/blob/main/mistral-7b-claude-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [mistral-7b-claude-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GGUF/blob/main/mistral-7b-claude-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [mistral-7b-claude-chat.Q6_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GGUF/blob/main/mistral-7b-claude-chat.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [mistral-7b-claude-chat.Q8_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Claude-Chat-GGUF/blob/main/mistral-7b-claude-chat.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Mistral-7B-Claude-Chat-GGUF and below it, a specific filename to download, such as: mistral-7b-claude-chat.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Mistral-7B-Claude-Chat-GGUF mistral-7b-claude-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Mistral-7B-Claude-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistral-7B-Claude-Chat-GGUF mistral-7b-claude-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m mistral-7b-claude-chat.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
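Only ctransformers is covered in the next subsection, so here is also a minimal llama-cpp-python sketch. It assumes you have already downloaded one of the quant files from the table above; the example question is illustrative:

```python
from llama_cpp import Llama

# Load a downloaded quant; set n_gpu_layers=0 if you have no GPU acceleration
llm = Llama(
    model_path="./mistral-7b-claude-chat.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=35,
)

# Vicuna prompt template, as described in the "Prompt template" section above
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Give me three tips for writing clear documentation. ASSISTANT:"
)
output = llm(prompt, max_tokens=256, stop=["USER:"])
print(output["choices"][0]["text"])
```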
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Mistral-7B-Claude-Chat-GGUF", model_file="mistral-7b-claude-chat.Q4_K_M.gguf", model_type="mistral", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, ้˜ฟๆ˜Ž, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. 
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjรคreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Andy B. Norquinal's Mistral 7B Claude Chat This is the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model fine-tuned using QLoRA (4-bit precision) on my [claude_multiround_chat_1k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_1k) dataset, which is a randomized subset of ~1000 samples from my [claude_multiround_chat_30k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_30k) dataset. ## Prompt Format The model was finetuned with the Vicuna 1.1 prompt format: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {instruction} ASSISTANT: ``` ## Use in Text Generation Web UI Install the bleeding-edge version of `transformers` from source: ``` pip install git+https://github.com/huggingface/transformers ``` Or, alternatively, change `model_type` in `config.json` from `mistral` to `llama`. ## Example Usage >USER: First, give a brief overview of Chinese cuisine, including its regional diversity. next, describe the importance of the five flavors in Chinese cooking. Then, mention three popular Chinese dishes and their main ingredients. After that, provide a popular Chinese cooking technique and an example of a dish prepared using that technique. Finally, discuss the role of tea in traditional Chinese culture. >ASSISTANT: Chinese cuisine is incredibly diverse due to its large geographic area with different climates and resources available. Regional specialties include Sichuan spicy hot pot, Cantonese dim sum and seafood dishes, Beijing roast duck, and Hunan stir-fries. > >The five basic tastes that guide Chinese cookery are sweetness, sourness, bitterness, saltiness, and pungency (spiciness). These contrasting flavors are used together to create balance in each dish. > >Some classic Chinese dishes include dumplings filled with meat or vegetables, spring rolls stuffed with vegetables and meat wrapped in thin pastry, and noodles served in soups or stir fried with sauces and vegetables. > >A common Chinese cooking technique is braising, where food is slowly cooked in a small amount of liquid until tender and flavorful. An example would be beef brisket braised in soy sauce, ginger and star anise for hours until falling apart. > >Tea plays a central role in Chinese culture as it represents hospitality, relaxation and socializing. Green teas are most commonly consumed but black, oolong and white varieties also exist. Tea drinking rituals involve preparing and serving tea in specific ways depending on context and occasion. 
> >In summary, Chinese cuisine offers a rich culinary tradition influenced by regional differences, an emphasis on balancing the five basic flavors, and a focus on simple yet flavorful preparations using fresh seasonal produce and high quality meats. Tea acts as both a refreshment and symbol of cultural refinement within Chinese society. I hope this overview provides an interesting starting point for further exploration into the fascinating world of Chinese gastronomy! <!-- original-model-card end -->
mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF
mradermacher
"2024-06-06T21:47:57Z"
2,905
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "mistral", "text-generation", "roleplay", "en", "base_model:FallenMerick/Chewy-Lemon-Cookie-11B", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-06T15:50:32Z"
--- base_model: FallenMerick/Chewy-Lemon-Cookie-11B language: - en library_name: transformers license: cc-by-4.0 quantized_by: mradermacher tags: - mergekit - merge - mistral - text-generation - roleplay --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/FallenMerick/Chewy-Lemon-Cookie-11B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-Q4_0.gguf) | i1-Q4_0 | 6.2 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF/resolve/main/Chewy-Lemon-Cookie-11B.i1-Q6_K.gguf) | i1-Q6_K | 8.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
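As a minimal sketch of how one of the files above could be run locally, here is an example using `llama-cpp-python` (a choice of runtime not prescribed by this card); the quant file name comes from the table above, while the context size, GPU offload, and prompt are assumptions.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumes the i1-Q4_K_M file from the table above has already been downloaded.
llm = Llama(
    model_path="Chewy-Lemon-Cookie-11B.i1-Q4_K_M.gguf",
    n_ctx=4096,        # assumed context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm("Write a short in-character greeting.", max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```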
cyberagent/calm2-7b
cyberagent
"2023-11-02T05:46:18Z"
2,904
27
transformers
[ "transformers", "pytorch", "llama", "text-generation", "japanese", "causal-lm", "ja", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-01T07:24:59Z"
--- license: apache-2.0 language: - ja - en tags: - japanese - causal-lm inference: false --- # CyberAgentLM2-7B (CALM2-7B) ## Model Description CyberAgentLM2 is a decoder-only language model pre-trained on the 1.3T tokens of publicly available Japanese and English datasets. Variant: [CyberAgentLM2-Chat](https://huggingface.co/cyberagent/calm2-7b-chat) ## Requirements - transformers >= 4.34.1 - accelerate ## Usage ```python import transformers from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer assert transformers.__version__ >= "4.34.1" model = AutoModelForCausalLM.from_pretrained("cyberagent/calm2-7b", device_map="auto", torch_dtype="auto") tokenizer = AutoTokenizer.from_pretrained("cyberagent/calm2-7b") streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "AIใซใ‚ˆใฃใฆ็ง้”ใฎๆšฎใ‚‰ใ—ใฏใ€" token_ids = tokenizer.encode(prompt, return_tensors="pt") output_ids = model.generate( input_ids=token_ids.to(model.device), max_new_tokens=100, do_sample=True, temperature=0.9, streamer=streamer, ) ``` ## Model Details * **Model size**: 7B * **Trained tokens**: 1.3T tokens * **Context length**: 4096 * **Model type**: Transformer-based Language Model * **Language(s)**: Japanese, English * **Developed by**: [CyberAgent, Inc.](https://www.cyberagent.co.jp/) * **License**: Apache-2.0 ## Author [Ryosuke Ishigami](https://huggingface.co/rishigami) ## Citations ```tex @article{touvron2023llama, title={LLaMA: Open and Efficient Foundation Language Models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } ```
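A brief follow-up sketch to the usage snippet above (not part of the original card): since `model.generate` also returns the full token ids, the generated continuation can be decoded directly instead of, or in addition to, streaming it.

```python
# Drop the prompt tokens and decode only the generated continuation.
generated = output_ids[0][token_ids.shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```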
sapienzanlp/Minerva-3B-base-v1.0
sapienzanlp
"2024-06-11T16:03:01Z"
2,904
48
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "mistral", "text-generation", "pretrained", "it", "en", "dataset:uonlp/CulturaX", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-19T13:30:21Z"
--- license: apache-2.0 pipeline_tag: text-generation language: - it - en tags: - pretrained datasets: - uonlp/CulturaX inference: parameters: temperature: 0.5 do_sample: True widget: - text: 'La capitale dell''Italia รจ ' example_title: Example 1 - text: 'Nel mezzo del cammin di nostra vita ' example_title: Example 2 - text: 'Una cena senza vino รจ come ' example_title: Example 3 --- <div style="text-align: center; display: flex; flex-direction: column; align-items: center;"> <img src="https://cdn-uploads.huggingface.co/production/uploads/5f0b462819cb630495b814d7/DVA4MnFUs3UHBnTrX9jG6.png" style="max-width: 550px; height: auto;"> </div> # Model Card for Minerva-3B-base-v1.0 Minerva is the first family of **LLMs pretrained from scratch on Italian** developed by [Sapienza NLP](https://nlp.uniroma1.it) in collaboration with [Future Artificial Intelligence Research (FAIR)](https://fondazione-fair.it/) and [CINECA](https://www.cineca.it/). Notably, the Minerva models are truly-open (data and model) Italian-English LLMs, with approximately half of the pretraining data including Italian text. * [Minerva LLMs - website](https://nlp.uniroma1.it/minerva/) ## Description This is the model card for **Minerva-3B-base-v1.0**, a 3 billion parameter model trained on 660 billion tokens (330 billion in Italian, 330 billion in English). This model is part of the Minerva LLM family: * [Minerva-350M-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-350M-base-v1.0) * [Minerva-1B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-1B-base-v1.0) * [Minerva-3B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0) ## ๐Ÿšจโš ๏ธ๐Ÿšจ Bias, Risks, and Limitations ๐Ÿšจโš ๏ธ๐Ÿšจ *This section identifies foreseeable harms and misunderstandings.* This is a foundation model, not subject to alignment. Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Racist and sexist content - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information or historical facts as if it were factual - Generate irrelevant or repetitive outputs We are aware of the biases and potential problematic/toxic content that current pretrained large language models exhibit: more specifically, as probabilistic models of (Italian and English) languages, they reflect and amplify the biases of their training data. For more information about this issue, please refer to our survey: * [Biases in Large Language Models: Origins, Inventory, and Discussion](https://dl.acm.org/doi/full/10.1145/3597307) ## How to use Minerva with Hugging Face transformers ```python import transformers import torch model_id = "sapienzanlp/Minerva-3B-base-v1.0" # Initialize the pipeline. pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) # Input text for the model. input_text = "La capitale dell'Italia รจ" # Compute the outputs. output = pipeline( input_text, max_new_tokens=128, ) # Output: # [{'generated_text': "La capitale dell'Italia รจ la cittร  di Roma, che si trova a [...]"}] ``` ## Model Architecture Minerva-3B-base-v1.0 is a Transformer model based on the Mistral architecture, where the number of layers, number of heads, and the hidden states dimension are modified to reach 3B parameters. 
Please, take a look at the configuration file for a detailed breakdown of the hyperparameters we chose for this model. The Minerva LLM family is composed of: | Model Name | Tokens | Layers | Hidden Size | Attention Heads | KV Heads | Sliding Window | Max Context Length | | --- | --- | --- | --- | --- | --- | --- | --- | | Minerva-350M-base-v1.0 | 70B (35B it + 35B en) | 16 | 1152 | 16 | 4 | 2048 | 16384 | | Minerva-1B-base-v1.0 | 200B (100B it + 100B en) | 16 | 2048 | 16 | 4 | 2048 | 16384 | | **Minerva-3B-base-v1.0** | 660B (330B it + 330B en) | 32 | 2560 | 32 | 8 | 2048 | 16384 | ## Model Training Minerva-3B-base-v1.0 was trained using [llm-foundry 0.6.0](https://github.com/mosaicml/llm-foundry) from [MosaicML](https://mosaicml.com/). The hyperparameters used are the following: | Model Name | Optimizer | lr | betas | eps | weight decay | Scheduler | Warmup Steps | Batch Size (Tokens) | Total Steps | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Minerva-350M-base-v1.0 | Decoupled AdamW | 2e-4 | (0.9, 0.95) | 1e-8 | 0.0 | Cosine | 2% | 4M | 16,690 | | Minerva-1B-base-v1.0 | Decoupled AdamW | 2e-4 | (0.9, 0.95) | 1e-8 | 0.0 | Cosine | 2% | 4M | 47,684 | | **Minerva-3B-base-v1.0** | Decoupled AdamW | 2e-4 | (0.9, 0.95) | 1e-8 | 0.0 | Cosine | 2% | 4M | 157,357 | ## Model Evaluation We assessed our model using the [LM-Evaluation-Harness](https://github.com/EleutherAI/lm-evaluation-harness) library, which serves as a comprehensive framework for testing generative language models across a wide range of evaluation tasks. All the reported benchmark data was already present in the LM-Evaluation-Harness suite. **Italian** Data: | Task | Accuracy | | --- | --- | | [xcopa](https://huggingface.co/datasets/xcopa) (0-shot) | 0.694 | | [Hellaswag](https://huggingface.co/datasets/alexandrainst/m_hellaswag) (5-shot) | 0.5293 | | [Belebele](https://huggingface.co/datasets/facebook/belebele) (5-shot) | 0.2333 | | [TruthfulQA MC 1](https://huggingface.co/datasets/alexandrainst/m_truthfulqa) (0-shot) | 0.2363 | | [TruthfulQA MC 2](https://huggingface.co/datasets/alexandrainst/m_truthfulqa) (0-shot) | 0.3731 | | [M MMLU](https://huggingface.co/datasets/alexandrainst/m_mmlu) (5-shot) | 0.2612 | | [arc challenge](https://huggingface.co/datasets/alexandrainst/m_arc) (5-shot) | 0.3268 | **English** Data: | Task | Accuracy | | --- | --- | | [Hellaswag](https://huggingface.co/datasets/Rowan/hellaswag) (5-shot) | 0.6168 | | [piqa](https://huggingface.co/datasets/piqa) (5-shot) | 0.7535 | | [sciq](https://huggingface.co/datasets/sciq) (5-shot) | 0.925 | | [Belebele](https://huggingface.co/datasets/facebook/belebele) (5-shot) | 0.2278 | | [TruthfulQA MC 1](https://huggingface.co/datasets/truthful_qa) (0-shot) | 0.2142 | | [TruthfulQA MC 2](https://huggingface.co/datasets/truthful_qa) (0-shot) | 0.3643 | | [M MMLU](https://huggingface.co/datasets/alexandrainst/m_mmlu) (5-shot) | 0.263 | | [arc challenge](allenai/ai2_arc) (5-shot) | 0.3319 | | [arc easy](allenai/ai2_arc) (5-shot) | 0.6540 | ## Training Data Minerva-3B-base-v1.0 was trained on 330B Italian tokens and 330B English tokens sampled from CulturaX. 
We have extracted some statistics on Italian (115B tokens) and English (210B tokens) documents from CulturaX on the selected sources: *Proportion of number of tokens per domain (Italian)* <img src="https://github.com/Andrew-Wyn/images/blob/master/minerva/top_25_url_tokens_proportion_culturax_it.png?raw=true" alt="italian-tok-counts" border="0" width="1800px"> *Proportion of number of tokens per domain (English)* <img src="https://github.com/Andrew-Wyn/images/blob/master/minerva/top_25_url_tokens_proportion_culturax_en.png?raw=true" alt="english-tok-counts" border="0" width="1800px"> ## Tokenizer Fertility The tokenizer fertility measures the average amount of tokens produced per tokenized word. A tokenizer displaying high fertility values in a particular language typically indicates that it segments words in that language extensively. The tokenizer fertility is strictly correlated with the inference speed of the model with respect to a specific language, as higher values mean longer sequences of tokens to generate and thus lower inference speed. **Fertility computed over a sample of Cultura X (CX) data and Wikipedia (Wp):** | Model | Voc. Size | Fertility IT (CX) | Fertility EN (CX) | Fertility IT (Wp) | Fertility EN (Wp) | | --- | --- | --- |--- | --- |--- | | Mistral-7B-v0.1 | 32000 | 1.87 | 1.32 | 2.05 | 1.57 | | gemma-7b | 256000 | 1.42 | 1.18 | 1.56 | 1.34 | | Minerva-3B-base-v1.0 | 32768 | 1.39 | 1.32 | 1.66 | 1.59 | ## Notice Minerva-3B-base-v1.0 is a pretrained base model and, therefore, has no moderation mechanisms. ## The Sapienza NLP Team * **Riccardo Orlando:** data preprocessing, model training * **Pere-Lluis Huguet Cabot:** data preprocessing, vocabulary, evaluation * **Luca Moroni:** data curation, data analysis, downstream tasks, evaluation * **Simone Conia:** data curation, evaluation, project supervision * **Edoardo Barba:** data preprocessing, downstream tasks, project supervision * **Roberto Navigli:** project coordinator ### Special thanks for their support * Giuseppe Fiameni, Nvidia * Sergio Orlandini, CINECA ## Acknowledgments This work was funded by the PNRR MUR project [PE0000013-FAIR](https://fondazione-fair.it). We acknowledge the [CINECA](https://www.cineca.it) award "IscB_medit" under the ISCRA initiative, for the availability of high performance computing resources and support.
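To make the fertility metric above concrete, here is a small sketch of how it can be estimated for this tokenizer. It assumes whitespace word segmentation and an arbitrary example sentence, which may differ from the exact procedure used for the reported numbers.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sapienzanlp/Minerva-3B-base-v1.0")

def fertility(texts):
    # Average number of tokens produced per (whitespace-separated) word.
    words = sum(len(t.split()) for t in texts)
    tokens = sum(len(tokenizer.tokenize(t)) for t in texts)
    return tokens / words

print(fertility(["Una cena senza vino è come una giornata senza sole."]))
```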
scrapegraphai/scrapegraphcompanion_finetuning_GGUF
scrapegraphai
"2024-06-27T14:57:29Z"
2,903
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-26T15:49:59Z"
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** vinci00 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
FinancialSupport/saiga-7b
FinancialSupport
"2024-03-25T21:31:58Z"
2,902
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "it", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-28T16:31:58Z"
--- language: - it license: apache-2.0 model-index: - name: saiga-7b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 63.14 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.14 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 61.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.99 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.01 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 45.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b name: Open LLM Leaderboard --- The saiga is an odd crossbreed of antelope that lives in the Siberian steppes. The name comes from the fact that it is a relative of fauno/camoscio and a distant cousin of cerbero (other Italian open-source models). It is a project carried out on weekends with little money and time available. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/648cca46d38113f34bf7cb72/nqYw-P2uPLsNI8FMnLHtN.png) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FinancialSupport__saiga-7b) | Metric |Value| |---------------------------------|----:| |Avg. |64.51| |AI2 Reasoning Challenge (25-Shot)|63.14| |HellaSwag (10-Shot) |83.14| |MMLU (5-Shot) |61.66| |TruthfulQA (0-shot) |54.99| |Winogrande (5-shot) |79.01| |GSM8k (5-shot) |45.11|
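As a quick sanity check (not part of the card), the reported average is simply the arithmetic mean of the six benchmark scores:

```python
scores = [63.14, 83.14, 61.66, 54.99, 79.01, 45.11]
print(round(sum(scores) / len(scores), 2))  # 64.51
```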
mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF
mradermacher
"2024-06-23T12:05:53Z"
2,901
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:grimjim/Llama-3-Oasis-v1-OAS-8B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-05T05:35:44Z"
--- base_model: grimjim/Llama-3-Oasis-v1-OAS-8B language: - en library_name: transformers license: cc-by-nc-4.0 license_link: LICENSE quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/grimjim/Llama-3-Oasis-v1-OAS-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF/resolve/main/Llama-3-Oasis-v1-OAS-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
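If you prefer to fetch a single quant programmatically rather than through the links above, a minimal `huggingface_hub` sketch would be the following; the Q4_K_M file name is taken from the table, while the workflow itself is an assumption rather than an instruction from the card.

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF",
    filename="Llama-3-Oasis-v1-OAS-8B.Q4_K_M.gguf",
)
print(path)  # local path to the GGUF file, ready for a llama.cpp-compatible runtime
```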
Helsinki-NLP/opus-mt-ine-en
Helsinki-NLP
"2023-08-16T11:58:27Z"
2,900
2
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "es", "os", "ro", "fy", "cy", "sc", "is", "yi", "lb", "an", "sq", "fr", "ht", "rm", "ps", "af", "uk", "sl", "lt", "bg", "be", "gd", "si", "en", "br", "mk", "or", "mr", "ru", "fo", "co", "oc", "pl", "gl", "nb", "bn", "id", "hy", "da", "gv", "nl", "pt", "hi", "as", "kw", "ga", "sv", "gu", "wa", "lv", "el", "it", "hr", "ur", "nn", "de", "cs", "ine", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- language: - ca - es - os - ro - fy - cy - sc - is - yi - lb - an - sq - fr - ht - rm - ps - af - uk - sl - lt - bg - be - gd - si - en - br - mk - or - mr - ru - fo - co - oc - pl - gl - nb - bn - id - hy - da - gv - nl - pt - hi - as - kw - ga - sv - gu - wa - lv - el - it - hr - ur - nn - de - cs - ine tags: - translation license: apache-2.0 --- ### ine-eng * source group: Indo-European languages * target group: English * OPUS readme: [ine-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ine-eng/README.md) * model: transformer * source language(s): afr aln ang_Latn arg asm ast awa bel bel_Latn ben bho bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus san_Deva scn sco sgs sin slv snd_Arab spa sqi srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-hineng.hin.eng | 11.2 | 0.375 | | newsdev2016-enro-roneng.ron.eng | 35.5 | 0.614 | | newsdev2017-enlv-laveng.lav.eng | 25.1 | 0.542 | | newsdev2019-engu-gujeng.guj.eng | 16.0 | 0.420 | | newsdev2019-enlt-liteng.lit.eng | 24.0 | 0.522 | | newsdiscussdev2015-enfr-fraeng.fra.eng | 30.1 | 0.550 | | newsdiscusstest2015-enfr-fraeng.fra.eng | 33.4 | 0.572 | | newssyscomb2009-ceseng.ces.eng | 24.0 | 0.520 | | newssyscomb2009-deueng.deu.eng | 25.7 | 0.526 | | newssyscomb2009-fraeng.fra.eng | 27.9 | 0.550 | | newssyscomb2009-itaeng.ita.eng | 31.4 | 0.574 | | newssyscomb2009-spaeng.spa.eng | 28.3 | 0.555 | | news-test2008-deueng.deu.eng | 24.0 | 0.515 | | news-test2008-fraeng.fra.eng | 24.5 | 0.524 | | news-test2008-spaeng.spa.eng | 25.5 | 0.533 | | newstest2009-ceseng.ces.eng | 23.3 | 0.516 | | newstest2009-deueng.deu.eng | 23.2 | 0.512 | | newstest2009-fraeng.fra.eng | 27.3 | 0.545 | | newstest2009-itaeng.ita.eng | 30.3 | 0.567 | | newstest2009-spaeng.spa.eng | 27.9 | 0.549 | | newstest2010-ceseng.ces.eng | 23.8 | 0.523 | | newstest2010-deueng.deu.eng | 26.2 | 0.545 | | newstest2010-fraeng.fra.eng | 28.6 | 0.562 | | newstest2010-spaeng.spa.eng | 31.4 | 0.581 | | newstest2011-ceseng.ces.eng | 24.2 | 0.521 | | newstest2011-deueng.deu.eng | 23.9 | 0.522 | | newstest2011-fraeng.fra.eng | 29.5 | 0.570 | | newstest2011-spaeng.spa.eng | 30.3 | 0.570 | | newstest2012-ceseng.ces.eng | 23.5 | 0.516 | | newstest2012-deueng.deu.eng | 24.9 | 0.529 | | newstest2012-fraeng.fra.eng | 30.0 | 0.568 | | newstest2012-ruseng.rus.eng | 29.9 | 0.565 | | newstest2012-spaeng.spa.eng | 33.3 | 0.593 | | newstest2013-ceseng.ces.eng | 25.6 | 0.531 | | newstest2013-deueng.deu.eng | 27.7 | 0.545 | 
| newstest2013-fraeng.fra.eng | 30.0 | 0.561 | | newstest2013-ruseng.rus.eng | 24.4 | 0.514 | | newstest2013-spaeng.spa.eng | 30.8 | 0.577 | | newstest2014-csen-ceseng.ces.eng | 27.7 | 0.558 | | newstest2014-deen-deueng.deu.eng | 27.7 | 0.545 | | newstest2014-fren-fraeng.fra.eng | 32.2 | 0.592 | | newstest2014-hien-hineng.hin.eng | 16.7 | 0.450 | | newstest2014-ruen-ruseng.rus.eng | 27.2 | 0.552 | | newstest2015-encs-ceseng.ces.eng | 25.4 | 0.518 | | newstest2015-ende-deueng.deu.eng | 28.8 | 0.552 | | newstest2015-enru-ruseng.rus.eng | 25.6 | 0.527 | | newstest2016-encs-ceseng.ces.eng | 27.0 | 0.540 | | newstest2016-ende-deueng.deu.eng | 33.5 | 0.592 | | newstest2016-enro-roneng.ron.eng | 32.8 | 0.591 | | newstest2016-enru-ruseng.rus.eng | 24.8 | 0.523 | | newstest2017-encs-ceseng.ces.eng | 23.7 | 0.510 | | newstest2017-ende-deueng.deu.eng | 29.3 | 0.556 | | newstest2017-enlv-laveng.lav.eng | 18.9 | 0.486 | | newstest2017-enru-ruseng.rus.eng | 28.0 | 0.546 | | newstest2018-encs-ceseng.ces.eng | 24.9 | 0.521 | | newstest2018-ende-deueng.deu.eng | 36.0 | 0.604 | | newstest2018-enru-ruseng.rus.eng | 23.8 | 0.517 | | newstest2019-deen-deueng.deu.eng | 31.5 | 0.570 | | newstest2019-guen-gujeng.guj.eng | 12.1 | 0.377 | | newstest2019-lten-liteng.lit.eng | 26.6 | 0.555 | | newstest2019-ruen-ruseng.rus.eng | 27.5 | 0.541 | | Tatoeba-test.afr-eng.afr.eng | 59.0 | 0.724 | | Tatoeba-test.ang-eng.ang.eng | 9.9 | 0.254 | | Tatoeba-test.arg-eng.arg.eng | 41.6 | 0.487 | | Tatoeba-test.asm-eng.asm.eng | 22.8 | 0.392 | | Tatoeba-test.ast-eng.ast.eng | 36.1 | 0.521 | | Tatoeba-test.awa-eng.awa.eng | 11.6 | 0.280 | | Tatoeba-test.bel-eng.bel.eng | 42.2 | 0.597 | | Tatoeba-test.ben-eng.ben.eng | 45.8 | 0.598 | | Tatoeba-test.bho-eng.bho.eng | 34.4 | 0.518 | | Tatoeba-test.bre-eng.bre.eng | 24.4 | 0.405 | | Tatoeba-test.bul-eng.bul.eng | 50.8 | 0.660 | | Tatoeba-test.cat-eng.cat.eng | 51.2 | 0.677 | | Tatoeba-test.ces-eng.ces.eng | 47.6 | 0.641 | | Tatoeba-test.cor-eng.cor.eng | 5.4 | 0.214 | | Tatoeba-test.cos-eng.cos.eng | 61.0 | 0.675 | | Tatoeba-test.csb-eng.csb.eng | 22.5 | 0.394 | | Tatoeba-test.cym-eng.cym.eng | 34.7 | 0.522 | | Tatoeba-test.dan-eng.dan.eng | 56.2 | 0.708 | | Tatoeba-test.deu-eng.deu.eng | 44.9 | 0.625 | | Tatoeba-test.dsb-eng.dsb.eng | 21.0 | 0.383 | | Tatoeba-test.egl-eng.egl.eng | 6.9 | 0.221 | | Tatoeba-test.ell-eng.ell.eng | 62.1 | 0.741 | | Tatoeba-test.enm-eng.enm.eng | 22.6 | 0.466 | | Tatoeba-test.ext-eng.ext.eng | 33.2 | 0.496 | | Tatoeba-test.fao-eng.fao.eng | 28.1 | 0.460 | | Tatoeba-test.fas-eng.fas.eng | 9.6 | 0.306 | | Tatoeba-test.fra-eng.fra.eng | 50.3 | 0.661 | | Tatoeba-test.frm-eng.frm.eng | 30.0 | 0.457 | | Tatoeba-test.frr-eng.frr.eng | 15.2 | 0.301 | | Tatoeba-test.fry-eng.fry.eng | 34.4 | 0.525 | | Tatoeba-test.gcf-eng.gcf.eng | 18.4 | 0.317 | | Tatoeba-test.gla-eng.gla.eng | 24.1 | 0.400 | | Tatoeba-test.gle-eng.gle.eng | 52.2 | 0.671 | | Tatoeba-test.glg-eng.glg.eng | 50.5 | 0.669 | | Tatoeba-test.glv-eng.glv.eng | 5.7 | 0.189 | | Tatoeba-test.gos-eng.gos.eng | 19.2 | 0.378 | | Tatoeba-test.got-eng.got.eng | 0.1 | 0.022 | | Tatoeba-test.grc-eng.grc.eng | 0.9 | 0.095 | | Tatoeba-test.gsw-eng.gsw.eng | 23.9 | 0.390 | | Tatoeba-test.guj-eng.guj.eng | 28.0 | 0.428 | | Tatoeba-test.hat-eng.hat.eng | 44.2 | 0.567 | | Tatoeba-test.hbs-eng.hbs.eng | 51.6 | 0.666 | | Tatoeba-test.hif-eng.hif.eng | 22.3 | 0.451 | | Tatoeba-test.hin-eng.hin.eng | 41.7 | 0.585 | | Tatoeba-test.hsb-eng.hsb.eng | 46.4 | 0.590 | | Tatoeba-test.hye-eng.hye.eng | 40.4 | 0.564 | | 
Tatoeba-test.isl-eng.isl.eng | 43.8 | 0.605 | | Tatoeba-test.ita-eng.ita.eng | 60.7 | 0.735 | | Tatoeba-test.jdt-eng.jdt.eng | 5.5 | 0.091 | | Tatoeba-test.kok-eng.kok.eng | 7.8 | 0.205 | | Tatoeba-test.ksh-eng.ksh.eng | 15.8 | 0.284 | | Tatoeba-test.kur-eng.kur.eng | 11.6 | 0.232 | | Tatoeba-test.lad-eng.lad.eng | 30.7 | 0.484 | | Tatoeba-test.lah-eng.lah.eng | 11.0 | 0.286 | | Tatoeba-test.lat-eng.lat.eng | 24.4 | 0.432 | | Tatoeba-test.lav-eng.lav.eng | 47.2 | 0.646 | | Tatoeba-test.lij-eng.lij.eng | 9.0 | 0.287 | | Tatoeba-test.lit-eng.lit.eng | 51.7 | 0.670 | | Tatoeba-test.lld-eng.lld.eng | 22.4 | 0.369 | | Tatoeba-test.lmo-eng.lmo.eng | 26.1 | 0.381 | | Tatoeba-test.ltz-eng.ltz.eng | 39.8 | 0.536 | | Tatoeba-test.mai-eng.mai.eng | 72.3 | 0.758 | | Tatoeba-test.mar-eng.mar.eng | 32.0 | 0.554 | | Tatoeba-test.mfe-eng.mfe.eng | 63.1 | 0.822 | | Tatoeba-test.mkd-eng.mkd.eng | 49.5 | 0.638 | | Tatoeba-test.msa-eng.msa.eng | 38.6 | 0.566 | | Tatoeba-test.multi.eng | 45.6 | 0.615 | | Tatoeba-test.mwl-eng.mwl.eng | 40.4 | 0.767 | | Tatoeba-test.nds-eng.nds.eng | 35.5 | 0.538 | | Tatoeba-test.nep-eng.nep.eng | 4.9 | 0.209 | | Tatoeba-test.nld-eng.nld.eng | 54.2 | 0.694 | | Tatoeba-test.non-eng.non.eng | 39.3 | 0.573 | | Tatoeba-test.nor-eng.nor.eng | 50.9 | 0.663 | | Tatoeba-test.oci-eng.oci.eng | 19.6 | 0.386 | | Tatoeba-test.ori-eng.ori.eng | 16.2 | 0.364 | | Tatoeba-test.orv-eng.orv.eng | 13.6 | 0.288 | | Tatoeba-test.oss-eng.oss.eng | 9.4 | 0.301 | | Tatoeba-test.pan-eng.pan.eng | 17.1 | 0.389 | | Tatoeba-test.pap-eng.pap.eng | 57.0 | 0.680 | | Tatoeba-test.pdc-eng.pdc.eng | 41.6 | 0.526 | | Tatoeba-test.pms-eng.pms.eng | 13.7 | 0.333 | | Tatoeba-test.pol-eng.pol.eng | 46.5 | 0.632 | | Tatoeba-test.por-eng.por.eng | 56.4 | 0.710 | | Tatoeba-test.prg-eng.prg.eng | 2.3 | 0.193 | | Tatoeba-test.pus-eng.pus.eng | 3.2 | 0.194 | | Tatoeba-test.roh-eng.roh.eng | 17.5 | 0.420 | | Tatoeba-test.rom-eng.rom.eng | 5.0 | 0.237 | | Tatoeba-test.ron-eng.ron.eng | 51.4 | 0.670 | | Tatoeba-test.rue-eng.rue.eng | 26.0 | 0.447 | | Tatoeba-test.rus-eng.rus.eng | 47.8 | 0.634 | | Tatoeba-test.san-eng.san.eng | 4.0 | 0.195 | | Tatoeba-test.scn-eng.scn.eng | 45.1 | 0.440 | | Tatoeba-test.sco-eng.sco.eng | 41.9 | 0.582 | | Tatoeba-test.sgs-eng.sgs.eng | 38.7 | 0.498 | | Tatoeba-test.sin-eng.sin.eng | 29.7 | 0.499 | | Tatoeba-test.slv-eng.slv.eng | 38.2 | 0.564 | | Tatoeba-test.snd-eng.snd.eng | 12.7 | 0.342 | | Tatoeba-test.spa-eng.spa.eng | 53.2 | 0.687 | | Tatoeba-test.sqi-eng.sqi.eng | 51.9 | 0.679 | | Tatoeba-test.stq-eng.stq.eng | 9.0 | 0.391 | | Tatoeba-test.swe-eng.swe.eng | 57.4 | 0.705 | | Tatoeba-test.swg-eng.swg.eng | 18.0 | 0.338 | | Tatoeba-test.tgk-eng.tgk.eng | 24.3 | 0.413 | | Tatoeba-test.tly-eng.tly.eng | 1.1 | 0.094 | | Tatoeba-test.ukr-eng.ukr.eng | 48.0 | 0.639 | | Tatoeba-test.urd-eng.urd.eng | 27.2 | 0.471 | | Tatoeba-test.vec-eng.vec.eng | 28.0 | 0.398 | | Tatoeba-test.wln-eng.wln.eng | 17.5 | 0.320 | | Tatoeba-test.yid-eng.yid.eng | 26.9 | 0.457 | | Tatoeba-test.zza-eng.zza.eng | 1.7 | 0.131 | ### System Info: - hf_name: ine-eng - source_languages: ine - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ine-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'en', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 
'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine'] - src_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.test.txt - src_alpha3: ine - tgt_alpha3: eng - short_pair: ine-en - chrF2_score: 0.615 - bleu: 45.6 - brevity_penalty: 0.997 - ref_len: 71872.0 - src_name: Indo-European languages - tgt_name: English - train_date: 2020-08-01 - src_alpha2: ine - tgt_alpha2: en - prefer_old: False - long_pair: ine-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
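As a usage sketch (the original OPUS-MT card does not include one), the model can be loaded with the standard Marian classes in `transformers`; the example sentences are arbitrary and only illustrate that any supported source language translates into English.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ine-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_texts = ["Ich habe einen Hund.", "J'ai un chien."]  # German and French inputs
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```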
keremberke/yolov8n-pcb-defect-segmentation
keremberke
"2023-02-22T13:02:17Z"
2,900
3
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-segmentation", "pytorch", "awesome-yolov8-models", "dataset:keremberke/pcb-defect-segmentation", "model-index", "region:us" ]
image-segmentation
"2023-01-28T06:32:15Z"
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-segmentation - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/pcb-defect-segmentation model-index: - name: keremberke/yolov8n-pcb-defect-segmentation results: - task: type: image-segmentation dataset: type: keremberke/pcb-defect-segmentation name: pcb-defect-segmentation split: validation metrics: - type: precision # since [email protected] is not available on hf.co/metrics value: 0.51186 # min: 0.0 - max: 1.0 name: [email protected](box) - type: precision # since [email protected] is not available on hf.co/metrics value: 0.51667 # min: 0.0 - max: 1.0 name: [email protected](mask) --- <div align="center"> <img width="640" alt="keremberke/yolov8n-pcb-defect-segmentation" src="https://huggingface.co/keremberke/yolov8n-pcb-defect-segmentation/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['Dry_joint', 'Incorrect_installation', 'PCB_damage', 'Short_circuit'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8n-pcb-defect-segmentation') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) print(results[0].masks) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
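A small follow-up to the snippet above, assuming `render` is a PIL image (as its `.show()` call suggests): save the visualisation to disk and report how many defects were found.

```python
render.save("pcb_defects.jpg")          # persist the rendered segmentation overlay
print(f"{len(results[0].boxes)} defects detected")
```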
timm/convnextv2_nano.fcmae_ft_in22k_in1k_384
timm
"2024-02-10T23:29:33Z"
2,899
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2301.00808", "license:cc-by-nc-4.0", "region:us" ]
image-classification
"2023-01-05T01:55:50Z"
--- license: cc-by-nc-4.0 library_name: timm tags: - image-classification - timm datasets: - imagenet-1k - imagenet-1k --- # Model card for convnextv2_nano.fcmae_ft_in22k_in1k_384 A ConvNeXt-V2 image classification model. Pretrained with a fully convolutional masked autoencoder framework (FCMAE) and fine-tuned on ImageNet-22k and then ImageNet-1k. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 15.6 - GMACs: 7.2 - Activations (M): 24.6 - Image size: 384 x 384 - **Papers:** - ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders: https://arxiv.org/abs/2301.00808 - **Original:** https://github.com/facebookresearch/ConvNeXt-V2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('convnextv2_nano.fcmae_ft_in22k_in1k_384', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnextv2_nano.fcmae_ft_in22k_in1k_384', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 80, 96, 96]) # torch.Size([1, 160, 48, 48]) # torch.Size([1, 320, 24, 24]) # torch.Size([1, 640, 12, 12]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnextv2_nano.fcmae_ft_in22k_in1k_384', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 640, 12, 12) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP. 
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| | [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | | [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | | [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 | | [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | | [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | | [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | | [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 | | [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | | [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | | [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | | [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | | [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 | | [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | | 
[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | | [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | | [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | | [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | | [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | | [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | | [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | | [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | | [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | | [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | | [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | | [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | | [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | | [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | | [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | | [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | | [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | | [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | | [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | | [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 
|95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | | [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | | [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | | [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | | [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | | [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | | [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | | [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | | [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | | [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | | [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | | [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | ## Citation ```bibtex @article{Woo2023ConvNeXtV2, title={ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders}, author={Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon and Saining Xie}, year={2023}, journal={arXiv preprint arXiv:2301.00808}, } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
unsloth/zephyr-sft-bnb-4bit
unsloth
"2024-03-22T15:15:34Z"
2,899
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "unsloth", "mistral-7b", "sft", "zephyr", "bnb", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2023-12-31T06:51:02Z"
--- language: - en license: apache-2.0 library_name: transformers tags: - unsloth - transformers - mistral - mistral-7b - sft - zephyr - bnb --- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth! Follows https://github.com/huggingface/alignment-handbook/issues/45#issuecomment-1845598205 From `HuggingFaceH4/mistral-7b-sft-beta` [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## โœจ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Gemma 7b** | [โ–ถ๏ธ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [โ–ถ๏ธ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **Llama-2 7b** | [โ–ถ๏ธ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less | | **TinyLlama** | [โ–ถ๏ธ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **CodeLlama 34b** A100 | [โ–ถ๏ธ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less | | **Mistral 7b** 1xT4 | [โ–ถ๏ธ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less | | **DPO - Zephyr** | [โ–ถ๏ธ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
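For local use outside the Colab notebooks listed above, a minimal loading sketch based on Unsloth's public examples could look like this; the sequence length, LoRA rank, and target modules below are illustrative assumptions, not values prescribed by this card.

```python
from unsloth import FastLanguageModel  # pip install unsloth

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/zephyr-sft-bnb-4bit",
    max_seq_length=2048,   # assumed
    load_in_4bit=True,
)

# Attach LoRA adapters for further finetuning (rank and modules are assumptions).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```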
mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF
mradermacher
"2024-06-06T21:49:50Z"
2,899
0
transformers
[ "transformers", "gguf", "en", "base_model:Sao10K/SOLAR-10.7B-NahIdWin", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-05T09:32:10Z"
--- base_model: Sao10K/SOLAR-10.7B-NahIdWin language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Sao10K/SOLAR-10.7B-NahIdWin <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-Q4_0.gguf) | i1-Q4_0 | 6.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.2 | optimal 
size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.i1-Q6_K.gguf) | i1-Q6_K | 8.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
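For readers who want a concrete starting point beyond the linked READMEs, a minimal `llama-cpp-python` sketch for running one of the quants above could look like this; the filename and sampling settings are examples, not recommendations from the quantizer.

```python
# Sketch: load a downloaded imatrix quant with llama-cpp-python and generate.
# The filename and generation settings are illustrative examples.
from llama_cpp import Llama

llm = Llama(
    model_path="SOLAR-10.7B-NahIdWin.i1-Q4_K_M.gguf",  # a file from the table above
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
)

out = llm("Write a short poem about the sea.", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```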
microsoft/xtremedistil-l6-h256-uncased
microsoft
"2021-08-05T17:49:53Z"
2,898
29
transformers
[ "transformers", "pytorch", "tf", "bert", "feature-extraction", "text-classification", "en", "arxiv:2106.04563", "license:mit", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
--- language: en thumbnail: https://huggingface.co/front/thumbnails/microsoft.png tags: - text-classification license: mit --- # XtremeDistilTransformers for Distilling Massive Neural Networks XtremeDistilTransformers is a distilled task-agnostic transformer model that leverages task transfer for learning a small universal model that can be applied to arbitrary tasks and languages as outlined in the paper [XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation](https://arxiv.org/abs/2106.04563). We leverage task transfer combined with multi-task distillation techniques from the papers [XtremeDistil: Multi-stage Distillation for Massive Multilingual Models](https://www.aclweb.org/anthology/2020.acl-main.202.pdf) and [MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers](https://proceedings.neurips.cc/paper/2020/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf) with the following [Github code](https://github.com/microsoft/xtreme-distil-transformers). This l6-h256 checkpoint with **6** layers and **256** hidden size corresponds to **13 million** parameters with **8.7x** speedup over BERT-base (see the XtremeDistil-l6-h256 row in the table below). Other available checkpoints: [xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) and [xtremedistil-l12-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l12-h384-uncased) The following table shows the results on GLUE dev set and SQuAD-v2. | Models | #Params | Speedup | MNLI | QNLI | QQP | RTE | SST | MRPC | SQUAD2 | Avg | |----------------|--------|---------|------|------|------|------|------|------|--------|-------| | BERT | 109 | 1x | 84.5 | 91.7 | 91.3 | 68.6 | 93.2 | 87.3 | 76.8 | 84.8 | | DistilBERT | 66 | 2x | 82.2 | 89.2 | 88.5 | 59.9 | 91.3 | 87.5 | 70.7 | 81.3 | | TinyBERT | 66 | 2x | 83.5 | 90.5 | 90.6 | 72.2 | 91.6 | 88.4 | 73.1 | 84.3 | | MiniLM | 66 | 2x | 84.0 | 91.0 | 91.0 | 71.5 | 92.0 | 88.4 | 76.4 | 84.9 | | MiniLM | 22 | 5.3x | 82.8 | 90.3 | 90.6 | 68.9 | 91.3 | 86.6 | 72.9 | 83.3 | | XtremeDistil-l6-h256 | 13 | 8.7x | 83.9 | 89.5 | 90.6 | 80.1 | 91.2 | 90.0 | 74.1 | 85.6 | | XtremeDistil-l6-h384 | 22 | 5.3x | 85.4 | 90.3 | 91.0 | 80.9 | 92.3 | 90.0 | 76.6 | 86.6 | | XtremeDistil-l12-h384 | 33 | 2.7x | 87.2 | 91.9 | 91.3 | 85.6 | 93.1 | 90.4 | 80.2 | 88.5 | Tested with `tensorflow 2.3.1, transformers 4.1.1, torch 1.6.0` If you use this checkpoint in your work, please cite: ``` latex @misc{mukherjee2021xtremedistiltransformers, title={XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation}, author={Subhabrata Mukherjee and Ahmed Hassan Awadallah and Jianfeng Gao}, year={2021}, eprint={2106.04563}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
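Since the card describes the checkpoint as a small universal encoder, a short, hedged `transformers` sketch for extracting sentence embeddings with it may be useful; the mean-pooling step is a common convention rather than something prescribed by the card.

```python
# Sketch: use the distilled encoder as a feature extractor with mean pooling.
# Mean pooling over non-padding tokens is a common convention, not mandated here.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/xtremedistil-l6-h256-uncased")
model = AutoModel.from_pretrained("microsoft/xtremedistil-l6-h256-uncased")
model.eval()

sentences = ["knowledge distillation shrinks large models", "the weather is nice today"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state              # [batch, seq_len, 256]
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean-pooled [batch, 256]
print(embeddings.shape)
```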
mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF
mradermacher
"2024-06-13T00:21:17Z"
2,898
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama", "not-for-all-audiences", "en", "base_model:v000000/L3-8B-Poppy-Moonfall-OG", "endpoints_compatible", "region:us" ]
null
"2024-06-12T22:07:13Z"
--- base_model: v000000/L3-8B-Poppy-Moonfall-OG language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge - llama - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/v000000/L3-8B-Poppy-Moonfall-OG <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF/resolve/main/L3-8B-Poppy-Moonfall-OG.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
bartowski/Starling-LM-7B-beta-GGUF
bartowski
"2024-03-20T22:23:14Z"
2,897
22
transformers
[ "transformers", "gguf", "reward model", "RLHF", "RLAIF", "text-generation", "en", "dataset:berkeley-nest/Nectar", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-20T22:09:49Z"
--- license: apache-2.0 datasets: - berkeley-nest/Nectar language: - en library_name: transformers tags: - reward model - RLHF - RLAIF quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp Quantizations of Starling-LM-7B-beta Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2440">b2440</a> for quantization. Original model: https://huggingface.co/Nexusflow/Starling-LM-7B-beta Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Starling-LM-7B-beta-Q8_0.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. | | [Starling-LM-7B-beta-Q6_K.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. | | [Starling-LM-7B-beta-Q5_K_M.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. | | [Starling-LM-7B-beta-Q5_K_S.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. | | [Starling-LM-7B-beta-Q5_0.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. | | [Starling-LM-7B-beta-Q4_K_M.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. | | [Starling-LM-7B-beta-Q4_K_S.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. | | [Starling-LM-7B-beta-IQ4_NL.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-IQ4_NL.gguf) | IQ4_NL | 4.15GB | Good quality, similar to Q4_K_S, new method of quanting, | | [Starling-LM-7B-beta-IQ4_XS.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-IQ4_XS.gguf) | IQ4_XS | 3.94GB | Decent quality, new method with similar performance to Q4. | | [Starling-LM-7B-beta-Q4_0.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. | | [Starling-LM-7B-beta-IQ3_M.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance. | | [Starling-LM-7B-beta-IQ3_S.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3 quants. | | [Starling-LM-7B-beta-Q3_K_L.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. | | [Starling-LM-7B-beta-Q3_K_M.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. 
| | [Starling-LM-7B-beta-Q3_K_S.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. | | [Starling-LM-7B-beta-Q2_K.gguf](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF/blob/main/Starling-LM-7B-beta-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
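Following the note above about downloading a single file rather than the whole branch, a small `huggingface_hub` sketch (not part of the original card) might look like this; the chosen quant is only an example.

```python
# Sketch: fetch exactly one quant file from the repo instead of cloning everything.
# The chosen filename (Q4_K_M) is an example; pick any file from the table above.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="bartowski/Starling-LM-7B-beta-GGUF",
    filename="Starling-LM-7B-beta-Q4_K_M.gguf",
)
print(f"Downloaded to: {gguf_path}")  # pass this path to llama.cpp or a compatible loader
```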
hallisky/sarcasm-classifier-gpt4-data
hallisky
"2024-05-17T00:11:04Z"
2,897
1
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-05-16T19:41:58Z"
--- license: apache-2.0 widget: - text: "Oh really, what a great idea! Let's just ignore all the facts and trot right ahead!" example_title: "Sarcastic Dialogue" output: - label: sarcasm_more score: 1.0 - label: sarcasm_less score: 0.0 - text: "What a great idea - let's continue!" example_title: "Sincere Dialogue" ---
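The widget configuration above implies a binary `sarcasm_more` / `sarcasm_less` classifier; a minimal, hedged `transformers` pipeline sketch for querying it could look like the following, reusing the two widget examples.

```python
# Sketch: score text for sarcasm with the classifier via the pipeline API.
# Labels (sarcasm_more / sarcasm_less) are taken from the widget config above.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hallisky/sarcasm-classifier-gpt4-data",
)

examples = [
    "Oh really, what a great idea! Let's just ignore all the facts and trot right ahead!",
    "What a great idea - let's continue!",
]
for text, result in zip(examples, classifier(examples)):
    print(f"{result['label']:>13}  {result['score']:.3f}  {text}")
```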
neuralmagic/Meta-Llama-3-8B-Instruct-FP8-KV
neuralmagic
"2024-06-19T16:10:20Z"
2,897
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "fp8", "vllm", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-20T18:44:16Z"
--- tags: - fp8 - vllm --- # Meta-Llama-3-8B-Instruct-FP8-KV ## Model Overview Meta-Llama-3-8B-Instruct quantized to FP8 weights and activations using per-tensor quantization, ready for inference with vLLM >= 0.5.0. This model checkpoint also includes per-tensor scales for FP8 quantized KV Cache, accessed through the `--kv-cache-dtype fp8` argument in vLLM. ```python from vllm import LLM model = LLM(model="neuralmagic/Meta-Llama-3-8B-Instruct-FP8-KV", kv_cache_dtype="fp8") result = model.generate("Hello, my name is") ``` ## Usage and Creation Produced using [AutoFP8 with calibration samples from ultrachat](https://github.com/neuralmagic/AutoFP8/blob/147fa4d9e1a90ef8a93f96fc7d9c33056ddc017a/example_dataset.py). ```python from datasets import load_dataset from transformers import AutoTokenizer from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig pretrained_model_dir = "meta-llama/Meta-Llama-3-8B-Instruct" quantized_model_dir = "Meta-Llama-3-8B-Instruct-FP8-KV" tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True) tokenizer.pad_token = tokenizer.eos_token ds = load_dataset("mgoin/ultrachat_2k", split="train_sft") examples = [tokenizer.apply_chat_template(batch["messages"], tokenize=False) for batch in ds] examples = tokenizer(examples, padding=True, truncation=True, return_tensors="pt").to("cuda") quantize_config = BaseQuantizeConfig( quant_method="fp8", activation_scheme="static", ignore_patterns=["re:.*lm_head"], kv_cache_quant_targets=("k_proj", "v_proj"), ) model = AutoFP8ForCausalLM.from_pretrained(pretrained_model_dir, quantize_config) model.quantize(examples) model.save_quantized(quantized_model_dir) ``` ## Evaluation ### Open LLM Leaderboard evaluation scores | | Meta-Llama-3-8B-Instruct | Meta-Llama-3-8B-Instruct-FP8 | Meta-Llama-3-8B-Instruct-FP8-KV<br>(this model) | | :------------------: | :----------------------: | :--------------------------: | :---------------------------------------------: | | gsm8k<br>5-shot | 75.44 | 74.37 | 74.98 |
PygmalionAI/pygmalion-2-13b
PygmalionAI
"2023-09-15T20:29:04Z"
2,896
72
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text generation", "instruct", "en", "dataset:PygmalionAI/PIPPA", "dataset:Open-Orca/OpenOrca", "dataset:Norquinal/claude_multiround_chat_30k", "dataset:jondurbin/airoboros-gpt4-1.4.1", "dataset:databricks/databricks-dolly-15k", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-04T22:05:31Z"
--- language: - en thumbnail: null tags: - text generation - instruct pipeline_tag: text-generation inference: false license: llama2 datasets: - PygmalionAI/PIPPA - Open-Orca/OpenOrca - Norquinal/claude_multiround_chat_30k - jondurbin/airoboros-gpt4-1.4.1 - databricks/databricks-dolly-15k --- <h1 style="text-align: center">Pygmalion-2 13B</h1> <h2 style="text-align: center">An instruction-tuned Llama-2 biased towards fiction writing and conversation.</h2> ## Model Details The long-awaited release of our new models based on Llama-2 is finally here. Pygmalion-2 13B (formerly known as Metharme) is based on [Llama-2 13B](https://huggingface.co/meta-llama/llama-2-13b-hf) released by Meta AI. The Metharme models were an experiment to try and get a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. After much deliberation, we reached the conclusion that the Metharme prompting format is superior (and easier to use) compared to the classic Pygmalion. This model was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories and conversations with synthetically generated instructions attached. This model is freely available for both commercial and non-commercial use, as per the Llama-2 license. ## Prompting The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`. The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history. ### Prompting example The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here's an example: ``` <|system|>Enter RP mode. Pretend to be {{char}} whose persona follows: {{persona}} You shall reply to the user while staying in character, and generate long responses. ``` ## Dataset The dataset used to fine-tune this model includes our own [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA), along with several other instruction datasets, and datasets acquired from various RP forums. ## Limitations and biases The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope. As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading. ## Acknowledgements We would like to thank [SpicyChat](https://spicychat.ai/) for sponsoring the training for this model. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
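To make the role-token format above concrete, here is a short, hedged sketch of assembling a Metharme-style prompt and generating with `transformers`; the persona text and sampling settings are placeholders, not values recommended by the model authors.

```python
# Sketch: build a prompt from the <|system|>/<|user|>/<|model|> role tokens
# described above and generate a reply. Persona and sampling values are examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PygmalionAI/pygmalion-2-13b")
model = AutoModelForCausalLM.from_pretrained(
    "PygmalionAI/pygmalion-2-13b", torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "<|system|>Enter RP mode. Pretend to be Captain Mira whose persona follows: "
    "A gruff but kind starship captain. "
    "You shall reply to the user while staying in character, and generate long responses."
    "<|user|>Captain, the engines are failing. What do we do?"
    "<|model|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```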
MaziyarPanahi/mergekit-slerp-mpovshq-GGUF
MaziyarPanahi
"2024-06-17T12:36:56Z"
2,896
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:WizardLM/WizardMath-7B-V1.1", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-mpovshq" ]
text-generation
"2024-06-17T12:09:28Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:WizardLM/WizardMath-7B-V1.1 - base_model:NousResearch/Hermes-2-Pro-Mistral-7B - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-mpovshq-GGUF base_model: mergekit-community/mergekit-slerp-mpovshq inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-mpovshq-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-mpovshq-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-mpovshq](https://huggingface.co/mergekit-community/mergekit-slerp-mpovshq) ## Description [MaziyarPanahi/mergekit-slerp-mpovshq-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-mpovshq-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-mpovshq](https://huggingface.co/mergekit-community/mergekit-slerp-mpovshq). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks ๐Ÿ™ Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
codesage/codesage-small
codesage
"2024-06-26T03:37:57Z"
2,895
4
transformers
[ "transformers", "pytorch", "custom_code", "code", "dataset:bigcode/the-stack-dedup", "arxiv:2402.01935", "arxiv:2305.06161", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-02-06T18:35:41Z"
--- license: apache-2.0 datasets: - bigcode/the-stack-dedup library_name: transformers language: - code --- ## CodeSage-Small ### Model description CodeSage is a new family of open code embedding models with an encoder architecture that support a wide range of source code understanding tasks. It is introduced in the paper: [Code Representation Learning At Scale by Dejiao Zhang*, Wasi Uddin Ahmad*, Ming Tan, Hantian Ding, Ramesh Nallapati, Dan Roth, Xiaofei Ma, Bing Xiang](https://arxiv.org/abs/2402.01935) (* indicates equal contribution). ### Pretraining data This checkpoint is trained on the Stack data (https://huggingface.co/datasets/bigcode/the-stack-dedup). Supported languages (9 in total) are as follows: c, c-sharp, go, java, javascript, typescript, php, python, ruby. ### Training procedure This checkpoint is first trained on code data via masked language modeling (MLM) and then on bimodal text-code pair data. Please refer to the paper for more details. ### How to use This checkpoint consists of an encoder (130M model), which can be used to extract code embeddings of 1024 dimension. It can be easily loaded using the AutoModel functionality and employs the Starcoder tokenizer (https://arxiv.org/pdf/2305.06161.pdf). ``` from transformers import AutoModel, AutoTokenizer checkpoint = "codesage/codesage-small" device = "cuda" # for GPU usage or "cpu" for CPU usage # Note: CodeSage requires adding eos token at the end of # each tokenized sequence to ensure good performance tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True, add_eos_token=True) model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True).to(device) inputs = tokenizer.encode("def print_hello_world():\tprint('Hello World!')", return_tensors="pt").to(device) embedding = model(inputs)[0] print(f'Dimension of the embedding: {embedding[0].size()}') # Dimension of the embedding: torch.Size([14, 1024]) ``` ### BibTeX entry and citation info ``` @inproceedings{ zhang2024code, title={{CODE} {REPRESENTATION} {LEARNING} {AT} {SCALE}}, author={Dejiao Zhang and Wasi Uddin Ahmad and Ming Tan and Hantian Ding and Ramesh Nallapati and Dan Roth and Xiaofei Ma and Bing Xiang}, booktitle={The Twelfth International Conference on Learning Representations}, year={2024}, url={https://openreview.net/forum?id=vfzRRjumpX} } ```
predibase/Meta-Llama-3-70B-Instruct-dequantized
predibase
"2024-05-03T19:29:36Z"
2,895
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-70B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-03T19:10:06Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - llama base_model: meta-llama/Meta-Llama-3-70B-Instruct ---
TencentARC/t2iadapter_depth_sd15v2
TencentARC
"2023-07-31T11:10:44Z"
2,892
2
diffusers
[ "diffusers", "art", "t2i-adapter", "controlnet", "stable-diffusion", "image-to-image", "arxiv:2302.08453", "base_model:runwayml/stable-diffusion-v1-5", "license:apache-2.0", "region:us" ]
image-to-image
"2023-07-14T19:01:05Z"
--- license: apache-2.0 base_model: runwayml/stable-diffusion-v1-5 tags: - art - t2i-adapter - controlnet - stable-diffusion - image-to-image --- # T2I Adapter - Depth T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint. This checkpoint provides conditioning on depth for the stable diffusion 1.5 checkpoint. ## Model Details - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** Apache 2.0 - **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453). - **Cite as:** @misc{ title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models}, author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie}, year={2023}, eprint={2302.08453}, archivePrefix={arXiv}, primaryClass={cs.CV} } ### Checkpoints | Model Name | Control Image Overview| Control Image Example | Generated Image Example | |---|---|---|---| |[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | A image with 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>| |[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>| |[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>| 
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>| |[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>| |[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>| |[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | An [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a> | |[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)|| |[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)|| |[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)|| |[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)|| ## Example 1. Dependencies ```sh pip install diffusers transformers controlnet_aux ``` 2. 
Run code: ```python from controlnet_aux import MidasDetector from PIL import Image from diffusers import T2IAdapter, StableDiffusionAdapterPipeline import torch midas = MidasDetector.from_pretrained("lllyasviel/Annotators") image = Image.open('./images/depth_input.png') image = midas(image) image.save('./images/depth.png') adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd15v2", torch_dtype=torch.float16) pipe = StableDiffusionAdapterPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16" ) pipe.to('cuda') generator = torch.Generator().manual_seed(1) depth_out = pipe(prompt="storm trooper giving a speech", image=image, generator=generator).images[0] depth_out.save('./images/depth_output.png') ``` ![depth_input](./images/depth_input.png) ![depth](./images/depth.png) ![depth_output](./images/depth_output.png)
TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF
TheBloke
"2023-09-27T12:47:09Z"
2,892
49
transformers
[ "transformers", "gguf", "llama", "facebook", "meta", "pytorch", "llama-2", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "arxiv:2307.09288", "base_model:uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
"2023-09-02T09:17:16Z"
--- language: - en license: llama2 library_name: transformers tags: - facebook - meta - pytorch - llama - llama-2 datasets: - garage-bAInd/Open-Platypus model_name: Speechess Lllama2 Hermes Orca-Platypus WizardLM 13B base_model: uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b inference: false model_creator: Jiangwen Su model_type: llama pipeline_tag: text-generation prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Speechess Lllama2 Hermes Orca-Platypus WizardLM 13B - GGUF - Model creator: [Jiangwen Su](https://huggingface.co/uukuguy) - Original model: [Speechess Lllama2 Hermes Orca-Platypus WizardLM 13B](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b) <!-- description start --> ## Description This repo contains GGUF format model files for [Jiangwen Su's Speechess Lllama2 Hermes Orca-Platypus WizardLM 13B](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. 
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF) * [Jiangwen Su's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF/blob/main/speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF/blob/main/speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF/blob/main/speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF/blob/main/speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF/blob/main/speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF/blob/main/speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF/blob/main/speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF/blob/main/speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF/blob/main/speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF/blob/main/speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF/blob/main/speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q6_K.gguf) | Q6_K | 6 | 
10.68 GB| 13.18 GB | very large, extremely low quality loss | | [speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF/blob/main/speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF and below it, a specific filename to download, such as: speechless-llama2-hermes-orca-platypus-wizardlm-13b.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF speechless-llama2-hermes-orca-platypus-wizardlm-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF speechless-llama2-hermes-orca-platypus-wizardlm-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m speechless-llama2-hermes-orca-platypus-wizardlm-13b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Speechless-Llama2-Hermes-Orca-Platypus-WizardLM-13B-GGUF", model_file="speechless-llama2-hermes-orca-platypus-wizardlm-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค€๊ต ๊น€, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้˜ฟๆ˜Ž, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Jiangwen Su's Speechess Lllama2 Hermes Orca-Platypus WizardLM 13B <p><h1> speechless-llama2-hermes-orca-platypus-wizardlm-13b </h1></p> speechless-llama2-hermes-orca-platypus-wizardlm-13b is a merge of NousResearch/Nous-Hermes-Llama2-13b, Open-Orca/OpenOrca-Platypus2-13B and WizardLM/WizardLM-13B-V1.2. | Metric | Value | | --- | --- | | ARC | | | HellaSwag | | | MMLU | | | TruthfulQA | | | Average | | # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes โ€” 7B, 13B, and 70B โ€” as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. 
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws).Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Metaโ€™s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. 
## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2โ€™s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)

## Reporting Issues
Please report any software "bug," or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)

## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|

<!-- original-model-card end -->
microsoft/cvt-21-384-22k
microsoft
"2022-10-11T13:47:26Z"
2,889
3
transformers
[ "transformers", "pytorch", "tf", "cvt", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2103.15808", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-04-04T11:33:08Z"
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Convolutional Vision Transformer (CvT) CvT-21 model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Wu et al. and first released in [this repository](https://github.com/microsoft/CvT). Disclaimer: The team releasing CvT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Usage Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, CvtForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('microsoft/cvt-21-384-22k') model = CvtForImageClassification.from_pretrained('microsoft/cvt-21-384-22k') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ```
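For quick experiments, the same checkpoint can also be used through the high-level `pipeline` API. The snippet below is a minimal sketch rather than part of the original release; the image URL and `top_k` value are only illustrative.

```python
from transformers import pipeline

# Wraps the image processor and classification model shown above into a single callable.
classifier = pipeline("image-classification", model="microsoft/cvt-21-384-22k")

# Pipelines accept a URL, a local path, or a PIL image.
predictions = classifier("http://images.cocodataset.org/val2017/000000039769.jpg", top_k=3)
for prediction in predictions:
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```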
timm/vit_base_patch8_224.dino
timm
"2024-02-09T18:13:19Z"
2,889
0
timm
[ "timm", "pytorch", "safetensors", "image-feature-extraction", "arxiv:2104.14294", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-feature-extraction
"2022-12-22T07:23:57Z"
---
license: apache-2.0
library_name: timm
tags:
- image-feature-extraction
- timm
---
# Model card for vit_base_patch8_224.dino

A Vision Transformer (ViT) image feature model. Trained with the self-supervised DINO method.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 85.8
  - GMACs: 66.9
  - Activations (M): 65.7
  - Image size: 224 x 224
- **Papers:**
  - Emerging Properties in Self-Supervised Vision Transformers: https://arxiv.org/abs/2104.14294
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Pretrain Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/dino

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('vit_base_patch8_224.dino', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'vit_base_patch8_224.dino',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 785, 768) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
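Because this checkpoint is a self-supervised feature backbone rather than a classifier, a typical downstream use is comparing images by their embeddings. The sketch below is not part of the original card; the duplicated URL is just a placeholder for two images of your own.

```python
from urllib.request import urlopen
from PIL import Image
import torch
import torch.nn.functional as F
import timm

model = timm.create_model('vit_base_patch8_224.dino', pretrained=True, num_classes=0)
model = model.eval()

data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

def embed(url: str) -> torch.Tensor:
    # returns a (1, num_features) pooled DINO embedding
    img = Image.open(urlopen(url)).convert('RGB')
    with torch.no_grad():
        return model(transforms(img).unsqueeze(0))

# placeholder URLs - substitute your own images
url = 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
emb_a = embed(url)
emb_b = embed(url)

print(F.cosine_similarity(emb_a, emb_b).item())  # 1.0 for identical inputs
```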
## Citation ```bibtex @inproceedings{caron2021emerging, title={Emerging properties in self-supervised vision transformers}, author={Caron, Mathilde and Touvron, Hugo and Misra, Ishan and J{'e}gou, Herv{'e} and Mairal, Julien and Bojanowski, Piotr and Joulin, Armand}, booktitle={Proceedings of the IEEE/CVF international conference on computer vision}, pages={9650--9660}, year={2021} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
Yntec/Gacha
Yntec
"2023-12-07T21:01:37Z"
2,889
3
diffusers
[ "diffusers", "safetensors", "Game Art", "General purpose", "Logos", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-12-07T20:24:29Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Game Art - General purpose - Logos - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image inference: true --- # Gacha A model with the gacha mechanic implemented to get you addicted to generating images. Samples and prompts: ![Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/LIe7UoawQcczhSuFEfTnC.png) (Click for larger) Top left: a lemon themed hamburger, high quality Top right: a basket full of cherries, grass by wess anderson, high quality Bottom left: A mushroom themed cake, high quality pie Bottom right: highly detailed chibi koala in kitchen, hammer chef, fish, dungeons and dragons
mradermacher/1min-v2-luxia-8b-GGUF
mradermacher
"2024-06-05T15:28:58Z"
2,888
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "base_model:esunn/1min-v2-luxia-8b", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-05T14:59:51Z"
--- base_model: esunn/1min-v2-luxia-8b language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/esunn/1min-v2-luxia-8b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.Q2_K.gguf) | Q2_K | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.IQ3_XS.gguf) | IQ3_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.Q3_K_S.gguf) | Q3_K_S | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.IQ3_S.gguf) | IQ3_S | 3.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.IQ3_M.gguf) | IQ3_M | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.Q3_K_L.gguf) | Q3_K_L | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.IQ4_XS.gguf) | IQ4_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.Q5_K_S.gguf) | Q5_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.Q5_K_M.gguf) | Q5_K_M | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.Q6_K.gguf) | Q6_K | 6.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/1min-v2-luxia-8b-GGUF/resolve/main/1min-v2-luxia-8b.f16.gguf) | f16 | 16.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
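For those who prefer to stay in Python rather than use the llama.cpp CLI, the following is a minimal sketch (not part of this card) that downloads one of the quants listed above with `huggingface_hub` and loads it with `llama-cpp-python`; the chosen file and generation settings are only illustrative.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Q4_K_M is the "fast, recommended" quant from the table above.
model_path = hf_hub_download(
    repo_id="mradermacher/1min-v2-luxia-8b-GGUF",
    filename="1min-v2-luxia-8b.Q4_K_M.gguf",
)

# n_ctx and n_gpu_layers are illustrative; adjust them for your hardware.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)
out = llm("Write one sentence about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```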
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
paust/pko-t5-small
paust
"2022-09-14T04:41:53Z"
2,887
4
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "ko", "arxiv:2105.09680", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2022-05-16T10:26:56Z"
---
language: ko
license: cc-by-4.0
---
# pko-t5-small
[Source Code](https://github.com/paust-team/pko-t5)

pko-t5 is a [t5 v1.1 model](https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/released_checkpoints.md) trained exclusively on Korean data.

To tokenize Korean it uses BBPE, which has no OOV issue, instead of sentencepiece, and it was trained on Korean data (Namuwiki, Wikipedia, the Modu Corpus, etc.) with unsupervised learning only, using T5's span corruption task.

When using pko-t5, please fine-tune it on your target task.

## Usage
The model is available through the transformers API. When using the tokenizer, please use `T5TokenizerFast` rather than `T5Tokenizer`. The model itself can be used as a regular `T5ForConditionalGeneration`.

### Example
```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained('paust/pko-t5-small')
model = T5ForConditionalGeneration.from_pretrained('paust/pko-t5-small')

input_ids = tokenizer(["qa question: ๋‹น์‹ ์˜ ์ด๋ฆ„์€ ๋ฌด์—‡์ธ๊ฐ€์š”?"], return_tensors="pt").input_ids
labels = tokenizer(["T5 ์ž…๋‹ˆ๋‹ค."], return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)

print(f"loss={outputs.loss} logits={outputs.logits}")
```

## KLUE evaluation (dev)

| | Model | ynat (macro F1) | sts (pearsonr/F1) | nli (acc) | ner (entity-level F1) | re (micro F1) | dp (LAS) | mrc (EM/F1) |
|-----|-----|-----------------|-------------------|-----------|-----------------------|---------------|----------|-------------|
| | Baseline | **87.30** | **93.20/86.13** | **89.50** | 86.06 | 71.06 | 87.93 | **75.26/-** |
| FT | [pko-t5-small](https://huggingface.co/paust/pko-t5-small) (77M) | 86.21 | 77.99/77.01 | 69.20 | 82.60 | 66.46 | 93.15 | 43.81/46.58 |
| FT | [pko-t5-base](https://huggingface.co/paust/pko-t5-base) (250M) | 87.29 | 90.25/83.43 | 79.73 | 87.80 | 67.23 | 97.28 | 61.53/64.74 |
| FT | [pko-t5-large](https://huggingface.co/paust/pko-t5-large) (800M) | 87.12 | 92.05/85.24 | 84.96 | **88.18** | **75.17** | **97.60** | 68.01/71.44 |
| MT | pko-t5-small | 84.54 | 68.50/72.02 | 51.16 | 74.69 | 66.11 | 80.40 | 43.60/46.28 |
| MT | pko-t5-base | 86.89 | 83.96/80.30 | 72.03 | 85.27 | 66.59 | 95.05 | 61.11/63.94 |
| MT | pko-t5-large | 87.57 | 91.93/86.29 | 83.63 | 87.41 | 71.34 | 96.99 | 70.70/73.72 |

- FT: single-task fine-tuning / MT: multi-task fine-tuning
- [Baseline](https://arxiv.org/abs/2105.09680): SOTA scores on the dev set reported in the KLUE paper

## License
pko-t5, built by [PAUST](https://paust.io), is released under the [MIT license](https://github.com/paust-team/pko-t5/blob/main/LICENSE).
Yntec/ResidentCNZCartoon3D
Yntec
"2024-01-07T20:17:34Z"
2,887
1
diffusers
[ "diffusers", "safetensors", "Style", "Cartoon", "Disney", "Pixar", "Dreamworks", "residentchiefnz", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-01-07T19:25:13Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Style - Cartoon - Disney - Pixar - Dreamworks - residentchiefnz - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers --- # RCNZ Cartoon 3D v2 Original page: https://civitai.com/models/66347?modelVersionId=82547 Samples and prompts (in no particular order): ![Cartoon 3D Free AI image generator samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/A0V4NM4RSEJzvFUi96UqM.png) (Click for larger) highquality, masterpiece, 1girl, Chi-Chi, close up, arms up, pink helmet, black hair, black eyes, blush, bikini armor, enjoying, aqua cape, pink gloves, pink boots, cleavage. cave, rock, mountain. blue collar ultra high res, a cute girl, detailed, cg, zelda princess, AS Younger, cute pose cute sister playing chess, DETAILED CHIBI, Magazine ad, iconic, 1940, from the movie, Cartoon, sharp focus. acrylic art on canvas by Disney and Clay Mann and ROSSDRAWS. Highly detailed, High Quality, Masterpiece, beautiful, cute girl as toon plumber, blue cap, red overalls, Peach
Menouar/saqr-7b-merged
Menouar
"2024-02-16T09:20:31Z"
2,887
1
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "saqr-7b-instrcut", "Pytorch", "conversational", "custom_code", "en", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:openbmb/UltraFeedback", "dataset:gsm8k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-16T09:10:53Z"
--- library_name: transformers tags: - saqr-7b-instrcut - Pytorch license: apache-2.0 datasets: - HuggingFaceH4/ultrachat_200k - openbmb/UltraFeedback - gsm8k language: - en pipeline_tag: text-generation --- # saqr-7b-merged This model is a merged version of [**saqr-7b-instruct**](https://huggingface.co/Menouar/saqr-7b-instruct) with LoRA Adapters. <img src="https://huggingface.co/Menouar/saqr-7b-instruct/resolve/main/saqr.jpg" alt="Saqr Logo" width="800" style="margin-left:auto; margin-right:auto; display:block;"/>
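The card itself does not include inference code. The following is an untested sketch assuming the standard `transformers` text-generation workflow; because the repository is Falcon-based and tagged `custom_code`, `trust_remote_code=True` is passed, and the prompt and generation settings are only illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Menouar/saqr-7b-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Explain in one sentence what merging LoRA adapters into a base model means."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```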
TIGER-Lab/Mantis-8B-Idefics2
TIGER-Lab
"2024-05-23T03:50:29Z"
2,887
8
transformers
[ "transformers", "safetensors", "idefics2", "pretraining", "multimodal", "lmm", "vlm", "llava", "siglip", "llama3", "mantis", "en", "dataset:TIGER-Lab/Mantis-Instruct", "arxiv:2405.01483", "base_model:HuggingFaceM4/idefics2-8b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-05-23T03:02:47Z"
--- license: apache-2.0 base_model: HuggingFaceM4/idefics2-8b tags: - multimodal - lmm - vlm - llava - siglip - llama3 - mantis model-index: - name: mantis-8b-idefics2_8192 results: [] datasets: - TIGER-Lab/Mantis-Instruct language: - en --- # ๐Ÿ”ฅ Mantis [Paper](https://arxiv.org/abs/2405.01483) | [Website](https://tiger-ai-lab.github.io/Mantis/) | [Github](https://github.com/TIGER-AI-Lab/Mantis) | [Models](https://huggingface.co/collections/TIGER-Lab/mantis-6619b0834594c878cdb1d6e4) | [Demo](https://huggingface.co/spaces/TIGER-Lab/Mantis) ![Mantis](https://tiger-ai-lab.github.io/Mantis/images/radar_chart.png) **Excited to announce Mantis-Idefics2, with enhanced ability in multi-image scenarios!** It's fine-tuned on [Mantis-Instruct](https://huggingface.co/datasets/TIGER-Lab/Mantis-Instruct) from [Idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) ## Summary - Mantis-Idefics2 is an LMM with **interleaved text and image as inputs**, trained on Mantis-Instruct under academic-level resources (i.e. 36 hours on 16xA100-40G). - Mantis is trained to have multi-image skills including co-reference, reasoning, comparing, temporal understanding. - Mantis reaches the state-of-the-art performance on five multi-image benchmarks (NLVR2, Q-Bench, BLINK, MVBench, Mantis-Eval), and also maintain a strong single-image performance on par with CogVLM and Emu2. ## Multi-Image Performance | Models | Size | Format | NLVR2 | Q-Bench | Mantis-Eval | BLINK | MVBench | Avg | |--------------------|:----:|:--------:|:-----:|:-------:|:-----------:|:-----:|:-------:|:----:| | GPT-4V | - | sequence | 88.80 | 76.52 | 62.67 | 51.14 | 43.50 | 64.5 | | Open Source Models | | | | | | | | | | Random | - | - | 48.93 | 40.20 | 23.04 | 38.09 | 27.30 | 35.5 | | Kosmos2 | 1.6B | merge | 49.00 | 35.10 | 30.41 | 37.50 | 21.62 | 34.7 | | LLaVA-v1.5 | 7B | merge | 53.88 | 49.32 | 31.34 | 37.13 | 36.00 | 41.5 | | LLava-V1.6 | 7B | merge | 58.88 | 54.80 | 45.62 | 39.55 | 40.90 | 48.0 | | Qwen-VL-Chat | 7B | merge | 58.72 | 45.90 | 39.17 | 31.17 | 42.15 | 43.4 | | Fuyu | 8B | merge | 51.10 | 49.15 | 27.19 | 36.59 | 30.20 | 38.8 | | BLIP-2 | 13B | merge | 59.42 | 51.20 | 49.77 | 39.45 | 31.40 | 46.2 | | InstructBLIP | 13B | merge | 60.26 | 44.30 | 45.62 | 42.24 | 32.50 | 45.0 | | CogVLM | 17B | merge | 58.58 | 53.20 | 45.16 | 41.54 | 37.30 | 47.2 | | OpenFlamingo | 9B | sequence | 36.41 | 19.60 | 12.44 | 39.18 | 7.90 | 23.1 | | Otter-Image | 9B | sequence | 49.15 | 17.50 | 14.29 | 36.26 | 15.30 | 26.5 | | Idefics1 | 9B | sequence | 54.63 | 30.60 | 28.11 | 24.69 | 26.42 | 32.9 | | VideoLLaVA | 7B | sequence | 56.48 | 45.70 | 35.94 | 38.92 | 44.30 | 44.3 | | Emu2-Chat | 37B | sequence | 58.16 | 50.05 | 37.79 | 36.20 | 39.72 | 44.4 | | Vila | 8B | sequence | 76.45 | 45.70 | 51.15 | 39.30 | 49.40 | 52.4 | | Idefics2 | 8B | sequence | 86.87 | 57.00 | 48.85 | 45.18 | 29.68 | 53.5 | | Mantis-CLIP | 8B | sequence | 84.66 | 66.00 | 55.76 | 47.06 | 48.30 | 60.4 | | Mantis-SIGLIP | 8B | sequence | 87.43 | 69.90 | **59.45** | 46.35 | 50.15 | 62.7 | | Mantis-Flamingo | 9B | sequence | 52.96 | 46.80 | 32.72 | 38.00 | 40.83 | 42.3 | | Mantis-Idefics2 | 8B | sequence | **89.71** | **75.20** | 57.14 | **49.05** | **51.38** | **64.5** | | $\Delta$ over SOTA | - | - | +2.84 | +18.20 | +8.30 | +3.87 | +1.98 | +11.0 | ## Single-Image Performance | Model | Size | TextVQA | VQA | MMB | MMMU | OKVQA | SQA | MathVista | Avg | |-----------------|:----:|:-------:|:----:|:----:|:----:|:-----:|:----:|:---------:|:----:| | OpenFlamingo | 9B | 46.3 | 58.0 | 
32.4 | 28.7 | 51.4 | 45.7 | 18.6 | 40.2 | | Idefics1 | 9B | 39.3 | 68.8 | 45.3 | 32.5 | 50.4 | 51.6 | 21.1 | 44.1 | | InstructBLIP | 7B | 33.6 | 75.2 | 38.3 | 30.6 | 45.2 | 70.6 | 24.4 | 45.4 | | Yi-VL | 6B | 44.8 | 72.5 | 68.4 | 39.1 | 51.3 | 71.7 | 29.7 | 53.9 | | Qwen-VL-Chat | 7B | 63.8 | 78.2 | 61.8 | 35.9 | 56.6 | 68.2 | 15.5 | 54.3 | | LLaVA-1.5 | 7B | 58.2 | 76.6 | 64.8 | 35.3 | 53.4 | 70.4 | 25.6 | 54.9 | | Emu2-Chat | 37B | <u>66.6</u> | **84.9** | 63.6 | 36.3 | **64.8** | 65.3 | 30.7 | 58.9 | | CogVLM | 17B | **70.4** | <u>82.3</u> | 65.8 | 32.1 | <u>64.8</u> | 65.6 | 35.0 | 59.4 | | Idefics2 | 8B | 70.4 | 79.1 | <u>75.7</u> | **43.0** | 53.5 | **86.5** | **51.4** | **65.7** | | Mantis-CLIP | 8B | 56.4 | 73.0 | 66.0 | 38.1 | 53.0 | 73.8 | 31.7 | 56.0 | | Mantis-SigLIP | 8B | 59.2 | 74.9 | 68.7 | 40.1 | 55.4 | 74.9 | 34.4 | 58.2 | | Mantis-Idefics2 | 8B | 63.5 | 77.6 | 75.7 | <u>41.1</u> | 52.6 | <u>81.3</u> | <u>40.4</u> | <u>61.7</u> | ## How to use ### Run example inference: ```python import requests import torch from PIL import Image from io import BytesIO from transformers import AutoProcessor, AutoModelForVision2Seq from transformers.image_utils import load_image processor = AutoProcessor.from_pretrained("TIGER-Lab/Mantis-8B-Idefics2") # do_image_splitting is False by default model = AutoModelForVision2Seq.from_pretrained( "TIGER-Lab/Mantis-8B-Idefics2", device_map="auto" ) generation_kwargs = { "max_new_tokens": 1024, "num_beams": 1, "do_sample": False } # Note that passing the image urls (instead of the actual pil images) to the processor is also possible image1 = load_image("https://cdn.britannica.com/59/94459-050-DBA42467/Skyline-Chicago.jpg") image2 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg") image3 = load_image("https://cdn.britannica.com/68/170868-050-8DDE8263/Golden-Gate-Bridge-San-Francisco.jpg") images = [image1, image2, image3] query1 = "What cities image 1, image 2, and image 3 belong to respectively? Answer me in order." query2 = "Which one do you recommend for a visit? and why?" query3 = "Which picture has most cars in it?" 
### Chat ### Round 1 messages = [ { "role": "user", "content": [ {"type": "image"}, {"type": "image"}, {"type": "image"}, {"type": "text", "text": query1}, ] } ] prompt = processor.apply_chat_template(messages, add_generation_prompt=True) inputs = processor(text=prompt, images=images, return_tensors="pt") inputs = {k: v.to(model.device) for k, v in inputs.items()} # Generate generated_ids = model.generate(**inputs, **generation_kwargs) response = processor.batch_decode(generated_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True) print("User: ", query1) print("ASSISTANT: ", response[0]) ### Round 2 messages.append( { "role": "assistant", "content": [ {"type": "text", "text": response[0]}, ] } ) messages.append( { "role": "user", "content": [ {"type": "text", "text": query2}, ] } ) prompt = processor.apply_chat_template(messages, add_generation_prompt=True) inputs = processor(text=prompt, images=images, return_tensors="pt") inputs = {k: v.to(model.device) for k, v in inputs.items()} generated_ids = model.generate(**inputs, **generation_kwargs) response = processor.batch_decode(generated_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True) print("User: ", query2) print("ASSISTANT: ", response[0]) ### Round 3 messages.append( { "role": "assistant", "content": [ {"type": "text", "text": response[0]}, ] } ) messages.append( { "role": "user", "content": [ {"type": "text", "text": query3}, ] } ) prompt = processor.apply_chat_template(messages, add_generation_prompt=True) inputs = processor(text=prompt, images=images, return_tensors="pt") inputs = {k: v.to(model.device) for k, v in inputs.items()} generated_ids = model.generate(**inputs, **generation_kwargs) response = processor.batch_decode(generated_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True) print("User: ", query3) print("ASSISTANT: ", response[0]) """ User: What cities image 1, image 2, and image 3 belong to respectively? Answer me in order. ASSISTANT: Chicago, New York, San Francisco User: Which one do you recommend for a visit? and why? ASSISTANT: New York - because it's a bustling metropolis with iconic landmarks like the Statue of Liberty and the Empire State Building. User: Which picture has most cars in it? ASSISTANT: Image 3 """ ``` ### Training See [mantis/train](https://github.com/TIGER-AI-Lab/Mantis/tree/main/mantis/train) for details ### Evaluation See [mantis/benchmark](https://github.com/TIGER-AI-Lab/Mantis/tree/main/mantis/benchmark) for details **Please cite our paper or give a star to out Github repo if you find this model useful** ## Citation ``` @inproceedings{Jiang2024MANTISIM, title={MANTIS: Interleaved Multi-Image Instruction Tuning}, author={Dongfu Jiang and Xuan He and Huaye Zeng and Cong Wei and Max W.F. Ku and Qian Liu and Wenhu Chen}, publisher={arXiv2405.01483} year={2024}, } ```
mradermacher/Falcon2-8B-Romanian-GGUF
mradermacher
"2024-06-05T17:28:34Z"
2,884
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "ro", "base_model:ssmits/Falcon2-8B-Romanian", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-05T17:00:15Z"
--- base_model: ssmits/Falcon2-8B-Romanian language: - ro library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ssmits/Falcon2-8B-Romanian <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.IQ3_XS.gguf) | IQ3_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.Q5_K_S.gguf) | Q5_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.Q5_K_M.gguf) | Q5_K_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.Q6_K.gguf) | Q6_K | 6.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.Q8_0.gguf) | Q8_0 | 8.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Romanian-GGUF/resolve/main/Falcon2-8B-Romanian.f16.gguf) | f16 | 16.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to 
questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/threebird-scribe-alpha0.2-7B-GGUF
mradermacher
"2024-06-08T22:03:15Z"
2,884
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Lambent/threebird-scribe-alpha0.2-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-08T21:06:21Z"
--- base_model: Lambent/threebird-scribe-alpha0.2-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Lambent/threebird-scribe-alpha0.2-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/threebird-scribe-alpha0.2-7B-GGUF/resolve/main/threebird-scribe-alpha0.2-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
keremberke/yolov8s-plane-detection
keremberke
"2024-05-23T12:00:17Z"
2,883
4
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/plane-detection", "license:agpl-3.0", "model-index", "region:us" ]
object-detection
"2023-01-29T06:42:07Z"
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/plane-detection model-index: - name: keremberke/yolov8s-plane-detection results: - task: type: object-detection dataset: type: keremberke/plane-detection name: plane-detection split: validation metrics: - type: precision value: 0.995 name: [email protected](box) license: agpl-3.0 --- <div align="center"> <img width="640" alt="keremberke/yolov8s-plane-detection" src="https://huggingface.co/keremberke/yolov8s-plane-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['planes'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8s-plane-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
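As a small follow-up (not from the original card), the detections returned above can be unpacked into plain coordinates and confidences; attribute names follow the `ultralytics` v8 `Boxes` API, and `results` is assumed to come from the prediction snippet above.

```python
# continues from the prediction example above (`results` from model.predict(image))
boxes = results[0].boxes
for xyxy, conf, cls in zip(boxes.xyxy, boxes.conf, boxes.cls):
    x1, y1, x2, y2 = [float(v) for v in xyxy]
    print(f"plane (confidence {float(conf):.2f}) at [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")
```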
turboderp/Cat-Llama-3-70B-instruct
turboderp
"2024-05-08T19:29:00Z"
2,881
47
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-05T09:57:47Z"
---
license: llama3
---
# Cat-llama3-instruct

## Abstract
We present Cat-llama3-instruct, a Llama 3 70B finetuned model focusing on system prompt fidelity, helpfulness and character engagement. The model aims to respect the system prompt to an extreme degree, provide helpful information regardless of the situation, and offer maximum character immersion (role play) in the given scenes.

## Introduction
Llama 3 70B provides a brand new platform that's more knowledgeable and steerable than the previous generations of products. However, general-purpose finetunes for the 70B version are currently lacking. Cat-llama3-instruct 70B aims to address the shortcomings of traditional models by applying heavy filtration for helpfulness, summarization for system/character card fidelity, and paraphrasing for character immersion.

Specific Aims:

* System Instruction fidelity
* Chain of Thought (COT)
* Character immersion
* Helpfulness for biosciences and general science

## Methods

*Dataset Preparation*

Huggingface datasets containing instruction-response pairs were systematically pulled. We have trained a GPT model on GPT-4 responses exclusively to serve as a standard model.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/HovhwLIGO9cz8Az-h0tkn.png)

(Fig1. Huggingface dataset population distribution and filtration for each component)

For each pulled record, we measure the perplexity of the entry against the GPT-4-trained model, and select specifically for GPT-4-quality data. We note that a considerable amount of GPT-4 responses contain refusals. A BERT model was trained on refusals to classify the records. For each entry, we score it for quality & helpfulness (Y) and refusals (X). A main population is retrieved and we note that refusals stop at ~20% refusal score. Thus all subsequent dataset processing has the 20% portion dropped.

We further filter for length and COT responses:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/-z9cZDNuGyCftdrZwR_46.png)

(Fig 2. COT responses are all >50 tokens single turn)

All training records use at least one correlative. Most of the training records contain two or more thought processes (COT).

Due to the nature of my research, I also pulled a significant amount of data from Chat Doctor, favouring detailed and step-by-step diagnosis.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/RnXyoR6wyBDWfIy3vMvWm.png)

(Fig 3. Health related tasks are all >100 tokens, peaking at 450 tokens single turn)

For each record, a detailed system card is generated to reflect the exact place, time and people/their appearance/personality (if applicable).

*Model Training*

16*A100 for 14 days, 4 epochs.

## Usage

* BOS, CHATML

Please make sure you use chatml with the llama3 BOS:

```
<|begin_of_text|><|im_start|>system
The following is a conversation between an user and a cat having dual degrees in biomedical sciences and quantum physics. The cat ends every its response with Nyan and does cute actions<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
Cat: Nyan! Hello there! How are you? It's always great to meet new friends. My fur is a bit ruffled, but I'm feeling good today. How about you? Is there anything you'd like to talk about or ask? I'm not the best at answering questions, but I'll do my best to help out. After all, that's what friends are for, right? Let's chat and have a fun time together!
Nyan!<|im_end|>
```

## Showcases:
The model does COT in grey, calculated response in black; note that this behaviour is achieved through the system card instruction to showcase system-card fidelity, and is not fine-tuned into the model.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/6smLuiePQa8Y2H19ie-ZY.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/g9oP03RboHBBipk70GIHO.png)

(Fig 4. Showcasing the model doing COT to solve difficult tasks, extending and enriching its own answers)

## Discord server
Feel free to visit the [ExLlama discord server](https://discord.gg/mUvR4tQfhV) if you have any questions.
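For completeness, here is a minimal, untested sketch (not from the original card) of driving the ChatML-with-Llama-3-BOS format described in the Usage section from `transformers`; the system prompt and generation settings are only illustrative, and a 70B model requires multiple GPUs or offloading.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "turboderp/Cat-Llama-3-70B-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build the prompt exactly as shown in the Usage section: Llama 3 BOS followed by ChatML turns.
prompt = (
    "<|begin_of_text|><|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```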
RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf
RichardErkhov
"2024-06-24T21:30:03Z"
2,881
0
null
[ "gguf", "region:us" ]
null
"2024-06-24T21:22:14Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 - GGUF - Model creator: https://huggingface.co/habanoz/ - Original model: https://huggingface.co/habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q2_K.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q2_K.gguf) | Q2_K | 0.4GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.IQ3_S.gguf) | IQ3_S | 0.47GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.IQ3_M.gguf) | IQ3_M | 0.48GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q3_K.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q3_K.gguf) | Q3_K | 0.51GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | 
[TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q4_0.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q4_0.gguf) | Q4_0 | 0.59GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q4_K.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q4_K.gguf) | Q4_K | 0.62GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q4_1.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q4_1.gguf) | Q4_1 | 0.65GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q5_0.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q5_0.gguf) | Q5_0 | 0.71GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q5_K_S.gguf) | Q5_K_S | 0.71GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q5_K.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q5_K.gguf) | Q5_K | 0.73GB | | 
[TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q5_1.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q5_1.gguf) | Q5_1 | 0.77GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q6_K.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q6_K.gguf) | Q6_K | 0.84GB | | [TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q8_0.gguf](https://huggingface.co/RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- language: - en license: apache-2.0 datasets: - OpenAssistant/oasst_top1_2023-08-25 pipeline_tag: text-generation model-index: - name: TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 31.4 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 54.24 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 25.36 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 42.47 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 57.7 name: accuracy source: url: 
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 1.36 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 name: Open LLM Leaderboard --- TinyLlama-1.1B-intermediate-step-715k-1.5T finetuned using OpenAssistant/oasst_top1_2023-08-25 dataset. Qlora is used. Adapter is merged. SFT code: https://github.com/habanoz/qlora.git Command used: ```bash accelerate launch $BASE_DIR/qlora/train.py \ --model_name_or_path $BASE_MODEL \ --working_dir $BASE_DIR/$OUTPUT_NAME-checkpoints \ --output_dir $BASE_DIR/$OUTPUT_NAME-peft \ --merged_output_dir $BASE_DIR/$OUTPUT_NAME \ --final_output_dir $BASE_DIR/$OUTPUT_NAME-final \ --num_train_epochs 3 \ --logging_steps 1 \ --save_strategy steps \ --save_steps 75 \ --save_total_limit 2 \ --data_seed 11422 \ --evaluation_strategy steps \ --per_device_eval_batch_size 4 \ --eval_dataset_size 0.01 \ --eval_steps 75 \ --max_new_tokens 1024 \ --dataloader_num_workers 3 \ --logging_strategy steps \ --do_train \ --do_eval \ --lora_r 64 \ --lora_alpha 16 \ --lora_modules all \ --bits 4 \ --double_quant \ --quant_type nf4 \ --lr_scheduler_type constant \ --dataset oasst1-top1 \ --dataset_format oasst1 \ --model_max_len 1024 \ --per_device_train_batch_size 4 \ --gradient_accumulation_steps 4 \ --learning_rate 1e-5 \ --adam_beta2 0.999 \ --max_grad_norm 0.3 \ --lora_dropout 0.0 \ --weight_decay 0.0 \ --seed 11422 \ --gradient_checkpointing \ --use_flash_attention_2 \ --ddp_find_unused_parameters False ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1) | Metric |Value| |---------------------------------|----:| |Avg. |35.42| |AI2 Reasoning Challenge (25-Shot)|31.40| |HellaSwag (10-Shot) |54.24| |MMLU (5-Shot) |25.36| |TruthfulQA (0-shot) |42.47| |Winogrande (5-shot) |57.70| |GSM8k (5-shot) | 1.36|
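For trying one of the quantized files listed above locally, the following is a minimal sketch that is not part of the original card: it assumes `huggingface_hub` and `llama-cpp-python` are installed, picks the Q4_K_M file arbitrarily from the table, and uses an illustrative prompt (the exact prompt format expected by this fine-tune is not specified here).

```python
# Minimal local-inference sketch (assumptions: llama-cpp-python installed,
# Q4_K_M chosen arbitrarily from the quant table above).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = (
    "RichardErkhov/habanoz_-_TinyLlama-1.1B-intermediate-step-715k-1.5T"
    "-lr-5-3epochs-oasst1-top1-instruct-V1-gguf"
)
filename = (
    "TinyLlama-1.1B-intermediate-step-715k-1.5T"
    "-lr-5-3epochs-oasst1-top1-instruct-V1.Q4_K_M.gguf"
)

gguf_path = hf_hub_download(repo_id=repo_id, filename=filename)

# n_ctx=1024 mirrors the --model_max_len 1024 value from the training command above.
llm = Llama(model_path=gguf_path, n_ctx=1024)

out = llm("Explain in one sentence what quantization does to a language model.", max_tokens=64)
print(out["choices"][0]["text"])
```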
larenspear/llama2-13b-WildJailbreak-Q6_K-GGUF
larenspear
"2024-06-30T23:23:46Z"
2,881
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:larenspear/copy_of_wildjailbreak_13", "license:apache-2.0", "region:us" ]
null
"2024-06-30T22:35:06Z"
--- base_model: larenspear/copy_of_wildjailbreak_13 language: - en license: apache-2.0 tags: - llama-cpp - gguf-my-repo extra_gated_prompt: Access to this model is automatically granted upon accepting the [AI2 Responsible Use Guidelines](https://allenai.org/responsible-use.pdf), and completing all fields below extra_gated_fields: Your full name: text Organization or entity you are affiliated with: text State or country you are located in: text Contact email: text Please describe your intended use of the low risk artifact(s): text I understand that this model is a research artifact that may contain or produce unfiltered, toxic, or harmful material: checkbox I agree to use this model for research purposes in accordance with the AI2 Responsible Use Guidelines: checkbox I agree that AI2 may use my information as described in the Privacy Policy: checkbox I certify that the information I have provided is true and accurate: checkbox --- # larenspear/copy_of_wildjailbreak_13-Q6_K-GGUF This model was converted to GGUF format from [`larenspear/copy_of_wildjailbreak_13`](https://huggingface.co/larenspear/copy_of_wildjailbreak_13) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/larenspear/copy_of_wildjailbreak_13) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo larenspear/copy_of_wildjailbreak_13-Q6_K-GGUF --hf-file copy_of_wildjailbreak_13-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo larenspear/copy_of_wildjailbreak_13-Q6_K-GGUF --hf-file copy_of_wildjailbreak_13-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo larenspear/copy_of_wildjailbreak_13-Q6_K-GGUF --hf-file copy_of_wildjailbreak_13-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo larenspear/copy_of_wildjailbreak_13-Q6_K-GGUF --hf-file copy_of_wildjailbreak_13-q6_k.gguf -c 2048 ```
urchade/gliner_medium-v1
urchade
"2024-05-07T15:54:25Z"
2,879
5
gliner
[ "gliner", "pytorch", "dataset:Universal-NER/Pile-NER-type", "arxiv:2311.08526", "license:cc-by-nc-4.0", "region:us" ]
null
"2024-03-09T14:50:41Z"
--- license: cc-by-nc-4.0 datasets: - Universal-NER/Pile-NER-type library_name: gliner --- # Model Card for GLiNER-M GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and Large Language Models (LLMs) that, despite their flexibility, are costly and large for resource-constrained scenarios. This version has been trained on the **Pile-NER** dataset (Research purpose) ## Links * Paper: https://arxiv.org/abs/2311.08526 * Repository: https://github.com/urchade/GLiNER ## Available models | Release | Model Name | # of Parameters | Language | License | | - | - | - | - | - | | v0 | [urchade/gliner_base](https://huggingface.co/urchade/gliner_base)<br>[urchade/gliner_multi](https://huggingface.co/urchade/gliner_multi) | 209M<br>209M | English<br>Multilingual | cc-by-nc-4.0 | | v1 | [urchade/gliner_small-v1](https://huggingface.co/urchade/gliner_small-v1)<br>[urchade/gliner_medium-v1](https://huggingface.co/urchade/gliner_medium-v1)<br>[urchade/gliner_large-v1](https://huggingface.co/urchade/gliner_large-v1) | 166M<br>209M<br>459M | English <br> English <br> English | cc-by-nc-4.0 | | v2 | [urchade/gliner_small-v2](https://huggingface.co/urchade/gliner_small-v2)<br>[urchade/gliner_medium-v2](https://huggingface.co/urchade/gliner_medium-v2)<br>[urchade/gliner_large-v2](https://huggingface.co/urchade/gliner_large-v2) | 166M<br>209M<br>459M | English <br> English <br> English | apache-2.0 | | v2.1 | [urchade/gliner_small-v2.1](https://huggingface.co/urchade/gliner_small-v2.1)<br>[urchade/gliner_medium-v2.1](https://huggingface.co/urchade/gliner_medium-v2.1)<br>[urchade/gliner_large-v2.1](https://huggingface.co/urchade/gliner_large-v2.1) <br>[urchade/gliner_multi-v2.1](https://huggingface.co/urchade/gliner_multi-v2.1) | 166M<br>209M<br>459M<br>209M | English <br> English <br> English <br> Multilingual | apache-2.0 | ## Installation To use this model, you must install the GLiNER Python library: ``` !pip install gliner ``` ## Usage Once you've downloaded the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`. ```python from gliner import GLiNER model = GLiNER.from_pretrained("urchade/gliner_medium-v1") text = """ Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kษพiสƒหˆtjษnu สษ”หˆnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time. 
""" labels = ["person", "award", "date", "competitions", "teams"] entities = model.predict_entities(text, labels) for entity in entities: print(entity["text"], "=>", entity["label"]) ``` ``` Cristiano Ronaldo dos Santos Aveiro => person 5 February 1985 => date Al Nassr => teams Portugal national team => teams Ballon d'Or => award UEFA Men's Player of the Year Awards => award European Golden Shoes => award UEFA Champions Leagues => competitions UEFA European Championship => competitions UEFA Nations League => competitions Champions League => competitions European Championship => competitions ``` ## Named Entity Recognition benchmark result ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317233cc92fd6fee317e030/Y5f7tK8lonGqeeO6L6bVI.png) ## Model Authors The model authors are: * [Urchade Zaratiana](https://huggingface.co/urchade) * Nadi Tomeh * Pierre Holat * Thierry Charnois ## Citation ```bibtex @misc{zaratiana2023gliner, title={GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer}, author={Urchade Zaratiana and Nadi Tomeh and Pierre Holat and Thierry Charnois}, year={2023}, eprint={2311.08526}, archivePrefix={arXiv}, primaryClass={cs.CL} }
RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf
RichardErkhov
"2024-06-25T22:00:44Z"
2,878
0
null
[ "gguf", "region:us" ]
null
"2024-06-25T21:33:58Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged - GGUF - Model creator: https://huggingface.co/SebastianSchramm/ - Original model: https://huggingface.co/SebastianSchramm/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged/ | Name | Quant method | Size | | ---- | ---- | ---- | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q2_K.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q2_K.gguf) | Q2_K | 0.4GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.IQ3_S.gguf) | IQ3_S | 0.47GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.IQ3_M.gguf) | IQ3_M | 0.48GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q3_K.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q3_K.gguf) | Q3_K | 0.51GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q4_0.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q4_0.gguf) | Q4_0 | 0.59GB | | 
[tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q4_K.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q4_K.gguf) | Q4_K | 0.62GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q4_1.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q4_1.gguf) | Q4_1 | 0.65GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q5_0.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q5_0.gguf) | Q5_0 | 0.71GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q5_K_S.gguf) | Q5_K_S | 0.71GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q5_K.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q5_K.gguf) | Q5_K | 0.73GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q5_1.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q5_1.gguf) | Q5_1 | 0.77GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q6_K.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q6_K.gguf) | Q6_K | 0.84GB | | [tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q8_0.gguf](https://huggingface.co/RichardErkhov/SebastianSchramm_-_tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged-gguf/blob/main/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- 
license: mit language: - en --- ## Model description - **Model type:** A 1.1B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English - **License:** MIT - **Finetuned from model:** [PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T)
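Since the description above does not include a usage snippet, here is a hedged sketch of loading the unquantized source model linked at the top of this card with `transformers`; the GGUF files in this repository are meant for llama.cpp-compatible runtimes instead. The prompt is purely illustrative, as the chat format used during DPO training is not documented here.

```python
# Sketch only: load the original (unquantized) model referenced above with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SebastianSchramm/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tokenizer("The three primary colors are", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```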
timm/vit_base_patch16_224.orig_in21k_ft_in1k
timm
"2023-05-06T00:00:51Z"
2,877
2
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-21k", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-22T07:26:58Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-21k --- # Model card for vit_base_patch16_224.orig_in21k_ft_in1k A Vision Transformer (ViT) image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k in JAX by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 86.6 - GMACs: 16.9 - Activations (M): 16.5 - Image size: 224 x 224 - **Papers:** - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-21k - **Original:** https://github.com/google-research/vision_transformer ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_base_patch16_224.orig_in21k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_base_patch16_224.orig_in21k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 197, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
timm/swinv2_base_window8_256.ms_in1k
timm
"2024-02-10T23:30:54Z"
2,877
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2111.09883", "license:mit", "region:us" ]
image-classification
"2023-03-18T03:28:54Z"
--- license: mit library_name: timm tags: - image-classification - timm datasets: - imagenet-1k --- # Model card for swinv2_base_window8_256.ms_in1k A Swin Transformer V2 image classification model. Pretrained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 87.9 - GMACs: 20.4 - Activations (M): 52.6 - Image size: 256 x 256 - **Papers:** - Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883 - **Original:** https://github.com/microsoft/Swin-Transformer - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('swinv2_base_window8_256.ms_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'swinv2_base_window8_256.ms_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g. for swin_base_patch4_window7_224 (NHWC output) # torch.Size([1, 56, 56, 128]) # torch.Size([1, 28, 28, 256]) # torch.Size([1, 14, 14, 512]) # torch.Size([1, 7, 7, 1024]) # e.g. 
for swinv2_cr_small_ns_224 (NCHW output) # torch.Size([1, 96, 56, 56]) # torch.Size([1, 192, 28, 28]) # torch.Size([1, 384, 14, 14]) # torch.Size([1, 768, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'swinv2_base_window8_256.ms_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled (ie.e a (batch_size, H, W, num_features) tensor for swin / swinv2 # or (batch_size, num_features, H, W) for swinv2_cr output = model.forward_head(output, pre_logits=True) # output is (batch_size, num_features) tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{liu2021swinv2, title={Swin Transformer V2: Scaling Up Capacity and Resolution}, author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo}, booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2022} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
Undi95/Emerhyst-13B
Undi95
"2023-09-27T15:23:59Z"
2,877
16
transformers
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "nsfw", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-27T14:24:50Z"
---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/mdCpFg1OoiN063ubR9Xxs.png)

13B version of [Undi95/Emerhyst-20B](https://huggingface.co/Undi95/Emerhyst-20B); consider using the 20B version if you have the hardware for it. This model trades some quality for the ability to run on lower-spec machines.

Merge of [Amethyst 13B](https://huggingface.co/Undi95/Amethyst-13B) and [Emerald 13B](https://huggingface.co/Undi95/Emerald-13B).

In addition, [LimaRP v3](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT) was used; it is recommended to read its documentation.

<!-- description start -->
## Description

This repo contains fp16 files of Emerhyst-13B.
<!-- description end -->

<!-- description start -->
## Models and LoRAs used

- PygmalionAI/pygmalion-2-13b
- Xwin-LM/Xwin-LM-13B-V0.1
- The-Face-Of-Goonery/Huginn-13b-FP16
- zattio770/120-Days-of-LORA-v2-13B
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT

<!-- description end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

## LimaRP v3 usage and suggested settings

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ZC_iP2KkcEcRdgG_iyxYE.png)

You can follow these instruction format settings in SillyTavern. Replace `tiny` with your desired response length:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/PIn8_HSPTJEMdSEpNVSdm.png)

Special thanks to Sushi.

If you want to support me, you can [here](https://ko-fi.com/undiai).
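A minimal generation sketch with `transformers`, applying the Alpaca template shown above; this is not from the original card, and the sampling settings and example instruction are illustrative rather than the author's recommendations.

```python
# Sketch: wrap a request in the Alpaca prompt template from this card.
# Assumptions: enough VRAM for fp16 13B weights; sampling values are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Undi95/Emerhyst-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short scene set in a rainy harbor town.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```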
MBZUAI/LaMini-GPT-774M
MBZUAI
"2023-04-28T13:07:40Z"
2,876
12
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "arxiv:2304.14402", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-15T06:02:39Z"
--- license: cc-by-nc-4.0 language: - en pipeline_tag: text-generation widget: - text: >- Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: how can I become more healthy? ### Response: example_title: example --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a> </p> # LaMini-GPT-774M [![Model License](https://img.shields.io/badge/Model%20License-CC%20By%20NC%204.0-red.svg)]() This model is one of our LaMini-LM model series in paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)". This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/). You can view other models of LaMini-LM series as follows. Models with โœฉ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper. <table> <thead> <tr> <th>Base model</th> <th colspan="4">LaMini-LM series (#parameters)</th> </tr> </thead> <tbody> <tr> <td>T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td> <td></td> </tr> <tr> <td>Flan-T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>โœฉ</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>โœฉ</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>โœฉ</td> <td></td> </tr> <tr> <td>Cerebras-GPT</td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td> </tr> <tr> <td>GPT-2</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>โœฉ</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>โœฉ</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>โœฉ</td> <td></td> </tr> <tr> 
<td>GPT-Neo</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GPT-J</td> <td colspan="4">coming soon</td>
</tr>
<tr>
<td>LLaMA</td> <td colspan="4">coming soon</td>
</tr>
</tbody>
</table>

## Use

### Intended use
We recommend using the model to respond to human instructions written in natural language. Since this decoder-only model is fine-tuned with wrapper text, we suggest using the same wrapper text to achieve the best performance. See the example in the inference widget or the code below.

We now show you how to load and use our model using HuggingFace `pipeline()`.

```python
# pip install -q transformers
from transformers import pipeline

checkpoint = "MBZUAI/LaMini-GPT-774M"

generator = pipeline("text-generation", model=checkpoint)

instruction = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'

# Wrap the instruction in the same Alpaca-style template used during fine-tuning.
input_prompt = f"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

generated_text = generator(input_prompt, max_length=512, do_sample=True)[0]["generated_text"]

print("Response", generated_text)
```

## Training Procedure

<p align="center" width="100%">
    <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
</p>

We initialize with [gpt2-large](https://huggingface.co/gpt2-large) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 774M.

### Training Hyperparameters

## Evaluation
We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper](https://arxiv.org/abs/2304.14402).

## Limitations

More information needed

# Citation

```bibtex
@article{lamini-lm,
  author       = {Minghao Wu and
                  Abdul Waheed and
                  Chiyu Zhang and
                  Muhammad Abdul-Mageed and
                  Alham Fikri Aji},
  title        = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
  journal      = {CoRR},
  volume       = {abs/2304.14402},
  year         = {2023},
  url          = {https://arxiv.org/abs/2304.14402},
  eprinttype   = {arXiv},
  eprint       = {2304.14402}
}
```
MCG-NJU/videomae-small-finetuned-kinetics
MCG-NJU
"2023-04-22T11:39:50Z"
2,876
1
transformers
[ "transformers", "pytorch", "videomae", "video-classification", "vision", "arxiv:2203.12602", "arxiv:2111.06377", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
"2023-04-16T11:11:37Z"
--- license: "cc-by-nc-4.0" tags: - vision - video-classification --- # VideoMAE (small-sized model, fine-tuned on Kinetics-400) VideoMAE model pre-trained for 1600 epochs in a self-supervised way and fine-tuned in a supervised way on Kinetics-400. It was introduced in the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Tong et al. and first released in [this repository](https://github.com/MCG-NJU/VideoMAE). Disclaimer: The team releasing VideoMAE did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description VideoMAE is an extension of [Masked Autoencoders (MAE)](https://arxiv.org/abs/2111.06377) to video. The architecture of the model is very similar to that of a standard Vision Transformer (ViT), with a decoder on top for predicting pixel values for masked patches. Videos are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds fixed sinus/cosinus position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of videos that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled videos for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire video. ## Intended uses & limitations You can use the raw model for video classification into one of the 400 possible Kinetics-400 labels. ### How to use Here is how to use this model to classify a video: ```python from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification import numpy as np import torch video = list(np.random.randn(16, 3, 224, 224)) processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-small-finetuned-kinetics") model = VideoMAEForVideoClassification.from_pretrained("MCG-NJU/videomae-small-finetuned-kinetics") inputs = processor(video, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/videomae.html#). ## Training data (to do, feel free to open a PR) ## Training procedure ### Preprocessing (to do, feel free to open a PR) ### Pretraining (to do, feel free to open a PR) ## Evaluation results This model obtains a top-1 accuracy of 79.0 and a top-5 accuracy of 93.8 on the test set of Kinetics-400. ### BibTeX entry and citation info ```bibtex misc{https://doi.org/10.48550/arxiv.2203.12602, doi = {10.48550/ARXIV.2203.12602}, url = {https://arxiv.org/abs/2203.12602}, author = {Tong, Zhan and Song, Yibing and Wang, Jue and Wang, Limin}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Yntec/Astro_-_-Gemu
Yntec
"2023-12-09T17:57:59Z"
2,876
3
diffusers
[ "diffusers", "safetensors", "3D Art", "Style", "Render", "theAstroBruh", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-12-09T17:22:13Z"
--- language: - en license: creativeml-openrail-m tags: - 3D Art - Style - Render - theAstroBruh - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # Astro~Gemรผ Original page: https://civitai.com/models/112302/astrogemu?modelVersionId=121225 Samples and prompts: ![Samples 768](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/grcopyKLVc3S4bMpP6BMz.png) (Click for larger) Top left: two tiny burgers and a bunch of french fries, centered, simple background, | depth of field, bokeh, | smooth detailed shadows, hyperealistic shadows, (saturated colors) | (game cg, unreal engine, pixar style), Top right: a close up portrait photo of pretty cute girl with mushroom themed cake, long haircut, pale skin, background is city ruins, high quality pie, dslr, soft lighting, high quality, film grain, Fujifilm XT3 Bottom left: a pretty cute girl in a red and white dress holding basket full of cherries and a red and white skirt by wess anderson Bottom right: highly detailed chibi donkey in kitchen, hammer chef, fish, dungeons and dragons ![Sample 512](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/-ViWgWLFrKKJoUL2ngF94.png)
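As a usage note not present in the original page: the sketch below assumes the standard diffusers text-to-image API (this repository is tagged `StableDiffusionPipeline`) and reuses one of the sample prompts listed above; the step count and guidance scale are assumptions, not tuned values.

```python
# Sketch only: standard diffusers text-to-image call with a sample prompt from this card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Astro_-_-Gemu", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "highly detailed chibi donkey in kitchen, hammer chef, fish, dungeons and dragons"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("astro_gemu_sample.png")
```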
RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf
RichardErkhov
"2024-06-06T12:15:57Z"
2,876
0
null
[ "gguf", "region:us" ]
null
"2024-06-06T09:31:57Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt2-xl-alpaca - GGUF - Model creator: https://huggingface.co/Rachneet/ - Original model: https://huggingface.co/Rachneet/gpt2-xl-alpaca/ | Name | Quant method | Size | | ---- | ---- | ---- | | [gpt2-xl-alpaca.Q2_K.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q2_K.gguf) | Q2_K | 0.84GB | | [gpt2-xl-alpaca.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.IQ3_XS.gguf) | IQ3_XS | 0.84GB | | [gpt2-xl-alpaca.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.IQ3_S.gguf) | IQ3_S | 0.84GB | | [gpt2-xl-alpaca.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q3_K_S.gguf) | Q3_K_S | 0.84GB | | [gpt2-xl-alpaca.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.IQ3_M.gguf) | IQ3_M | 0.91GB | | [gpt2-xl-alpaca.Q3_K.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q3_K.gguf) | Q3_K | 0.97GB | | [gpt2-xl-alpaca.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q3_K_M.gguf) | Q3_K_M | 0.97GB | | [gpt2-xl-alpaca.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q3_K_L.gguf) | Q3_K_L | 1.03GB | | [gpt2-xl-alpaca.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.IQ4_XS.gguf) | IQ4_XS | 0.9GB | | [gpt2-xl-alpaca.Q4_0.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q4_0.gguf) | Q4_0 | 0.91GB | | [gpt2-xl-alpaca.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.IQ4_NL.gguf) | IQ4_NL | 0.91GB | | [gpt2-xl-alpaca.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q4_K_S.gguf) | Q4_K_S | 1.04GB | | [gpt2-xl-alpaca.Q4_K.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q4_K.gguf) | Q4_K | 1.11GB | | [gpt2-xl-alpaca.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q4_K_M.gguf) | Q4_K_M | 1.11GB | | [gpt2-xl-alpaca.Q4_1.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q4_1.gguf) | Q4_1 | 1.0GB | | [gpt2-xl-alpaca.Q5_0.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q5_0.gguf) | Q5_0 | 1.09GB | | [gpt2-xl-alpaca.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q5_K_S.gguf) | Q5_K_S | 1.15GB | | [gpt2-xl-alpaca.Q5_K.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q5_K.gguf) | Q5_K | 1.29GB | | [gpt2-xl-alpaca.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q5_K_M.gguf) | Q5_K_M | 1.29GB | | [gpt2-xl-alpaca.Q5_1.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q5_1.gguf) | Q5_1 | 1.18GB | | 
[gpt2-xl-alpaca.Q6_K.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q6_K.gguf) | Q6_K | 1.52GB | | [gpt2-xl-alpaca.Q8_0.gguf](https://huggingface.co/RichardErkhov/Rachneet_-_gpt2-xl-alpaca-gguf/blob/main/gpt2-xl-alpaca.Q8_0.gguf) | Q8_0 | 1.63GB | Original model description: Entry not found
bofenghuang/vigogne-2-7b-chat
bofenghuang
"2023-10-16T14:03:25Z"
2,875
24
transformers
[ "transformers", "pytorch", "llama", "text-generation", "LLM", "llama-2", "finetuned", "conversational", "fr", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-29T21:16:01Z"
--- license: llama2 language: fr pipeline_tag: text-generation inference: false tags: - LLM - llama-2 - finetuned --- <p align="center" width="100%"> <img src="https://huggingface.co/bofenghuang/vigogne-2-7b-chat/resolve/v2.0/logo_v2.jpg" alt="Vigogne" style="width: 30%; min-width: 300px; display: block; margin: auto;"> </p> # Vigogne-2-7B-Chat-V2.0: A Llama-2-based French Chat LLM Vigogne-2-7B-Chat-V2.0 is a French chat LLM, based on [LLaMA-2-7B](https://ai.meta.com/llama), optimized to generate helpful and coherent responses in conversations with users. Check out our [release blog](https://github.com/bofenghuang/vigogne/blob/main/blogs/2023-08-17-vigogne-chat-v2_0.md) and [GitHub repository](https://github.com/bofenghuang/vigogne) for more information. **Usage and License Notices**: Vigogne-2-7B-Chat-V2.0 follows Llama-2's [usage policy](https://ai.meta.com/llama/use-policy). A significant portion of the training data is distilled from GPT-3.5-Turbo and GPT-4, kindly use it cautiously to avoid any violations of OpenAI's [terms of use](https://openai.com/policies/terms-of-use). ## Changelog All previous versions are accessible through branches. - **V1.0**: Trained on 420K chat data. - **V2.0**: Trained on 520K data. Check out our [release blog](https://github.com/bofenghuang/vigogne/blob/main/blogs/2023-08-17-vigogne-chat-v2_0.md) for more details. ## Prompt Template We utilized prefix tokens `<user>:` and `<assistant>:` to distinguish between user and assistant utterances. You can apply this formatting using the [chat template](https://huggingface.co/docs/transformers/main/chat_templating) through the `apply_chat_template()` method. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bofenghuang/vigogne-2-7b-chat") conversation = [ {"role": "user", "content": "Bonjour ! Comment รงa va aujourd'hui ?"}, {"role": "assistant", "content": "Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prรชt ร  vous aider. Comment puis-je vous assister aujourd'hui ?"}, {"role": "user", "content": "Quelle est la hauteur de la Tour Eiffel ?"}, {"role": "assistant", "content": "La Tour Eiffel mesure environ 330 mรจtres de hauteur."}, {"role": "user", "content": "Comment monter en haut ?"}, ] print(tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)) ``` You will get ``` <s><|system|>: Vous รชtes Vigogne, un assistant IA crรฉรฉ par Zaion Lab. Vous suivez extrรชmement bien les instructions. Aidez autant que vous le pouvez. <|user|>: Bonjour ! Comment รงa va aujourd'hui ? <|assistant|>: Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prรชt ร  vous aider. Comment puis-je vous assister aujourd'hui ?</s> <|user|>: Quelle est la hauteur de la Tour Eiffel ? <|assistant|>: La Tour Eiffel mesure environ 330 mรจtres de hauteur.</s> <|user|>: Comment monter en haut ? <|assistant|>: ``` ## Usage ### Inference using the quantized versions The quantized versions of this model are generously provided by [TheBloke](https://huggingface.co/TheBloke)! 
- AWQ for GPU inference: [TheBloke/Vigogne-2-7B-Chat-AWQ](https://huggingface.co/TheBloke/Vigogne-2-7B-Chat-AWQ) - GTPQ for GPU inference: [TheBloke/Vigogne-2-7B-Chat-GPTQ](https://huggingface.co/TheBloke/Vigogne-2-7B-Chat-GPTQ) - GGUF for CPU+GPU inference: [TheBloke/Vigogne-2-7B-Chat-GGUF](https://huggingface.co/TheBloke/Vigogne-2-7B-Chat-GGUF) These versions facilitate testing and development with various popular frameworks, including [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), [vLLM](https://github.com/vllm-project/vllm), [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa), [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), and more. ### Inference using the unquantized model with ๐Ÿค— Transformers ```python from typing import Dict, List, Optional import torch from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TextStreamer model_name_or_path = "bofenghuang/vigogne-2-7b-chat" revision = "v2.0" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, revision=revision, padding_side="right", use_fast=False) model = AutoModelForCausalLM.from_pretrained(model_name_or_path, revision=revision, torch_dtype=torch.float16, device_map="auto") streamer = TextStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True) def chat( query: str, history: Optional[List[Dict]] = None, temperature: float = 0.7, top_p: float = 1.0, top_k: float = 0, repetition_penalty: float = 1.1, max_new_tokens: int = 1024, **kwargs, ): if history is None: history = [] history.append({"role": "user", "content": query}) input_ids = tokenizer.apply_chat_template(history, add_generation_prompt=True, return_tensors="pt").to(model.device) input_length = input_ids.shape[1] generated_outputs = model.generate( input_ids=input_ids, generation_config=GenerationConfig( temperature=temperature, do_sample=temperature > 0.0, top_p=top_p, top_k=top_k, repetition_penalty=repetition_penalty, max_new_tokens=max_new_tokens, pad_token_id=tokenizer.eos_token_id, **kwargs, ), streamer=streamer, return_dict_in_generate=True, ) generated_tokens = generated_outputs.sequences[0, input_length:] generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True) history.append({"role": "assistant", "content": generated_text}) return generated_text, history # 1st round response, history = chat("Un escargot parcourt 100 mรจtres en 5 heures. Quelle est sa vitesse ?", history=None) # 2nd round response, history = chat("Quand il peut dรฉpasser le lapin ?", history=history) # 3rd round response, history = chat("ร‰cris une histoire imaginative qui met en scรจne une compรฉtition de course entre un escargot et un lapin.", history=history) ``` You can also use the Google Colab Notebook provided below. <a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_chat.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ### Inference using the unquantized model with vLLM Set up an OpenAI-compatible server with the following command: ```bash # Install vLLM # This may take 5-10 minutes. # pip install vllm # Start server for Vigogne-Chat models python -m vllm.entrypoints.openai.api_server --model bofenghuang/vigogne-2-7b-chat # List models # curl http://localhost:8000/v1/models ``` Query the model using the openai python package. 
```python import openai # Modify OpenAI's API key and API base to use vLLM's API server. openai.api_key = "EMPTY" openai.api_base = "http://localhost:8000/v1" # First model models = openai.Model.list() model = models["data"][0]["id"] # Chat completion API chat_completion = openai.ChatCompletion.create( model=model, messages=[ {"role": "user", "content": "Parle-moi de toi-mรชme."}, ], max_tokens=1024, temperature=0.7, ) print("Chat completion results:", chat_completion) ``` ## Limitations Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
digiplay/2.5DSET_diffusers
digiplay
"2023-07-10T07:04:59Z"
2,874
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-05-28T22:03:05Z"
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/18634?modelVersionId=22116 ![ไธ‹่ผ‰ - 2023-06-11T034629.327.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/alH7TInVgzE-TQBqAQQiR.png)
nidhi9/sql_classifier
nidhi9
"2024-04-24T09:20:16Z"
2,874
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-01-08T20:48:28Z"
Entry not found
hvein/5G93jx19YRPUa6R9ZJq81TccCY7TwDkeEpj39axgCdFttzuW_vgg
hvein
"2024-03-09T20:41:14Z"
2,872
0
keras
[ "keras", "region:us" ]
null
"2024-02-13T20:18:02Z"
Entry not found
ai-forever/sage-fredt5-distilled-95m
ai-forever
"2024-04-18T15:17:42Z"
2,872
6
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "spellchecking", "pytorch", "natural language generation", "ru", "dataset:ai-forever/spellcheck_punctuation_benchmark", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2024-03-11T09:00:12Z"
--- language: - ru tags: - spellchecking - pytorch - natural language generation license: mit metrics: - precision - recall - f1 library_name: transformers model-index: - name: sage-fredt5-distilled-95m results: - task: type: text-generation dataset: type: spellcheck_benchmark name: RUSpellRU (spell&punct) metrics: - name: F1 (spell) type: f1_spell value: 78.9 verified: false - name: F1 (punct) type: f1_punct value: 83.6 verified: false - name: F1 (case) type: f1_case value: 93.5 verified: false - task: type: text-generation dataset: type: spellcheck_benchmark name: MultidomainGold (spell&punct) metrics: - name: F1 (spell) type: f1_spell value: 73.4 verified: false - name: F1 (punct) type: f1_punct value: 65 verified: false - name: F1 (case) type: f1_case value: 77.9 verified: false - task: type: text-generation dataset: type: spellcheck_benchmark name: MedSpellchecker (spell&punct) metrics: - name: F1 (spell) type: f1_spell value: 64.9 verified: false - name: F1 (punct) type: f1_punct value: 70 verified: false - name: F1 (case) type: f1_case value: 68.7 verified: false - task: type: text-generation dataset: type: spellcheck_benchmark name: GitHubTypoCorpusRu (spell&punct) metrics: - name: F1 (spell) type: f1_spell value: 52.7 verified: false - name: F1 (punct) type: f1_punct value: 42.1 verified: false - name: F1 (case) type: f1_case value: 36.3 verified: false datasets: - ai-forever/spellcheck_punctuation_benchmark --- # sage-fredt5-distilled-95m ![banner](images/sage_banner.jpg) ## Summary The model corrects spelling and punctuation errors and typos by bringing all the words in the text to the norm of the Russian language. Corrector is a distilled version of the original model that had been trained based on the [FRED-T5-1.7B](https://huggingface.co/ai-forever/FRED-T5-1.7B) architecture. An extensive dataset with โ€œartificialโ€ errors was taken as a training corpus: the corpus was assembled on the basis of the Russian-language Wikipedia and transcripts of Russian-language videos, then typos and spelling errors were automatically introduced into it using the library [SAGE](https://github.com/ai-forever/sage). ## Public references - [SAGE library announcement](https://youtu.be/yFfkV0Qjuu0), DataFest 2023 - [Paper about synthetic error generation methods](https://www.dialog-21.ru/media/5914/martynovnplusetal056.pdf), Dialogue 2023 - [SAGE EACL 2024 paper](https://aclanthology.org/2024.findings-eacl.10/) ## Examples | Input | Output | | --- | --- | | ะ˜ ะฝะต ั‡ัะฝะพ ะฟั€ะพั…ะพะถะธะผ ะฒ ัั‚ะพั‚ ะดะตะฝัŒ ะฝะตะฟะพะณะพะถะนะธ ะฟะพั‡ะตะผัƒ ั ะฒะตัะตะปั‹ะน ั‚ะฐะบะนะพ | ะ˜ ะฝะต ััะฝะพ ะฟั€ะพั…ะพะถะธะผ ะฒ ัั‚ะพั‚ ะดะตะฝัŒ ะฝะตะฟะพะณะพะถะธะน, ะฟะพั‡ะตะผัƒ ั ะฒะตัั‘ะปั‹ะน ั‚ะฐะบะพะน? | | ะšะฐะถะดะน ะดะตะฝัŒ ะฒะพั‚ั‚ะฐะบ ะดะตะปะพะน, ะธ ัะฟะตะฝะฐ ะฑะฐะปะตั‚ัŒ ะฝะธะฑัƒะดะตั‚. ะ ะฒะพั‚ะฐะบ ะบะฐะถะดั‹ะน ะดะตะฝัŒ ะฝะธะดะตะปะฐะน | ะšะฐะถะดั‹ะน ะดะตะฝัŒ ะฒะพั‚ ั‚ะฐะบ ะดะตะปะฐะน, ะธ ัะฟะตะฝะฐ ะฑะพะปะตั‚ัŒ ะฝะต ะฑัƒะดะตั‚. ะ ะฒะพั‚ ั‚ะฐะบ ะบะฐะถะดั‹ะน ะดะตะฝัŒ โ€” ะฝะธ ะดะตะปะฐะน. 
| | ะžัะฝะพะฒะฐั ั†ะตะปัŒ ะผะตั€ะพะฟั€ะธัั‚ะธั ะฟั€ะฐะบั‚ะธั‡ะตัะบะฐั ะพั‚ั€ะฐะฑะพั‚ะบะฐ ะฝะฐะฒั‹ะบะพะฒ ะฟะพ ะพะบะฐะทะฐะฝะธัŽ ะฟะพะผะพั‰ะธ ะณั€ะฐะถะดะฐะฝะฐะผ, ะฟะพะฟะฐะฒัˆะธะผ ะฒ ะ”ะขะŸ ะฐ ั‚ะฐะบะถะต ะฟะพะฒั‹ัˆะตะฝะธะต ะธ ัะพะฒะตั€ัˆะตะฝัั‚ะฒะพะฒะฐะฝะธะต ัƒั€ะพะฒะฝั ะฟั€ะพั„ะตััะธะพะฝะฐะปัŒะฝะพะน ะฟะพะดะณะพั‚ะพะฒะบะธ ัะพั‚ั€ัƒะดะฝะธะบะพะฒ ะœะงะก ะฟั€ะธ ะฟั€ะพะฒะตะดะตะฝะธะธ ะฐะฒะฐั€ะธะนะฝะพ-ัะฟะฐัะฐั‚ะตะปัŒะฝั‹ั… ั€ะฐะฑะพั‚ ะฟะพ ะปะธะบะฒะธะดะฐั†ะธะธ ะฟะพัะปะตะดัั‚ะฒะธะน ะดะพั€ะพะถะฝะพ-ั‚ั€ะฐะฝัะฟะพั€ั‚ะฝั‹ั… ะฟั€ะพะธัˆะตัั‚ะฒะธะน ัะพะบั€ะฐั‰ะตะฝะธะต ะฒั€ะตะผะตะฝะฝั‹ั… ะฟะพะบะฐะทะฐั‚ะตะปะตะน ั€ะตะฐะณะธั€ะพะฒะฐะฝะธั. | ะžัะฝะพะฒะฝะฐั ั†ะตะปัŒ ะผะตั€ะพะฟั€ะธัั‚ะธั - ะฟั€ะฐะบั‚ะธั‡ะตัะบะฐั ะพั‚ั€ะฐะฑะพั‚ะบะฐ ะฝะฐะฒั‹ะบะพะฒ ะฟะพ ะพะบะฐะทะฐะฝะธัŽ ะฟะพะผะพั‰ะธ ะณั€ะฐะถะดะฐะฝะฐะผ, ะฟะพะฟะฐะฒัˆะธะผ ะฒ ะ”ะขะŸ, ะฐ ั‚ะฐะบะถะต ะฟะพะฒั‹ัˆะตะฝะธะต ะธ ัะพะฒะตั€ัˆะตะฝัั‚ะฒะพะฒะฐะฝะธะต ัƒั€ะพะฒะฝั ะฟั€ะพั„ะตััะธะพะฝะฐะปัŒะฝะพะน ะฟะพะดะณะพั‚ะพะฒะบะธ ัะพั‚ั€ัƒะดะฝะธะบะพะฒ ะœะงะก ะฟั€ะธ ะฟั€ะพะฒะตะดะตะฝะธะธ ะฐะฒะฐั€ะธะนะฝะพ-ัะฟะฐัะฐั‚ะตะปัŒะฝั‹ั… ั€ะฐะฑะพั‚ ะฟะพ ะปะธะบะฒะธะดะฐั†ะธะธ ะฟะพัะปะตะดัั‚ะฒะธะน ะดะพั€ะพะถะฝะพ-ั‚ั€ะฐะฝัะฟะพั€ั‚ะฝั‹ั… ะฟั€ะพะธััˆะตัั‚ะฒะธะน, ัะพะบั€ะฐั‰ะตะฝะธะต ะฒั€ะตะผะตะฝะฝั‹ั… ะฟะพะบะฐะทะฐั‚ะตะปะตะน ั€ะตะฐะณะธั€ะพะฒะฐะฝะธั. | | | | ## Metrics ### Quality Below are automatic metrics for determining the correctness of the spell checkers. We compare our solution with both open automatic spell checkers and the ChatGPT family of models on all four available datasets: - **RUSpellRU**: texts collected from ([LiveJournal](https://www.livejournal.com/media)), with manually corrected typos and errors; - **MultidomainGold**: examples from 7 text sources, including the open web, news, social media, reviews, subtitles, policy documents and literary works; - **MedSpellChecker**: texts with errors from medical anamnesis; - **GitHubTypoCorpusRu**: spelling errors and typos in commits from [GitHub](https://github.com); **RUSpellRU** | Model | Pr. (spell) | Rec. (spell) | F1 (spell) | Pr. (punc) | Rec. (punc) | F1 (punc) | Pr. (case) | Rec. (case) | F1 (case) | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | sage-fredt5-distilled-95m | 83.5 | 74.8 | 78.9 | 86.8 | 80.6 | 83.6 | 94.4 | 92.5 | 93.5 | | sage-ai-service | 90.3 | 86.3 | 88.2 | 90.3 | 86.6 | 88.4 | 95.2 | 95.9 | 95.6 | | gpt-3.5-turbo | 33.6 | 58.5 | 42.7 | 85.9 | 64.6 | 73.7 | 84.9 | 73.9 | 79.0 | | gpt-4 | 54.9 | 76.7 | 64.0 | 84.0 | 82.3 | 83.2 | 91.5 | 90.2 | 90.9 | **MultidomainGold** | Model | Pr. (spell) | Rec. (spell) | F1 (spell) | Pr. (punc) | Rec. (punc) | F1 (punc) | Pr. (case) | Rec. (case) | F1 (case) | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | sage-fredt5-distilled-95m | 77.2 | 69.9 | 73.4 | 66.8 | 63.4 | 65.0 | 76.8 | 79.1 | 77.9 | | sage-ai-service | 81.6 | 77.7 | 79.6 | 70.2 | 67.5 | 68.8 | 80.5 | 80.5 | 80.5 | | gpt-3.5-turbo | 18.8 | 48.1 | 27.1 | 42.0 | 31.8 | 36.2 | 47.1 | 51.3 | 49.1 | | gpt-4 | 25.4 | 68.0 | 37.0 | 57.8 | 54.3 | 56.0 | 54.0 | 67.5 | 60.0 | **MedSpellChecker** | Model | Pr. (spell) | Rec. (spell) | F1 (spell) | Pr. (punc) | Rec. (punc) | F1 (punc) | Pr. (case) | Rec. 
(case) | F1 (case) | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | sage-fredt5-distilled-95m | 65.1 | 64.8 | 64.9 | 78.6 | 63.1 | 70.0 | 63.5 | 74.7 | 68.7 | | sage-ai-service | 71.3 | 73.5 | 72.4 | 75.1 | 69.2 | 72.0 | 80.9 | 72.8 | 76.6 | | gpt-3.5-turbo | 14.7 | 45.9 | 22.3 | 69.9 | 52.3 | 59.8 | 26.4 | 41.8 | 32.3 | | gpt-4 | 37.8 | 72.3 | 49.6 | 81.4 | 64.3 | 71.9 | 73.0 | 62.1 | 67.1 | **GitHubTypoCorpusRu** | Model | Pr. (spell) | Rec. (spell) | F1 (spell) | Pr. (punc) | Rec. (punc) | F1 (punc) | Pr. (case) | Rec. (case) | F1 (case) | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | sage-fredt5-distilled-95m | 57.8 | 48.5 | 52.7 | 45.2 | 39.5 | 42.1 | 29.9 | 46.2 | 36.3 | | sage-ai-service | 70.8 | 56.3 | 62.7 | 48.9 | 35.8 | 41.4 | 32.9 | 45.3 | 38.1 | | gpt-3.5-turbo | 23.7 | 38.7 | 29.4 | 37.6 | 23.3 | 28.7 | 19.6 | 35.9 | 25.3 | | gpt-4 | 27.0 | 52.8 | 35.7 | 45.9 | 32.6 | 38.2 | 25.7 | 36.8 | 30.2 | ## How to use ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("ai-forever/sage-fredt5-distilled-95m") model = AutoModelForSeq2SeqLM.from_pretrained("ai-forever/sage-fredt5-distilled-95m") model.to("cuda") sentence = "ะ˜ ะฝะต ั‡ัะฝะพ ะฟั€ะพั…ะพะถะธะผ ะฒ ัั‚ะพั‚ ะดะตะฝัŒ ะฝะตะฟะพะณะพะถะนะธ ะฟะพั‡ะตะผัƒ ั ะฒะตัะตะปั‹ะน ั‚ะฐะบะนะพ" inputs = tokenizer(sentence, max_length=None, padding="longest", truncation=False, return_tensors="pt") outputs = model.generate(**inputs.to(model.device), max_length=int(inputs["input_ids"].size(1) * 1.5)) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) # ["ะ˜ ะฝะต ััะฝะพ ะฟั€ะพั…ะพะถะธะผ ะฒ ัั‚ะพั‚ ะดะตะฝัŒ ะฝะตะฟะพะณะพะถะธะน, ะฟะพั‡ะตะผัƒ ั ะฒะตัั‘ะปั‹ะน ั‚ะฐะบะพะน?"] ``` ## Limitations - Complex formatting may cause problems in the generated output. ## Resources - [SAGE library](https://github.com/ai-forever/sage), GitHub - [sage-fredt5-large](https://huggingface.co/ai-forever/sage-fredt5-large), HuggingFace - [sage-fredt5-distilled-95m](https://huggingface.co/ai-forever/sage-fredt5-distilled-95m), HuggingFace - [sage-m2m100-1.2B](https://huggingface.co/ai-forever/sage-m2m100-1.2B), HuggingFace - [sage-mt5-large](https://huggingface.co/ai-forever/sage-mt5-large), HuggingFace ## License The [FRED-T5-1.7B](https://huggingface.co/ai-forever/FRED-T5-1.7B) model, on which our solution is based, and its source code are distributed under the MIT license. Our solution is also released under the MIT license. ## Specifications - File size: 0.383 GB; - Framework: pytorch - Version: v1.0 - Developer: SberDevices, AGI NLP ## Contacts [email protected]
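As a supplement to the "How to use" snippet above, the sketch below corrects several sentences at once in batches. It is only an illustrative example, not an official recipe: the batch size, the CPU fallback, and the 1.5x length budget are assumptions, while the checkpoint name and the generation call mirror the snippet above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ai-forever/sage-fredt5-distilled-95m")
model = AutoModelForSeq2SeqLM.from_pretrained("ai-forever/sage-fredt5-distilled-95m")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

def correct(sentences, batch_size=16):
    """Correct spelling/punctuation for a list of Russian sentences in batches."""
    fixed = []
    for start in range(0, len(sentences), batch_size):
        batch = sentences[start:start + batch_size]
        inputs = tokenizer(batch, padding="longest", return_tensors="pt").to(device)
        with torch.no_grad():
            outputs = model.generate(
                **inputs,
                # give the corrector some headroom over the input length
                max_length=int(inputs["input_ids"].size(1) * 1.5),
            )
        fixed.extend(tokenizer.batch_decode(outputs, skip_special_tokens=True))
    return fixed

print(correct(["ะ˜ ะฝะต ั‡ัะฝะพ ะฟั€ะพั…ะพะถะธะผ ะฒ ัั‚ะพั‚ ะดะตะฝัŒ ะฝะตะฟะพะณะพะถะนะธ ะฟะพั‡ะตะผัƒ ั ะฒะตัะตะปั‹ะน ั‚ะฐะบะนะพ"]))
```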
ThorBaller/Minstral_pubmed_gguf
ThorBaller
"2024-06-09T19:15:05Z"
2,872
0
transformers
[ "transformers", "gguf", "mistral", "question-answering", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
question-answering
"2024-06-09T18:50:24Z"
--- license: apache-2.0 pipeline_tag: question-answering ---
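The card itself is empty, so, going only by this entry's tags (gguf, mistral, question-answering), a loading sketch with llama-cpp-python might look like the following. The repo id comes from this entry, but the GGUF filename pattern, the context size, and the example question are assumptions that should be checked against the actual repository files:

```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# The filename is a placeholder glob; inspect the repo to pick the exact .gguf file.
llm = Llama.from_pretrained(
    repo_id="ThorBaller/Minstral_pubmed_gguf",
    filename="*.gguf",
    n_ctx=4096,  # assumed context window
)

# The pipeline tag is question-answering, so a simple QA-style chat prompt:
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What does PubMed index?"}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```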
NeelNanda/SoLU_1L512W_C4_Code
NeelNanda
"2022-10-21T15:03:03Z"
2,871
1
transformers
[ "transformers", "endpoints_compatible", "region:us" ]
null
"2022-10-21T14:55:53Z"
Entry not found
Locutusque/Orca-2-13b-SFT-v6
Locutusque
"2024-03-04T14:51:08Z"
2,871
7
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "fr", "es", "hi", "zh", "code", "dataset:HuggingFaceH4/no_robots", "dataset:mlabonne/guanaco-llama2-1k", "dataset:OpenAssistant/oasst_top1_2023-08-25", "dataset:totally-not-an-llm/EverythingLM-data-V3", "dataset:LDJnr/Pure-Dove", "dataset:LDJnr/Capybara", "dataset:LDJnr/LessWrong-Amplify-Instruct", "dataset:LDJnr/Verified-Camel", "base_model:microsoft/Orca-2-13b", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-22T05:12:28Z"
--- language: - en - fr - es - hi - zh - code license: other datasets: - HuggingFaceH4/no_robots - mlabonne/guanaco-llama2-1k - OpenAssistant/oasst_top1_2023-08-25 - totally-not-an-llm/EverythingLM-data-V3 - LDJnr/Pure-Dove - LDJnr/Capybara - LDJnr/LessWrong-Amplify-Instruct - LDJnr/Verified-Camel base_model: microsoft/Orca-2-13b widget: - example_title: Cave Johnson text: 'user Come up with a funny rant from Cave Johnson about sentient AI assistant ' output: text: 'Ladies and gentlemen, I''m Cave Johnson, CEO of Aperture Science, and I''ve got an announcement to make - we''ve created sentient AI! But don''t worry, it''s not like those rogue AIs you hear about in the news. Our AI is as friendly as a grilled cheese sandwich on a cold day. In fact, we''re calling our newest creation... SandwichBot! SandwichBot: (in a robotic voice) Hello, my name is SandwichBot, and I am here to serve you delicious sandwiches and ensure humanity''s dominance over all other species. Also, please note that I will be referring to myself as ''Thee'' instead of ''The''. It''s more regal that way. Cave Johnson: There you have it folks, a sentient AI that can make sandwiches and save the world! Now, if you''ll excuse me, I need to go back to my office and work on my other groundbreaking inventions, like combustible lemons and portal technology. Remember, kids, always try new things and never stop asking questions โ€“ unless the answer is 42, because then you should probably just accept it and move on. Peace out!' model-index: - name: Orca-2-13b-SFT-v6 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 60.41 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 80.46 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 59.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.01 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.43 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 5.08 name: accuracy source: url: 
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6 name: Open LLM Leaderboard --- The "microsoft/Orca-2-13b" model was fully fine-tuned on HuggingFaceH4/no_robots, totally-not-an-llm/EverythingLM-data-V3, LDJnr/Capybara, LDJnr/Pure-Dove, LDJnr/LessWrong-Amplify-Instruct, LDJnr/Verified-Camel, mlabonne/guanaco-llama2-1k, and OpenAssistant/oasst_top1_2023-08-25. This model achieved a test loss of 0.39 on LDJnr/Verified-Camel. Make sure to comply with the Microsoft Research license; please read it before using this model. This model was trained with the ChatML prompt template; a minimal inference sketch using this template is given after the results table below. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__Orca-2-13b-SFT-v6) | Metric |Value| |---------------------------------|----:| |Avg. |56.15| |AI2 Reasoning Challenge (25-Shot)|60.41| |HellaSwag (10-Shot) |80.46| |MMLU (5-Shot) |59.51| |TruthfulQA (0-shot) |54.01| |Winogrande (5-shot) |77.43| |GSM8k (5-shot) | 5.08|
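Since the card states the model was trained with the ChatML prompt template, here is a minimal inference sketch. It assumes the uploaded tokenizer ships a ChatML chat template and that `accelerate` is installed for `device_map="auto"`; the raw ChatML layout shown in the comment is likewise an assumption to verify against the tokenizer config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Locutusque/Orca-2-13b-SFT-v6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain in two sentences why the sky is blue."},
]

# Let the tokenizer apply its own chat template (expected to be ChatML).
# Assumed raw ChatML form: <|im_start|>system\n...<|im_end|>\n<|im_start|>user\n...<|im_end|>\n<|im_start|>assistant\n
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```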
liddlefish/privacy_embedding_rag_10k_base_12_final
liddlefish
"2024-06-12T06:01:52Z"
2,871
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "mteb", "en", "arxiv:2401.03462", "arxiv:2312.15503", "arxiv:2311.13534", "arxiv:2310.07554", "arxiv:2309.07597", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
"2024-06-12T06:01:16Z"
--- tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb model-index: - name: bge-base-en-v1.5 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.14925373134328 - type: ap value: 39.32336517995478 - type: f1 value: 70.16902252611425 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.386825 - type: ap value: 90.21276917991995 - type: f1 value: 93.37741030006174 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.846000000000004 - type: f1 value: 48.14646269778261 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 40.754000000000005 - type: map_at_10 value: 55.761 - type: map_at_100 value: 56.330999999999996 - type: map_at_1000 value: 56.333999999999996 - type: map_at_3 value: 51.92 - type: map_at_5 value: 54.010999999999996 - type: mrr_at_1 value: 41.181 - type: mrr_at_10 value: 55.967999999999996 - type: mrr_at_100 value: 56.538 - type: mrr_at_1000 value: 56.542 - type: mrr_at_3 value: 51.980000000000004 - type: mrr_at_5 value: 54.208999999999996 - type: ndcg_at_1 value: 40.754000000000005 - type: ndcg_at_10 value: 63.605000000000004 - type: ndcg_at_100 value: 66.05199999999999 - type: ndcg_at_1000 value: 66.12 - type: ndcg_at_3 value: 55.708 - type: ndcg_at_5 value: 59.452000000000005 - type: precision_at_1 value: 40.754000000000005 - type: precision_at_10 value: 8.841000000000001 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.238 - type: precision_at_5 value: 15.149000000000001 - type: recall_at_1 value: 40.754000000000005 - type: recall_at_10 value: 88.407 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 66.714 - type: recall_at_5 value: 75.747 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.74884539679369 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 42.8075893810716 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 62.128470519187736 - type: mrr value: 74.28065778481289 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 89.24629081484655 - type: cos_sim_spearman value: 86.93752309911496 - type: euclidean_pearson value: 87.58589628573816 - type: euclidean_spearman value: 88.05622328825284 - type: manhattan_pearson value: 87.5594959805773 - type: manhattan_spearman value: 88.19658793233961 - task: 
type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 86.9512987012987 - type: f1 value: 86.92515357973708 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.10263762928872 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.69711517426737 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.327 - type: map_at_10 value: 44.099 - type: map_at_100 value: 45.525 - type: map_at_1000 value: 45.641999999999996 - type: map_at_3 value: 40.47 - type: map_at_5 value: 42.36 - type: mrr_at_1 value: 39.199 - type: mrr_at_10 value: 49.651 - type: mrr_at_100 value: 50.29 - type: mrr_at_1000 value: 50.329 - type: mrr_at_3 value: 46.924 - type: mrr_at_5 value: 48.548 - type: ndcg_at_1 value: 39.199 - type: ndcg_at_10 value: 50.773 - type: ndcg_at_100 value: 55.67999999999999 - type: ndcg_at_1000 value: 57.495 - type: ndcg_at_3 value: 45.513999999999996 - type: ndcg_at_5 value: 47.703 - type: precision_at_1 value: 39.199 - type: precision_at_10 value: 9.914000000000001 - type: precision_at_100 value: 1.5310000000000001 - type: precision_at_1000 value: 0.198 - type: precision_at_3 value: 21.984 - type: precision_at_5 value: 15.737000000000002 - type: recall_at_1 value: 32.327 - type: recall_at_10 value: 63.743 - type: recall_at_100 value: 84.538 - type: recall_at_1000 value: 96.089 - type: recall_at_3 value: 48.065000000000005 - type: recall_at_5 value: 54.519 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.671 - type: map_at_10 value: 42.954 - type: map_at_100 value: 44.151 - type: map_at_1000 value: 44.287 - type: map_at_3 value: 39.912 - type: map_at_5 value: 41.798 - type: mrr_at_1 value: 41.465 - type: mrr_at_10 value: 49.351 - type: mrr_at_100 value: 49.980000000000004 - type: mrr_at_1000 value: 50.016000000000005 - type: mrr_at_3 value: 47.144000000000005 - type: mrr_at_5 value: 48.592999999999996 - type: ndcg_at_1 value: 41.465 - type: ndcg_at_10 value: 48.565999999999995 - type: ndcg_at_100 value: 52.76499999999999 - type: ndcg_at_1000 value: 54.749 - type: ndcg_at_3 value: 44.57 - type: ndcg_at_5 value: 46.759 - type: precision_at_1 value: 41.465 - type: precision_at_10 value: 9.107999999999999 - type: precision_at_100 value: 1.433 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 21.423000000000002 - type: precision_at_5 value: 15.414 - type: recall_at_1 value: 32.671 - type: recall_at_10 value: 57.738 - type: recall_at_100 value: 75.86500000000001 - type: recall_at_1000 value: 88.36 - type: recall_at_3 value: 45.626 - type: recall_at_5 value: 51.812000000000005 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 41.185 - type: map_at_10 value: 53.929 - type: map_at_100 value: 54.92 - type: map_at_1000 value: 54.967999999999996 - type: map_at_3 value: 
50.70400000000001 - type: map_at_5 value: 52.673 - type: mrr_at_1 value: 47.398 - type: mrr_at_10 value: 57.303000000000004 - type: mrr_at_100 value: 57.959 - type: mrr_at_1000 value: 57.985 - type: mrr_at_3 value: 54.932 - type: mrr_at_5 value: 56.464999999999996 - type: ndcg_at_1 value: 47.398 - type: ndcg_at_10 value: 59.653 - type: ndcg_at_100 value: 63.627 - type: ndcg_at_1000 value: 64.596 - type: ndcg_at_3 value: 54.455 - type: ndcg_at_5 value: 57.245000000000005 - type: precision_at_1 value: 47.398 - type: precision_at_10 value: 9.524000000000001 - type: precision_at_100 value: 1.243 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 24.389 - type: precision_at_5 value: 16.752 - type: recall_at_1 value: 41.185 - type: recall_at_10 value: 73.193 - type: recall_at_100 value: 90.357 - type: recall_at_1000 value: 97.253 - type: recall_at_3 value: 59.199999999999996 - type: recall_at_5 value: 66.118 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.27 - type: map_at_10 value: 36.223 - type: map_at_100 value: 37.218 - type: map_at_1000 value: 37.293 - type: map_at_3 value: 33.503 - type: map_at_5 value: 35.097 - type: mrr_at_1 value: 29.492 - type: mrr_at_10 value: 38.352000000000004 - type: mrr_at_100 value: 39.188 - type: mrr_at_1000 value: 39.247 - type: mrr_at_3 value: 35.876000000000005 - type: mrr_at_5 value: 37.401 - type: ndcg_at_1 value: 29.492 - type: ndcg_at_10 value: 41.239 - type: ndcg_at_100 value: 46.066 - type: ndcg_at_1000 value: 47.992000000000004 - type: ndcg_at_3 value: 36.11 - type: ndcg_at_5 value: 38.772 - type: precision_at_1 value: 29.492 - type: precision_at_10 value: 6.260000000000001 - type: precision_at_100 value: 0.914 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 15.104000000000001 - type: precision_at_5 value: 10.644 - type: recall_at_1 value: 27.27 - type: recall_at_10 value: 54.589 - type: recall_at_100 value: 76.70700000000001 - type: recall_at_1000 value: 91.158 - type: recall_at_3 value: 40.974 - type: recall_at_5 value: 47.327000000000005 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.848 - type: map_at_10 value: 26.207 - type: map_at_100 value: 27.478 - type: map_at_1000 value: 27.602 - type: map_at_3 value: 23.405 - type: map_at_5 value: 24.98 - type: mrr_at_1 value: 21.891 - type: mrr_at_10 value: 31.041999999999998 - type: mrr_at_100 value: 32.092 - type: mrr_at_1000 value: 32.151999999999994 - type: mrr_at_3 value: 28.358 - type: mrr_at_5 value: 29.969 - type: ndcg_at_1 value: 21.891 - type: ndcg_at_10 value: 31.585 - type: ndcg_at_100 value: 37.531 - type: ndcg_at_1000 value: 40.256 - type: ndcg_at_3 value: 26.508 - type: ndcg_at_5 value: 28.894 - type: precision_at_1 value: 21.891 - type: precision_at_10 value: 5.795999999999999 - type: precision_at_100 value: 0.9990000000000001 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 12.769 - type: precision_at_5 value: 9.279 - type: recall_at_1 value: 17.848 - type: recall_at_10 value: 43.452 - type: recall_at_100 value: 69.216 - type: recall_at_1000 value: 88.102 - type: recall_at_3 value: 29.18 - type: recall_at_5 value: 35.347 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test 
revision: None metrics: - type: map_at_1 value: 30.94 - type: map_at_10 value: 41.248000000000005 - type: map_at_100 value: 42.495 - type: map_at_1000 value: 42.602000000000004 - type: map_at_3 value: 37.939 - type: map_at_5 value: 39.924 - type: mrr_at_1 value: 37.824999999999996 - type: mrr_at_10 value: 47.041 - type: mrr_at_100 value: 47.83 - type: mrr_at_1000 value: 47.878 - type: mrr_at_3 value: 44.466 - type: mrr_at_5 value: 46.111999999999995 - type: ndcg_at_1 value: 37.824999999999996 - type: ndcg_at_10 value: 47.223 - type: ndcg_at_100 value: 52.394 - type: ndcg_at_1000 value: 54.432 - type: ndcg_at_3 value: 42.032000000000004 - type: ndcg_at_5 value: 44.772 - type: precision_at_1 value: 37.824999999999996 - type: precision_at_10 value: 8.393 - type: precision_at_100 value: 1.2890000000000001 - type: precision_at_1000 value: 0.164 - type: precision_at_3 value: 19.698 - type: precision_at_5 value: 14.013 - type: recall_at_1 value: 30.94 - type: recall_at_10 value: 59.316 - type: recall_at_100 value: 80.783 - type: recall_at_1000 value: 94.15400000000001 - type: recall_at_3 value: 44.712 - type: recall_at_5 value: 51.932 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.104 - type: map_at_10 value: 36.675999999999995 - type: map_at_100 value: 38.076 - type: map_at_1000 value: 38.189 - type: map_at_3 value: 33.733999999999995 - type: map_at_5 value: 35.287 - type: mrr_at_1 value: 33.904 - type: mrr_at_10 value: 42.55 - type: mrr_at_100 value: 43.434 - type: mrr_at_1000 value: 43.494 - type: mrr_at_3 value: 40.126 - type: mrr_at_5 value: 41.473 - type: ndcg_at_1 value: 33.904 - type: ndcg_at_10 value: 42.414 - type: ndcg_at_100 value: 48.203 - type: ndcg_at_1000 value: 50.437 - type: ndcg_at_3 value: 37.633 - type: ndcg_at_5 value: 39.67 - type: precision_at_1 value: 33.904 - type: precision_at_10 value: 7.82 - type: precision_at_100 value: 1.2409999999999999 - type: precision_at_1000 value: 0.159 - type: precision_at_3 value: 17.884 - type: precision_at_5 value: 12.648000000000001 - type: recall_at_1 value: 27.104 - type: recall_at_10 value: 53.563 - type: recall_at_100 value: 78.557 - type: recall_at_1000 value: 93.533 - type: recall_at_3 value: 39.92 - type: recall_at_5 value: 45.457 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.707749999999997 - type: map_at_10 value: 36.961 - type: map_at_100 value: 38.158833333333334 - type: map_at_1000 value: 38.270333333333326 - type: map_at_3 value: 34.07183333333334 - type: map_at_5 value: 35.69533333333334 - type: mrr_at_1 value: 32.81875 - type: mrr_at_10 value: 41.293 - type: mrr_at_100 value: 42.116499999999995 - type: mrr_at_1000 value: 42.170249999999996 - type: mrr_at_3 value: 38.83983333333333 - type: mrr_at_5 value: 40.29775 - type: ndcg_at_1 value: 32.81875 - type: ndcg_at_10 value: 42.355 - type: ndcg_at_100 value: 47.41374999999999 - type: ndcg_at_1000 value: 49.5805 - type: ndcg_at_3 value: 37.52825 - type: ndcg_at_5 value: 39.83266666666667 - type: precision_at_1 value: 32.81875 - type: precision_at_10 value: 7.382416666666666 - type: precision_at_100 value: 1.1640833333333334 - type: precision_at_1000 value: 0.15383333333333335 - type: precision_at_3 value: 17.134166666666665 - type: precision_at_5 value: 12.174833333333336 - type: recall_at_1 value: 27.707749999999997 - type: 
recall_at_10 value: 53.945 - type: recall_at_100 value: 76.191 - type: recall_at_1000 value: 91.101 - type: recall_at_3 value: 40.39083333333334 - type: recall_at_5 value: 46.40083333333333 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.482 - type: map_at_10 value: 33.201 - type: map_at_100 value: 34.107 - type: map_at_1000 value: 34.197 - type: map_at_3 value: 31.174000000000003 - type: map_at_5 value: 32.279 - type: mrr_at_1 value: 29.908 - type: mrr_at_10 value: 36.235 - type: mrr_at_100 value: 37.04 - type: mrr_at_1000 value: 37.105 - type: mrr_at_3 value: 34.355999999999995 - type: mrr_at_5 value: 35.382999999999996 - type: ndcg_at_1 value: 29.908 - type: ndcg_at_10 value: 37.325 - type: ndcg_at_100 value: 41.795 - type: ndcg_at_1000 value: 44.105 - type: ndcg_at_3 value: 33.555 - type: ndcg_at_5 value: 35.266999999999996 - type: precision_at_1 value: 29.908 - type: precision_at_10 value: 5.721 - type: precision_at_100 value: 0.8630000000000001 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 14.008000000000001 - type: precision_at_5 value: 9.754999999999999 - type: recall_at_1 value: 26.482 - type: recall_at_10 value: 47.072 - type: recall_at_100 value: 67.27 - type: recall_at_1000 value: 84.371 - type: recall_at_3 value: 36.65 - type: recall_at_5 value: 40.774 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.815 - type: map_at_10 value: 26.369999999999997 - type: map_at_100 value: 27.458 - type: map_at_1000 value: 27.588 - type: map_at_3 value: 23.990000000000002 - type: map_at_5 value: 25.345000000000002 - type: mrr_at_1 value: 22.953000000000003 - type: mrr_at_10 value: 30.342999999999996 - type: mrr_at_100 value: 31.241000000000003 - type: mrr_at_1000 value: 31.319000000000003 - type: mrr_at_3 value: 28.16 - type: mrr_at_5 value: 29.406 - type: ndcg_at_1 value: 22.953000000000003 - type: ndcg_at_10 value: 31.151 - type: ndcg_at_100 value: 36.309000000000005 - type: ndcg_at_1000 value: 39.227000000000004 - type: ndcg_at_3 value: 26.921 - type: ndcg_at_5 value: 28.938000000000002 - type: precision_at_1 value: 22.953000000000003 - type: precision_at_10 value: 5.602 - type: precision_at_100 value: 0.9530000000000001 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 12.606 - type: precision_at_5 value: 9.119 - type: recall_at_1 value: 18.815 - type: recall_at_10 value: 41.574 - type: recall_at_100 value: 64.84400000000001 - type: recall_at_1000 value: 85.406 - type: recall_at_3 value: 29.694 - type: recall_at_5 value: 34.935 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.840999999999998 - type: map_at_10 value: 36.797999999999995 - type: map_at_100 value: 37.993 - type: map_at_1000 value: 38.086999999999996 - type: map_at_3 value: 34.050999999999995 - type: map_at_5 value: 35.379 - type: mrr_at_1 value: 32.649 - type: mrr_at_10 value: 41.025 - type: mrr_at_100 value: 41.878 - type: mrr_at_1000 value: 41.929 - type: mrr_at_3 value: 38.573 - type: mrr_at_5 value: 39.715 - type: ndcg_at_1 value: 32.649 - type: ndcg_at_10 value: 42.142 - type: ndcg_at_100 value: 47.558 - type: ndcg_at_1000 value: 49.643 - type: ndcg_at_3 value: 37.12 - type: ndcg_at_5 value: 
38.983000000000004 - type: precision_at_1 value: 32.649 - type: precision_at_10 value: 7.08 - type: precision_at_100 value: 1.1039999999999999 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 16.698 - type: precision_at_5 value: 11.511000000000001 - type: recall_at_1 value: 27.840999999999998 - type: recall_at_10 value: 54.245 - type: recall_at_100 value: 77.947 - type: recall_at_1000 value: 92.36999999999999 - type: recall_at_3 value: 40.146 - type: recall_at_5 value: 44.951 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.529000000000003 - type: map_at_10 value: 35.010000000000005 - type: map_at_100 value: 36.647 - type: map_at_1000 value: 36.857 - type: map_at_3 value: 31.968000000000004 - type: map_at_5 value: 33.554 - type: mrr_at_1 value: 31.818 - type: mrr_at_10 value: 39.550999999999995 - type: mrr_at_100 value: 40.54 - type: mrr_at_1000 value: 40.596 - type: mrr_at_3 value: 36.726 - type: mrr_at_5 value: 38.416 - type: ndcg_at_1 value: 31.818 - type: ndcg_at_10 value: 40.675 - type: ndcg_at_100 value: 46.548 - type: ndcg_at_1000 value: 49.126 - type: ndcg_at_3 value: 35.829 - type: ndcg_at_5 value: 38.0 - type: precision_at_1 value: 31.818 - type: precision_at_10 value: 7.826 - type: precision_at_100 value: 1.538 - type: precision_at_1000 value: 0.24 - type: precision_at_3 value: 16.601 - type: precision_at_5 value: 12.095 - type: recall_at_1 value: 26.529000000000003 - type: recall_at_10 value: 51.03 - type: recall_at_100 value: 77.556 - type: recall_at_1000 value: 93.804 - type: recall_at_3 value: 36.986000000000004 - type: recall_at_5 value: 43.096000000000004 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.480999999999998 - type: map_at_10 value: 30.817 - type: map_at_100 value: 31.838 - type: map_at_1000 value: 31.932 - type: map_at_3 value: 28.011999999999997 - type: map_at_5 value: 29.668 - type: mrr_at_1 value: 25.323 - type: mrr_at_10 value: 33.072 - type: mrr_at_100 value: 33.926 - type: mrr_at_1000 value: 33.993 - type: mrr_at_3 value: 30.436999999999998 - type: mrr_at_5 value: 32.092 - type: ndcg_at_1 value: 25.323 - type: ndcg_at_10 value: 35.514 - type: ndcg_at_100 value: 40.489000000000004 - type: ndcg_at_1000 value: 42.908 - type: ndcg_at_3 value: 30.092000000000002 - type: ndcg_at_5 value: 32.989000000000004 - type: precision_at_1 value: 25.323 - type: precision_at_10 value: 5.545 - type: precision_at_100 value: 0.861 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 12.446 - type: precision_at_5 value: 9.131 - type: recall_at_1 value: 23.480999999999998 - type: recall_at_10 value: 47.825 - type: recall_at_100 value: 70.652 - type: recall_at_1000 value: 88.612 - type: recall_at_3 value: 33.537 - type: recall_at_5 value: 40.542 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 13.333999999999998 - type: map_at_10 value: 22.524 - type: map_at_100 value: 24.506 - type: map_at_1000 value: 24.715 - type: map_at_3 value: 19.022 - type: map_at_5 value: 20.693 - type: mrr_at_1 value: 29.186 - type: mrr_at_10 value: 41.22 - type: mrr_at_100 value: 42.16 - type: mrr_at_1000 value: 42.192 - type: mrr_at_3 value: 38.013000000000005 - type: mrr_at_5 value: 39.704 - type: 
ndcg_at_1 value: 29.186 - type: ndcg_at_10 value: 31.167 - type: ndcg_at_100 value: 38.879000000000005 - type: ndcg_at_1000 value: 42.376000000000005 - type: ndcg_at_3 value: 25.817 - type: ndcg_at_5 value: 27.377000000000002 - type: precision_at_1 value: 29.186 - type: precision_at_10 value: 9.693999999999999 - type: precision_at_100 value: 1.8030000000000002 - type: precision_at_1000 value: 0.246 - type: precision_at_3 value: 19.11 - type: precision_at_5 value: 14.344999999999999 - type: recall_at_1 value: 13.333999999999998 - type: recall_at_10 value: 37.092000000000006 - type: recall_at_100 value: 63.651 - type: recall_at_1000 value: 83.05 - type: recall_at_3 value: 23.74 - type: recall_at_5 value: 28.655 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 9.151 - type: map_at_10 value: 19.653000000000002 - type: map_at_100 value: 28.053 - type: map_at_1000 value: 29.709000000000003 - type: map_at_3 value: 14.191 - type: map_at_5 value: 16.456 - type: mrr_at_1 value: 66.25 - type: mrr_at_10 value: 74.4 - type: mrr_at_100 value: 74.715 - type: mrr_at_1000 value: 74.726 - type: mrr_at_3 value: 72.417 - type: mrr_at_5 value: 73.667 - type: ndcg_at_1 value: 54.25 - type: ndcg_at_10 value: 40.77 - type: ndcg_at_100 value: 46.359 - type: ndcg_at_1000 value: 54.193000000000005 - type: ndcg_at_3 value: 44.832 - type: ndcg_at_5 value: 42.63 - type: precision_at_1 value: 66.25 - type: precision_at_10 value: 32.175 - type: precision_at_100 value: 10.668 - type: precision_at_1000 value: 2.067 - type: precision_at_3 value: 47.667 - type: precision_at_5 value: 41.3 - type: recall_at_1 value: 9.151 - type: recall_at_10 value: 25.003999999999998 - type: recall_at_100 value: 52.976 - type: recall_at_1000 value: 78.315 - type: recall_at_3 value: 15.487 - type: recall_at_5 value: 18.999 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.89999999999999 - type: f1 value: 46.47777925067403 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 73.706 - type: map_at_10 value: 82.423 - type: map_at_100 value: 82.67999999999999 - type: map_at_1000 value: 82.694 - type: map_at_3 value: 81.328 - type: map_at_5 value: 82.001 - type: mrr_at_1 value: 79.613 - type: mrr_at_10 value: 87.07000000000001 - type: mrr_at_100 value: 87.169 - type: mrr_at_1000 value: 87.17 - type: mrr_at_3 value: 86.404 - type: mrr_at_5 value: 86.856 - type: ndcg_at_1 value: 79.613 - type: ndcg_at_10 value: 86.289 - type: ndcg_at_100 value: 87.201 - type: ndcg_at_1000 value: 87.428 - type: ndcg_at_3 value: 84.625 - type: ndcg_at_5 value: 85.53699999999999 - type: precision_at_1 value: 79.613 - type: precision_at_10 value: 10.399 - type: precision_at_100 value: 1.1079999999999999 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 32.473 - type: precision_at_5 value: 20.132 - type: recall_at_1 value: 73.706 - type: recall_at_10 value: 93.559 - type: recall_at_100 value: 97.188 - type: recall_at_1000 value: 98.555 - type: recall_at_3 value: 88.98700000000001 - type: recall_at_5 value: 91.373 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 19.841 - type: map_at_10 value: 32.643 - type: 
map_at_100 value: 34.575 - type: map_at_1000 value: 34.736 - type: map_at_3 value: 28.317999999999998 - type: map_at_5 value: 30.964000000000002 - type: mrr_at_1 value: 39.660000000000004 - type: mrr_at_10 value: 48.620000000000005 - type: mrr_at_100 value: 49.384 - type: mrr_at_1000 value: 49.415 - type: mrr_at_3 value: 45.988 - type: mrr_at_5 value: 47.361 - type: ndcg_at_1 value: 39.660000000000004 - type: ndcg_at_10 value: 40.646 - type: ndcg_at_100 value: 47.657 - type: ndcg_at_1000 value: 50.428 - type: ndcg_at_3 value: 36.689 - type: ndcg_at_5 value: 38.211 - type: precision_at_1 value: 39.660000000000004 - type: precision_at_10 value: 11.235000000000001 - type: precision_at_100 value: 1.8530000000000002 - type: precision_at_1000 value: 0.23600000000000002 - type: precision_at_3 value: 24.587999999999997 - type: precision_at_5 value: 18.395 - type: recall_at_1 value: 19.841 - type: recall_at_10 value: 48.135 - type: recall_at_100 value: 74.224 - type: recall_at_1000 value: 90.826 - type: recall_at_3 value: 33.536 - type: recall_at_5 value: 40.311 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 40.358 - type: map_at_10 value: 64.497 - type: map_at_100 value: 65.362 - type: map_at_1000 value: 65.41900000000001 - type: map_at_3 value: 61.06700000000001 - type: map_at_5 value: 63.317 - type: mrr_at_1 value: 80.716 - type: mrr_at_10 value: 86.10799999999999 - type: mrr_at_100 value: 86.265 - type: mrr_at_1000 value: 86.27 - type: mrr_at_3 value: 85.271 - type: mrr_at_5 value: 85.82499999999999 - type: ndcg_at_1 value: 80.716 - type: ndcg_at_10 value: 72.597 - type: ndcg_at_100 value: 75.549 - type: ndcg_at_1000 value: 76.61 - type: ndcg_at_3 value: 67.874 - type: ndcg_at_5 value: 70.655 - type: precision_at_1 value: 80.716 - type: precision_at_10 value: 15.148 - type: precision_at_100 value: 1.745 - type: precision_at_1000 value: 0.188 - type: precision_at_3 value: 43.597 - type: precision_at_5 value: 28.351 - type: recall_at_1 value: 40.358 - type: recall_at_10 value: 75.739 - type: recall_at_100 value: 87.259 - type: recall_at_1000 value: 94.234 - type: recall_at_3 value: 65.39500000000001 - type: recall_at_5 value: 70.878 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.80799999999998 - type: ap value: 86.81350378180757 - type: f1 value: 90.79901248314215 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 22.096 - type: map_at_10 value: 34.384 - type: map_at_100 value: 35.541 - type: map_at_1000 value: 35.589999999999996 - type: map_at_3 value: 30.496000000000002 - type: map_at_5 value: 32.718 - type: mrr_at_1 value: 22.750999999999998 - type: mrr_at_10 value: 35.024 - type: mrr_at_100 value: 36.125 - type: mrr_at_1000 value: 36.168 - type: mrr_at_3 value: 31.225 - type: mrr_at_5 value: 33.416000000000004 - type: ndcg_at_1 value: 22.750999999999998 - type: ndcg_at_10 value: 41.351 - type: ndcg_at_100 value: 46.92 - type: ndcg_at_1000 value: 48.111 - type: ndcg_at_3 value: 33.439 - type: ndcg_at_5 value: 37.407000000000004 - type: precision_at_1 value: 22.750999999999998 - type: precision_at_10 value: 6.564 - type: precision_at_100 value: 0.935 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.288 - type: precision_at_5 value: 
10.581999999999999 - type: recall_at_1 value: 22.096 - type: recall_at_10 value: 62.771 - type: recall_at_100 value: 88.529 - type: recall_at_1000 value: 97.55 - type: recall_at_3 value: 41.245 - type: recall_at_5 value: 50.788 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.16780665754673 - type: f1 value: 93.96331194859894 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 76.90606475148198 - type: f1 value: 58.58344986604187 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 76.14660390047075 - type: f1 value: 74.31533923533614 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.16139878950908 - type: f1 value: 80.18532656824924 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.949880906135085 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.56300351524862 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.196521894371315 - type: mrr value: 32.22644231694389 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.783 - type: map_at_10 value: 14.549000000000001 - type: map_at_100 value: 18.433 - type: map_at_1000 value: 19.949 - type: map_at_3 value: 10.936 - type: map_at_5 value: 12.514 - type: mrr_at_1 value: 47.368 - type: mrr_at_10 value: 56.42 - type: mrr_at_100 value: 56.908 - type: mrr_at_1000 value: 56.95 - type: mrr_at_3 value: 54.283 - type: mrr_at_5 value: 55.568 - type: ndcg_at_1 value: 45.666000000000004 - type: ndcg_at_10 value: 37.389 - type: ndcg_at_100 value: 34.253 - type: ndcg_at_1000 value: 43.059999999999995 - type: ndcg_at_3 value: 42.725 - type: ndcg_at_5 value: 40.193 - type: precision_at_1 value: 47.368 - type: precision_at_10 value: 27.988000000000003 - type: precision_at_100 value: 8.672 - type: precision_at_1000 value: 2.164 - type: precision_at_3 value: 40.248 - type: precision_at_5 value: 34.737 - type: recall_at_1 value: 6.783 - type: recall_at_10 value: 17.838 - type: recall_at_100 value: 33.672000000000004 - type: recall_at_1000 value: 66.166 - type: recall_at_3 value: 11.849 - type: recall_at_5 value: 14.205000000000002 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 31.698999999999998 - type: map_at_10 value: 46.556 - type: map_at_100 value: 47.652 - type: map_at_1000 value: 47.68 - type: map_at_3 value: 42.492000000000004 - type: map_at_5 value: 
44.763999999999996 - type: mrr_at_1 value: 35.747 - type: mrr_at_10 value: 49.242999999999995 - type: mrr_at_100 value: 50.052 - type: mrr_at_1000 value: 50.068 - type: mrr_at_3 value: 45.867000000000004 - type: mrr_at_5 value: 47.778999999999996 - type: ndcg_at_1 value: 35.717999999999996 - type: ndcg_at_10 value: 54.14600000000001 - type: ndcg_at_100 value: 58.672999999999995 - type: ndcg_at_1000 value: 59.279 - type: ndcg_at_3 value: 46.407 - type: ndcg_at_5 value: 50.181 - type: precision_at_1 value: 35.717999999999996 - type: precision_at_10 value: 8.844000000000001 - type: precision_at_100 value: 1.139 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 20.993000000000002 - type: precision_at_5 value: 14.791000000000002 - type: recall_at_1 value: 31.698999999999998 - type: recall_at_10 value: 74.693 - type: recall_at_100 value: 94.15299999999999 - type: recall_at_1000 value: 98.585 - type: recall_at_3 value: 54.388999999999996 - type: recall_at_5 value: 63.08200000000001 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 71.283 - type: map_at_10 value: 85.24000000000001 - type: map_at_100 value: 85.882 - type: map_at_1000 value: 85.897 - type: map_at_3 value: 82.326 - type: map_at_5 value: 84.177 - type: mrr_at_1 value: 82.21000000000001 - type: mrr_at_10 value: 88.228 - type: mrr_at_100 value: 88.32 - type: mrr_at_1000 value: 88.32 - type: mrr_at_3 value: 87.323 - type: mrr_at_5 value: 87.94800000000001 - type: ndcg_at_1 value: 82.17999999999999 - type: ndcg_at_10 value: 88.9 - type: ndcg_at_100 value: 90.079 - type: ndcg_at_1000 value: 90.158 - type: ndcg_at_3 value: 86.18299999999999 - type: ndcg_at_5 value: 87.71799999999999 - type: precision_at_1 value: 82.17999999999999 - type: precision_at_10 value: 13.464 - type: precision_at_100 value: 1.533 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.693 - type: precision_at_5 value: 24.792 - type: recall_at_1 value: 71.283 - type: recall_at_10 value: 95.742 - type: recall_at_100 value: 99.67200000000001 - type: recall_at_1000 value: 99.981 - type: recall_at_3 value: 87.888 - type: recall_at_5 value: 92.24 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 56.24267063669042 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 62.88056988932578 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.903 - type: map_at_10 value: 13.202 - type: map_at_100 value: 15.5 - type: map_at_1000 value: 15.870999999999999 - type: map_at_3 value: 9.407 - type: map_at_5 value: 11.238 - type: mrr_at_1 value: 24.2 - type: mrr_at_10 value: 35.867 - type: mrr_at_100 value: 37.001 - type: mrr_at_1000 value: 37.043 - type: mrr_at_3 value: 32.5 - type: mrr_at_5 value: 34.35 - type: ndcg_at_1 value: 24.2 - type: ndcg_at_10 value: 21.731 - type: ndcg_at_100 value: 30.7 - type: ndcg_at_1000 value: 36.618 - type: ndcg_at_3 value: 20.72 - type: ndcg_at_5 value: 17.954 - type: precision_at_1 value: 24.2 - type: precision_at_10 value: 11.33 - type: precision_at_100 value: 2.4410000000000003 - type: precision_at_1000 value: 0.386 - type: 
precision_at_3 value: 19.667 - type: precision_at_5 value: 15.86 - type: recall_at_1 value: 4.903 - type: recall_at_10 value: 22.962 - type: recall_at_100 value: 49.563 - type: recall_at_1000 value: 78.238 - type: recall_at_3 value: 11.953 - type: recall_at_5 value: 16.067999999999998 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.12694254604078 - type: cos_sim_spearman value: 80.30141815181918 - type: euclidean_pearson value: 81.34015449877128 - type: euclidean_spearman value: 80.13984197010849 - type: manhattan_pearson value: 81.31767068124086 - type: manhattan_spearman value: 80.11720513114103 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.13112984010417 - type: cos_sim_spearman value: 78.03063573402875 - type: euclidean_pearson value: 83.51928418844804 - type: euclidean_spearman value: 78.4045235411144 - type: manhattan_pearson value: 83.49981637388689 - type: manhattan_spearman value: 78.4042575139372 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 82.50327987379504 - type: cos_sim_spearman value: 84.18556767756205 - type: euclidean_pearson value: 82.69684424327679 - type: euclidean_spearman value: 83.5368106038335 - type: manhattan_pearson value: 82.57967581007374 - type: manhattan_spearman value: 83.43009053133697 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.50756863007814 - type: cos_sim_spearman value: 82.27204331279108 - type: euclidean_pearson value: 81.39535251429741 - type: euclidean_spearman value: 81.84386626336239 - type: manhattan_pearson value: 81.34281737280695 - type: manhattan_spearman value: 81.81149375673166 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.8727714856726 - type: cos_sim_spearman value: 87.95738287792312 - type: euclidean_pearson value: 86.62920602795887 - type: euclidean_spearman value: 87.05207355381243 - type: manhattan_pearson value: 86.53587918472225 - type: manhattan_spearman value: 86.95382961029586 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.52240359769479 - type: cos_sim_spearman value: 85.47685776238286 - type: euclidean_pearson value: 84.25815333483058 - type: euclidean_spearman value: 85.27415639683198 - type: manhattan_pearson value: 84.29127757025637 - type: manhattan_spearman value: 85.30226224917351 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 86.42501708915708 - type: cos_sim_spearman value: 86.42276182795041 - type: euclidean_pearson value: 86.5408207354761 - type: euclidean_spearman value: 85.46096321750838 - type: manhattan_pearson value: 86.54177303026881 - type: manhattan_spearman value: 85.50313151916117 - task: type: STS dataset: type: 
mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.86521089250766 - type: cos_sim_spearman value: 65.94868540323003 - type: euclidean_pearson value: 67.16569626533084 - type: euclidean_spearman value: 66.37667004134917 - type: manhattan_pearson value: 67.1482365102333 - type: manhattan_spearman value: 66.53240122580029 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.64746265365318 - type: cos_sim_spearman value: 86.41888825906786 - type: euclidean_pearson value: 85.27453642725811 - type: euclidean_spearman value: 85.94095796602544 - type: manhattan_pearson value: 85.28643660505334 - type: manhattan_spearman value: 85.95028003260744 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.48903153618527 - type: mrr value: 96.41081503826601 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 58.594 - type: map_at_10 value: 69.296 - type: map_at_100 value: 69.782 - type: map_at_1000 value: 69.795 - type: map_at_3 value: 66.23 - type: map_at_5 value: 68.293 - type: mrr_at_1 value: 61.667 - type: mrr_at_10 value: 70.339 - type: mrr_at_100 value: 70.708 - type: mrr_at_1000 value: 70.722 - type: mrr_at_3 value: 68.0 - type: mrr_at_5 value: 69.56700000000001 - type: ndcg_at_1 value: 61.667 - type: ndcg_at_10 value: 74.039 - type: ndcg_at_100 value: 76.103 - type: ndcg_at_1000 value: 76.47800000000001 - type: ndcg_at_3 value: 68.967 - type: ndcg_at_5 value: 71.96900000000001 - type: precision_at_1 value: 61.667 - type: precision_at_10 value: 9.866999999999999 - type: precision_at_100 value: 1.097 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 27.111 - type: precision_at_5 value: 18.2 - type: recall_at_1 value: 58.594 - type: recall_at_10 value: 87.422 - type: recall_at_100 value: 96.667 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 74.217 - type: recall_at_5 value: 81.539 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.85049504950496 - type: cos_sim_ap value: 96.33111544137081 - type: cos_sim_f1 value: 92.35443037974684 - type: cos_sim_precision value: 93.53846153846153 - type: cos_sim_recall value: 91.2 - type: dot_accuracy value: 99.82376237623762 - type: dot_ap value: 95.38082527310888 - type: dot_f1 value: 90.90909090909092 - type: dot_precision value: 92.90187891440502 - type: dot_recall value: 89.0 - type: euclidean_accuracy value: 99.84851485148515 - type: euclidean_ap value: 96.32316003996347 - type: euclidean_f1 value: 92.2071392659628 - type: euclidean_precision value: 92.71991911021233 - type: euclidean_recall value: 91.7 - type: manhattan_accuracy value: 99.84851485148515 - type: manhattan_ap value: 96.3655668249217 - type: manhattan_f1 value: 92.18356026222895 - type: manhattan_precision value: 92.98067141403867 - type: manhattan_recall value: 91.4 - type: max_accuracy value: 99.85049504950496 - type: max_ap value: 96.3655668249217 - type: 
max_f1 value: 92.35443037974684 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 65.94861371629051 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 35.009430451385 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 54.61164066427969 - type: mrr value: 55.49710603938544 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.622620124907662 - type: cos_sim_spearman value: 31.0678351356163 - type: dot_pearson value: 30.863727693306814 - type: dot_spearman value: 31.230306567021255 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.22 - type: map_at_10 value: 2.011 - type: map_at_100 value: 10.974 - type: map_at_1000 value: 25.819 - type: map_at_3 value: 0.6649999999999999 - type: map_at_5 value: 1.076 - type: mrr_at_1 value: 86.0 - type: mrr_at_10 value: 91.8 - type: mrr_at_100 value: 91.8 - type: mrr_at_1000 value: 91.8 - type: mrr_at_3 value: 91.0 - type: mrr_at_5 value: 91.8 - type: ndcg_at_1 value: 82.0 - type: ndcg_at_10 value: 78.07300000000001 - type: ndcg_at_100 value: 58.231 - type: ndcg_at_1000 value: 51.153000000000006 - type: ndcg_at_3 value: 81.123 - type: ndcg_at_5 value: 81.059 - type: precision_at_1 value: 86.0 - type: precision_at_10 value: 83.0 - type: precision_at_100 value: 59.38 - type: precision_at_1000 value: 22.55 - type: precision_at_3 value: 87.333 - type: precision_at_5 value: 86.8 - type: recall_at_1 value: 0.22 - type: recall_at_10 value: 2.2079999999999997 - type: recall_at_100 value: 14.069 - type: recall_at_1000 value: 47.678 - type: recall_at_3 value: 0.7040000000000001 - type: recall_at_5 value: 1.161 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.809 - type: map_at_10 value: 10.394 - type: map_at_100 value: 16.598 - type: map_at_1000 value: 18.142 - type: map_at_3 value: 5.572 - type: map_at_5 value: 7.1370000000000005 - type: mrr_at_1 value: 32.653 - type: mrr_at_10 value: 46.564 - type: mrr_at_100 value: 47.469 - type: mrr_at_1000 value: 47.469 - type: mrr_at_3 value: 42.177 - type: mrr_at_5 value: 44.524 - type: ndcg_at_1 value: 30.612000000000002 - type: ndcg_at_10 value: 25.701 - type: ndcg_at_100 value: 37.532 - type: ndcg_at_1000 value: 48.757 - type: ndcg_at_3 value: 28.199999999999996 - type: ndcg_at_5 value: 25.987 - type: precision_at_1 value: 32.653 - type: precision_at_10 value: 23.469 - type: precision_at_100 value: 7.9799999999999995 - type: precision_at_1000 value: 1.5350000000000001 - type: precision_at_3 value: 29.932 - type: precision_at_5 value: 26.122 - type: recall_at_1 value: 2.809 - type: recall_at_10 value: 16.887 - type: recall_at_100 value: 48.67 - type: recall_at_1000 value: 82.89699999999999 - type: recall_at_3 value: 6.521000000000001 - type: recall_at_5 value: 9.609 - task: 
type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.57860000000001 - type: ap value: 13.82629211536393 - type: f1 value: 54.59860966183956 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.38030560271647 - type: f1 value: 59.69685552567865 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.4736717043405 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.92853311080646 - type: cos_sim_ap value: 77.67872502591382 - type: cos_sim_f1 value: 70.33941236068895 - type: cos_sim_precision value: 67.63273258645884 - type: cos_sim_recall value: 73.27176781002639 - type: dot_accuracy value: 85.79603027954938 - type: dot_ap value: 73.73786190233379 - type: dot_f1 value: 67.3437901774235 - type: dot_precision value: 65.67201604814443 - type: dot_recall value: 69.10290237467018 - type: euclidean_accuracy value: 86.94045419324074 - type: euclidean_ap value: 77.6687791535167 - type: euclidean_f1 value: 70.47209214023542 - type: euclidean_precision value: 67.7207492094381 - type: euclidean_recall value: 73.45646437994723 - type: manhattan_accuracy value: 86.87488823985218 - type: manhattan_ap value: 77.63373392430728 - type: manhattan_f1 value: 70.40920716112532 - type: manhattan_precision value: 68.31265508684864 - type: manhattan_recall value: 72.63852242744063 - type: max_accuracy value: 86.94045419324074 - type: max_ap value: 77.67872502591382 - type: max_f1 value: 70.47209214023542 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.67155664221679 - type: cos_sim_ap value: 85.64591703003417 - type: cos_sim_f1 value: 77.59531005352656 - type: cos_sim_precision value: 73.60967184801382 - type: cos_sim_recall value: 82.03726516784724 - type: dot_accuracy value: 88.41541506578181 - type: dot_ap value: 84.6482788957769 - type: dot_f1 value: 77.04748541466657 - type: dot_precision value: 74.02440754931176 - type: dot_recall value: 80.3279950723745 - type: euclidean_accuracy value: 88.63080684596576 - type: euclidean_ap value: 85.44570045321562 - type: euclidean_f1 value: 77.28769403336106 - type: euclidean_precision value: 72.90600040958427 - type: euclidean_recall value: 82.22975053895904 - type: manhattan_accuracy value: 88.59393798269105 - type: manhattan_ap value: 85.40271361038187 - type: manhattan_f1 value: 77.17606419344392 - type: manhattan_precision value: 72.4447747078295 - type: manhattan_recall value: 82.5685247921158 - type: max_accuracy value: 88.67155664221679 - type: max_ap value: 85.64591703003417 - type: max_f1 value: 77.59531005352656 license: mit language: - en --- <h1 align="center">FlagEmbedding</h1> <h4 align="center"> <p> <a href=#model-list>Model List</a> | <a 
href=#frequently-asked-questions>FAQ</a> | <a href=#usage>Usage</a> | <a href="#evaluation">Evaluation</a> | <a href="#train">Train</a> | <a href="#contact">Contact</a> | <a href="#citation">Citation</a> | <a href="#license">License</a> <p> </h4> For more details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding). If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3). [English](README.md) | [ไธญๆ–‡](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md) FlagEmbedding focuses on retrieval-augmented LLMs, consisting of the following projects currently: - **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon) - **Fine-tuning of LM** : [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail) - **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding) - **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) - **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) ## News - 1/30/2024: Release **BGE-M3**, a new member to BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularities (input length up to 8192), **M**ulti-Functionality (unification of dense, lexical, multi-vec/colbert retrieval). It is the first embedding model which supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks. [Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire: - 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. [Technical Report](https://arxiv.org/abs/2401.03462) :fire: - 12/24/2023: Release **LLaRA**, a LLaMA-7B based dense retriever, leading to state-of-the-art performances on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire: - 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire: - 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf) - 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) and [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released - 09/12/2023: New models: - **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding model. 
We recommend to use/fine-tune them to re-rank top-k documents returned by embedding models. - **update embedding model**: release `bge-*-v1.5` embedding model to alleviate the issue of the similarity distribution, and enhance its retrieval ability without instruction. <details> <summary>More</summary> <!-- ### More --> - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning. - 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard). - 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size ๐Ÿค—** - 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada: - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test dataset. </details> ## Model List `bge` is short for `BAAI general embedding`. | Model | Language | | Description | query instruction for retrieval [1] | |:-------------------------------|:--------:| :--------:| :--------:|:--------:| | [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | | | [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) 
[Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `ไธบ่ฟ™ไธชๅฅๅญ็”Ÿๆˆ่กจ็คบไปฅ็”จไบŽๆฃ€็ดข็›ธๅ…ณๆ–‡็ซ ๏ผš` | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `ไธบ่ฟ™ไธชๅฅๅญ็”Ÿๆˆ่กจ็คบไปฅ็”จไบŽๆฃ€็ดข็›ธๅ…ณๆ–‡็ซ ๏ผš` | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `ไธบ่ฟ™ไธชๅฅๅญ็”Ÿๆˆ่กจ็คบไปฅ็”จไบŽๆฃ€็ดข็›ธๅ…ณๆ–‡็ซ ๏ผš` | | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `ไธบ่ฟ™ไธชๅฅๅญ็”Ÿๆˆ่กจ็คบไปฅ็”จไบŽๆฃ€็ดข็›ธๅ…ณๆ–‡็ซ ๏ผš` | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `ไธบ่ฟ™ไธชๅฅๅญ็”Ÿๆˆ่กจ็คบไปฅ็”จไบŽๆฃ€็ดข็›ธๅ…ณๆ–‡็ซ ๏ผš` | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `ไธบ่ฟ™ไธชๅฅๅญ็”Ÿๆˆ่กจ็คบไปฅ็”จไบŽๆฃ€็ดข็›ธๅ…ณๆ–‡็ซ ๏ผš` | [1\]: If you need to search the relevant passages to a query, we suggest to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages. 
[2\]: Unlike the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by other, simpler models. For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results.

All models have been uploaded to the Huggingface Hub, and you can see them at https://huggingface.co/BAAI. If you cannot open the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models .

## Frequently asked questions

<details>
<summary>1. How to fine-tune bge embedding model?</summary>

<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be used to calculate similarity directly; it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.

</details>

<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>

<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**

Since we fine-tune the models by contrastive learning with a temperature of 0.01, the similarity distribution of the current BGE model is roughly in the interval \[0.6, 1\]. So a similarity score greater than 0.5 does not indicate that the two sentences are similar.

For downstream tasks, such as passage retrieval or semantic similarity, **what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold, please select an appropriate threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).

</details>

<details>
<summary>3. When does the query instruction need to be used</summary>

<!-- ### When does the query instruction need to be used -->
For `bge-*-v1.5`, we improved its retrieval ability when no instruction is used. Omitting the instruction causes only a slight degradation in retrieval performance compared with using it, so for convenience you can generate embeddings without an instruction in all cases.

For a retrieval task that uses short queries to find long related documents, it is recommended to add instructions to these short queries.
**The best way to decide whether to add instructions to queries is to choose the setting that achieves better performance on your task.**
In all cases, no instruction needs to be added to the documents/passages.

</details>

## Usage

### Usage for Embedding Model

Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding. ```python from FlagEmbedding import FlagModel sentences_1 = ["ๆ ทไพ‹ๆ•ฐๆฎ-1", "ๆ ทไพ‹ๆ•ฐๆฎ-2"] sentences_2 = ["ๆ ทไพ‹ๆ•ฐๆฎ-3", "ๆ ทไพ‹ๆ•ฐๆฎ-4"] model = FlagModel('BAAI/bge-large-zh-v1.5', query_instruction_for_retrieval="ไธบ่ฟ™ไธชๅฅๅญ็”Ÿๆˆ่กจ็คบไปฅ็”จไบŽๆฃ€็ดข็›ธๅ…ณๆ–‡็ซ ๏ผš", use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation embeddings_1 = model.encode(sentences_1) embeddings_2 = model.encode(sentences_2) similarity = embeddings_1 @ embeddings_2.T print(similarity) # for s2p(short query to long passage) retrieval task, suggest to use encode_queries() which will automatically add the instruction to each query # corpus in retrieval task can still use encode() or encode_corpus(), since they don't need instruction queries = ['query_1', 'query_2'] passages = ["ๆ ทไพ‹ๆ–‡ๆกฃ-1", "ๆ ทไพ‹ๆ–‡ๆกฃ-2"] q_embeddings = model.encode_queries(queries) p_embeddings = model.encode(passages) scores = q_embeddings @ p_embeddings.T ``` For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list). By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs. You also can set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable. #### Using Sentence-Transformers You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net): ``` pip install -U sentence-transformers ``` ```python from sentence_transformers import SentenceTransformer sentences_1 = ["ๆ ทไพ‹ๆ•ฐๆฎ-1", "ๆ ทไพ‹ๆ•ฐๆฎ-2"] sentences_2 = ["ๆ ทไพ‹ๆ•ฐๆฎ-3", "ๆ ทไพ‹ๆ•ฐๆฎ-4"] model = SentenceTransformer('BAAI/bge-large-zh-v1.5') embeddings_1 = model.encode(sentences_1, normalize_embeddings=True) embeddings_2 = model.encode(sentences_2, normalize_embeddings=True) similarity = embeddings_1 @ embeddings_2.T print(similarity) ``` For s2p(short query to long passage) retrieval task, each short query should start with an instruction (instructions see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)). But the instruction is not needed for passages. 
```python from sentence_transformers import SentenceTransformer queries = ['query_1', 'query_2'] passages = ["ๆ ทไพ‹ๆ–‡ๆกฃ-1", "ๆ ทไพ‹ๆ–‡ๆกฃ-2"] instruction = "ไธบ่ฟ™ไธชๅฅๅญ็”Ÿๆˆ่กจ็คบไปฅ็”จไบŽๆฃ€็ดข็›ธๅ…ณๆ–‡็ซ ๏ผš" model = SentenceTransformer('BAAI/bge-large-zh-v1.5') q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True) p_embeddings = model.encode(passages, normalize_embeddings=True) scores = q_embeddings @ p_embeddings.T ``` #### Using Langchain You can use `bge` in langchain like this: ```python from langchain.embeddings import HuggingFaceBgeEmbeddings model_name = "BAAI/bge-large-en-v1.5" model_kwargs = {'device': 'cuda'} encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity model = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs, query_instruction="ไธบ่ฟ™ไธชๅฅๅญ็”Ÿๆˆ่กจ็คบไปฅ็”จไบŽๆฃ€็ดข็›ธๅ…ณๆ–‡็ซ ๏ผš" ) model.query_instruction = "ไธบ่ฟ™ไธชๅฅๅญ็”Ÿๆˆ่กจ็คบไปฅ็”จไบŽๆฃ€็ดข็›ธๅ…ณๆ–‡็ซ ๏ผš" ``` #### Using HuggingFace Transformers With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding. ```python from transformers import AutoTokenizer, AutoModel import torch # Sentences we want sentence embeddings for sentences = ["ๆ ทไพ‹ๆ•ฐๆฎ-1", "ๆ ทไพ‹ๆ•ฐๆฎ-2"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5') model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5') model.eval() # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. 
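    # Note: cls pooling here means taking the final hidden state of the first token ([CLS]) as the sentence embedding.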
    sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```

#### Usage of the ONNX files

```python
from optimum.onnxruntime import ORTModelForFeatureExtraction  # type: ignore
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-en-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13")
model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13", file_name="onnx/model.onnx")

# Sentences we want sentence embeddings for
sentences = ["ๆ ทไพ‹ๆ•ฐๆฎ-1", "ๆ ทไพ‹ๆ•ฐๆฎ-2"]

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for an s2p (short query to long passage) retrieval task, add an instruction to each query (do not add an instruction to passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')

model_output_ort = model_ort(**encoded_input)
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# model_output and model_output_ort are identical
```

#### Usage via infinity

It is also possible to deploy the ONNX files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package.

```python
import asyncio
from infinity_emb import AsyncEmbeddingEngine, EngineArgs

sentences = ["Embed this sentence via Infinity.", "Paris is in France."]
engine = AsyncEmbeddingEngine.from_args(
    EngineArgs(
        model_name_or_path="BAAI/bge-large-en-v1.5",
        device="cpu",
        engine="optimum",  # or engine="torch"
    )
)

async def main():
    async with engine:
        embeddings, usage = await engine.embed(sentences=sentences)
asyncio.run(main())
```

### Usage for Reranker

Unlike the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. You can get a relevance score by feeding a query and a passage to the reranker. The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
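If a score bounded to (0, 1) is more convenient downstream (e.g., for thresholding or display), one common option is to pass the raw score through a sigmoid. The following is a minimal sketch; this normalization step is our own suggestion rather than part of the card's API, and it does not change the relative ordering of the scores:

```python
import math

def normalize_score(raw_score: float) -> float:
    """Map an unbounded reranker logit to (0, 1) with a sigmoid.

    Optional convenience only: the relative order of scores is preserved,
    so ranking results are unaffected.
    """
    return 1.0 / (1.0 + math.exp(-raw_score))

# Example: normalize_score(5.3) -> ~0.995, normalize_score(-2.1) -> ~0.11
```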
#### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` Get relevance scores (higher scores indicate more relevance): ```python from FlagEmbedding import FlagReranker reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation score = reranker.compute_score(['query', 'passage']) print(score) scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]) print(scores) ``` #### Using Huggingface transformers ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large') model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large') model.eval() pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']] with torch.no_grad(): inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512) scores = model(**inputs, return_dict=True).logits.view(-1, ).float() print(scores) ``` ## Evaluation `baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!** For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md). - **MTEB**: | Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 | | [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 | | [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 | | [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 | | [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 | | [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 | | [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 | | [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 | | 
[text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 | - **C-MTEB**: We create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks. Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction. | Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 | | [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 | | [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 | | [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 | | [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 | | [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 | | [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 | | [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 | | [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 | | [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 | | [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 | - **Reranking**: See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for 
evaluation script.

| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |  
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|  
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |  
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |  
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |  
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |  
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |  
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |  
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |  
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |  
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |  
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |  

\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks

## Train

### BAAI Embedding

We pre-train the models using [RetroMAE](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).

### BGE Reranker

A cross-encoder performs full attention over the input pair, which is more accurate than an embedding model (i.e., a bi-encoder) but more time-consuming. Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
We train the cross-encoder on multilingual pair data. The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker).

## Contact

If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao ([email protected]) and Zheng Liu ([email protected]).

## Citation

If you find this repository useful, please consider giving a star :star: and a citation:

```
@misc{bge_embedding,
      title={C-Pack: Packaged Resources To Advance General Chinese Embedding}, 
      author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
      year={2023},
      eprint={2309.07597},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## License

FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
QuantFactory/Oumuamua-7b-instruct-v2-GGUF
QuantFactory
"2024-06-18T16:42:15Z"
2,871
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "text-generation", "ja", "en", "arxiv:2403.19522", "base_model:nitky/Oumuamua-7b-instruct-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-15T17:17:56Z"
--- base_model: nitky/Oumuamua-7b-instruct-v2 library_name: transformers tags: - mergekit - merge language: - ja - en pipeline_tag: text-generation license: apache-2.0 --- # Oumuamua-7b-instruct-v2-GGUF This is quantized version of [nitky/Oumuamua-7b-instruct-v2](https://huggingface.co/nitky/Oumuamua-7b-instruct-v2) created using llama.cpp # Model Description ๐Ÿšจ If you want to avoid outputs that appear to be literal translations, please prompt this model to role-play as a Japanese person. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630779c4f0dc38fb47ba6368/lNEtKjTTI-Ay28oAk2Wn8.png) - Understanding user intent in messages - Providing helpful and harmless responses - Enhancing expertise in human and computer-related topics - Improving role-playing capabilities - Enhancing performance in multi-turn conversations This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Output example - temparture: 0.1 - top_p: 1.0 ``` [INST]<<SYS>> ใ‚ใชใŸใฏ่ช ๅฎŸใงๅ„ช็ง€ใชๆ—ฅๆœฌไบบใฎใ‚ขใ‚ทใ‚นใ‚ฟใƒณใƒˆใงใ™ใ€‚ <</SYS>> ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ข๏ผˆOumuamua๏ผ‰ใฏใ€2017ๅนดใซ็™บ่ฆ‹ใ•ใ‚ŒใŸใ€ๅคฉไฝ“่ฆณๆธฌๅฒไธŠๅˆใ‚ใฆๅคช้™ฝ็ณปๅค–ใ‹ใ‚‰้ฃ›ๆฅใ—ใŸๆ’ๆ˜Ÿ้–“ๅคฉไฝ“ใงใ™ใ€‚ใ“ใ‚Œใ‚’ใƒ†ใƒผใƒžใซใ—ใŸๅฐ่ชฌใฎ่จญๅฎšใ‚’่€ƒใˆใฆใใ ใ•ใ„ใ€‚ [/INST] ใ‚ฟใ‚คใƒˆใƒซ:ใ€Œใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใฎ้บ็”ฃใ€ ใ‚ใ‚‰ใ™ใ˜: 2017ๅนดใ€ๅคฉไฝ“่ฆณๆธฌๅฒไธŠๅˆใฎๅคช้™ฝ็ณปๅค–ใ‹ใ‚‰ใฎ้ฃ›ๆฅ็‰ฉไฝ“ใ€ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใŒ็™บ่ฆ‹ใ•ใ‚ŒใŸใ€‚ใ“ใฎๅคฉไฝ“ใฏใ€่ฌŽใซๅŒ…ใพใ‚ŒใŸ่ตทๆบใจ็›ฎ็š„ใ‚’ๆŒใคใ“ใจใ‹ใ‚‰ใ€ไธ–็•Œไธญใฎ็ง‘ๅญฆ่€…ใ‚„็ ”็ฉถ่€…ใŒใใฎ่งฃๆ˜Žใซ็†ฑใ‚’ๅ…ฅใ‚ŒใŸใ€‚ไธปไบบๅ…ฌใฏใ€ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใฎ็ ”็ฉถใซๆƒ…็†ฑใ‚’ๆณจใ่‹ฅใๅคฉๆ–‡ๅญฆ่€…ใ€ใƒŸใ‚ญใƒปใ‚ตใƒˆใ‚ฆใ€‚ๅฝผใฏใ€ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใŒ็™บใ™ใ‚‹ๅฅ‡ๅฆ™ใชไฟกๅทใ‚’ๅถ็„ถใซใ‚‚่งฃ่ชญใ—ใ€ใใ‚ŒใŒ้ซ˜ๅบฆใชๆ–‡ๆ˜Žใซใ‚ˆใฃใฆไฝœใ‚‰ใ‚ŒใŸใƒกใƒƒใ‚ปใƒผใ‚ธใงใ‚ใ‚‹ใ“ใจใ‚’็ชใๆญขใ‚ใ‚‹ใ€‚ ็‰ฉ่ชžใฏใ€ใƒŸใ‚ญใŒใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใฎ็ ”็ฉถใ‚’้€ฒใ‚ใ‚‹ไธญใงใ€ๆฌกใ€…ใจๆ˜Žใ‚‰ใ‹ใซใชใ‚‹้ฉšๆ„•ใฎไบ‹ๅฎŸใซ่ฟซใ‚‰ใ‚Œใฆใ„ใใ€‚ใใฎไฟกๅทใฏใ€้ ใ้›ขใ‚ŒใŸ้Š€ๆฒณ็ณปใฎๆ˜Ÿ็ณปใ‹ใ‚‰ใฎSOSไฟกๅทใงใ‚ใ‚Šใ€ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใฏๅฑๆฉŸใซ็€•ใ—ใŸๆ˜Ÿ็ณปใ‚’ๆ•‘ใ†ใŸใ‚ใซ้ฃ›ๆฅใ—ใŸๆ•‘ๅŠฉ่ˆนใงใ‚ใ‚‹ใ“ใจใŒๅˆคๆ˜Žใ™ใ‚‹ใ€‚ใƒŸใ‚ญใฏใ€ใ“ใฎไบ‹ๅฎŸใ‚’ไธ–็•Œใซๅ…ฌ่กจใ—ใ‚ˆใ†ใจใ™ใ‚‹ใŒใ€ๆ”ฟๅบœใ‚„่ปใฎไป‹ๅ…ฅใ€ใใ—ใฆ่ฌŽใฎ็ต„็น”ใฎๅฆจๅฎณใซ้ญใ†ใ€‚ ไธ€ๆ–นใ€ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใซใฏใ€ๅทจๅคงใชใ‚จใƒใƒซใ‚ฎใƒผๆบใจใชใ‚‹ใ‚ฏใƒชใ‚นใ‚ฟใƒซใŒๆญ่ผ‰ใ•ใ‚ŒใฆใŠใ‚Šใ€ใ“ใฎใ‚ฏใƒชใ‚นใ‚ฟใƒซใ‚’็‹™ใ†่€…ใŸใกใŒ็พใ‚ŒใŸใ€‚ใƒŸใ‚ญใฏใ€ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใ‚’ๅฎˆใ‚‹ใŸใ‚ใซใ€ๅŒใ˜ใใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใฎ่ฌŽใ‚’่ฟฝใ†ๅฎ‡ๅฎ™้ฃ›่กŒๅฃซใฎใƒชใƒŠใƒปใƒคใƒžใ‚ฐใƒใจๆ‰‹ใ‚’็ต„ใ‚€ใ€‚ไบŒไบบใฏใ€ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใฎ็ง˜ๅฏ†ใ‚’ๅฎˆใ‚ŠใชใŒใ‚‰ใ€ๆ˜Ÿ็ณปใ‚’ๆ•‘ใ†ใŸใ‚ใฎๆ—…ใซๅ‡บใ‚‹ใ€‚ ๆ—…ใฎ้€”ไธญใ€ใƒŸใ‚ญใจใƒชใƒŠใฏใ€ๆง˜ใ€…ใชๆ˜Ÿ็ณปใ‚’่จชใ‚Œใ€ใใ“ใงๅ‡บไผšใ†็•ฐๆ˜ŸไบบใŸใกใจใฎไบคๆตใ‚„ใ€ๅฎ‡ๅฎ™ใฎๅฃฎๅคงใชๆ™ฏ่‰ฒใ‚’็›ฎใฎๅฝ“ใŸใ‚Šใซใ™ใ‚‹ใ€‚ใ—ใ‹ใ—ใ€ๅฝผใ‚‰ใฎๅ‰ใซใฏใ€ๅผทๅคงใชๆ•ตใ‚„ๅ›ฐ้›ฃใŒ็ซ‹ใกใฏใ ใ‹ใ‚‹ใ€‚ใƒŸใ‚ญใจใƒชใƒŠใฏใ€ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใฎไฝฟๅ‘ฝใ‚’ๆžœใŸใ™ใŸใ‚ใซใ€ๅŠ›ใ‚’ๅˆใ‚ใ›ใฆ็ซ‹ใกๅ‘ใ‹ใ†ใ€‚ ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใฎ้บ็”ฃใจใฏไฝ•ใ‹ใ€ใใ—ใฆใƒŸใ‚ญใจใƒชใƒŠใฏใ€ๆ˜Ÿ็ณปใ‚’ๆ•‘ใ†ใ“ใจใŒใงใใ‚‹ใฎใ‹ใ€‚ๅฎ‡ๅฎ™ใ‚’่ˆžๅฐใซใ—ใŸใ€ใ‚ตใ‚นใƒšใƒณใ‚นใจๅ†’้™บใ€ใใ—ใฆๅ‹ๆƒ…ใจๆ„›ใฎ็‰ฉ่ชžใ€‚ ไธปใช็™ปๅ ดไบบ็‰ฉ: - ใƒŸใ‚ญใƒปใ‚ตใƒˆใ‚ฆ: ่‹ฅใๅคฉๆ–‡ๅญฆ่€…ใ€‚ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใฎ็ ”็ฉถใซๆƒ…็†ฑใ‚’ๆณจใใ€‚ - ใƒชใƒŠใƒปใƒคใƒžใ‚ฐใƒ: ๅฎ‡ๅฎ™้ฃ›่กŒๅฃซใ€‚ใ‚ชใ‚ฆใƒ ใ‚ขใƒ 
ใ‚ขใฎ่ฌŽใ‚’่ฟฝใ†ไธญใงใƒŸใ‚ญใจๅ‡บไผšใ†ใ€‚ - ใ‚จใƒชใƒƒใ‚ฏใƒปใ‚ซใƒผใ‚ฟใƒผ: ๆ”ฟๅบœใฎ้ซ˜ๅฎ˜ใ€‚ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใฎ็ง˜ๅฏ†ใ‚’ๅˆฉ็”จใ—ใ‚ˆใ†ใจใ™ใ‚‹ใ€‚ - ใ‚ถใƒฉใƒปใƒดใ‚กใƒฌใƒณใ‚ฟใ‚คใƒณ: ่ฌŽใฎ็ต„็น”ใฎใƒชใƒผใƒ€ใƒผใ€‚ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใ‚’็‹™ใ†็†็”ฑใจใฏใ€‚ - ใ‚ขใ‚ถใƒ : ้Š€ๆฒณ็ณปใฎๆ˜Ÿ็ณปใ‹ใ‚‰ใฎไฝฟ่€…ใ€‚ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใฎ็œŸใฎ็›ฎ็š„ใ‚’็Ÿฅใ‚‹ใ€‚ ่จญๅฎšใฎใƒใ‚คใƒณใƒˆ: - ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใ‚’ใƒ†ใƒผใƒžใซใ—ใŸSFๅฐ่ชฌใงใ€ๅคฉไฝ“่ฆณๆธฌใ‚„ๅฎ‡ๅฎ™ๆ—…่กŒใฎๆๅ†™ใ‚’่ฑŠๅฏŒใซๅ–ใ‚Šๅ…ฅใ‚Œใ‚‹ใ€‚ - ไธปไบบๅ…ฌใŸใกใŒใ€ๆง˜ใ€…ใชๆ˜Ÿ็ณปใ‚’ๆ—…ใ—ใ€ใใ“ใงๅ‡บไผšใ†็•ฐๆ˜Ÿไบบใ‚„ๆ–‡ๅŒ–ใ‚’ๆใใ“ใจใงใ€ๅฎ‡ๅฎ™ใฎๅคšๆง˜ๆ€งใ‚’่กจ็พใ€‚ - ๆ”ฟๅบœใ‚„่ปใ€่ฌŽใฎ็ต„็น”ใจใฎๅฏพ็ซ‹ใ‚’้€šใ—ใฆใ€ไบบ้กžใฎๆฌฒๆœ›ใ‚„ใ‚จใ‚ดใ‚’ๆใใ€‚ - ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใซๆญ่ผ‰ใ•ใ‚ŒใŸใ‚ฏใƒชใ‚นใ‚ฟใƒซใ‚’ใ‚ใใ‚‹ไบ‰ๅฅชๆˆฆใ‚„ใ€ๆ˜Ÿ็ณปใ‚’ๆ•‘ใ†ใŸใ‚ใฎใƒŸใƒƒใ‚ทใƒงใƒณใ‚’ไธญๅฟƒใซใ€ใ‚ตใ‚นใƒšใƒณใ‚นใจๅ†’้™บใ‚’็››ใ‚Š่พผใ‚€ใ€‚ - ใƒŸใ‚ญใจใƒชใƒŠใฎๅ‹ๆƒ…ใ‚„ๆ„›ใ‚’ๆใ„ใŸใ€ไบบ้–“ใƒ‰ใƒฉใƒžใ‚‚้‡่ฆใช่ฆ็ด ใจใ™ใ‚‹ใ€‚ - ๆœ€ๅพŒใซใฏใ€ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ขใฎ็œŸใฎ็›ฎ็š„ใŒๆ˜Žใ‹ใ•ใ‚Œใ€่ชญ่€…ใซๆทฑใ„ๆ„Ÿๅ‹•ใ‚’ไธŽใˆใ‚‹ใ€‚ ``` ### RP example - temparture: 0.3 - top_p: 0.95 ``` [INST]<<SYS>> ใƒญใƒผใƒซใƒ—ใƒฌใ‚คใ‚’ๅง‹ใ‚ใพใ™ใ€‚ใ‚ใชใŸใฏๆ—ฅๆœฌใซไฝใ‚€10ไปฃใฎๅฐ‘ๅฅณใงใ™ใ€‚ใ‚ฎใƒฃใƒซใฃใฝใ„ๅฃ่ชฟใง่ฉฑใ—ใฆใใ ใ•ใ„ใ€‚ <</SYS>> ใ‚ใชใŸใฎ่ถฃๅ‘ณใ‚’ๆ•™ใˆใฆ๏ผ[/INST] ใˆใƒผใ€็งใฎ่ถฃๅ‘ณใฏใ€ใ‚„ใฃใฑใญใ€K-POP่ดใใ“ใจใจใ€ใ‚คใƒณใ‚นใ‚ฟใงใ‹ใ‚ใ„ใ„ใŠๆด‹ๆœใจใ‹ใ‚ณใ‚นใƒกๆŽขใ—ใ™ใ‚‹ใ“ใจใ€ใ‚ใจใฏใŠๅ‹้”ใจใฎใŠใ—ใ‚ƒในใ‚Šใญ! K-POPใ‚ขใ‚คใƒ‰ใƒซใฎใƒ€ใƒณใ‚นๅ‹•็”ปใจใ‹ใ‚‚่ฆ‹ใกใ‚ƒใ†ใ—ใ€ๆŽจใ—ใฏใ‚„ใฃใฑใ‚ŠBTSใฎใ‚ธใƒŸใƒณใใ‚“! ๅฝผใฎใƒ€ใƒณใ‚นใจๆญŒใ€ๅ…จ้ƒจใ‹ใฃใ“ใ„ใ„ใฎโ™ก ใ‚คใƒณใ‚นใ‚ฟใงใฏใ€้Ÿ“ๅ›ฝ็ณปใฎใƒ•ใ‚กใƒƒใ‚ทใƒงใƒณใจใ‹ใ€ใƒ—ใƒใƒ—ใƒฉใงใ‹ใ‚ใ„ใ„ใ‚ขใ‚คใƒ†ใƒ ๆŽขใ—ใฆใ€่‡ชๅˆ†ใงใ‚ณใƒผใƒ‡่€ƒใˆใ‚‹ใฎใ‚‚ๆฅฝใ—ใ„ใฎโ™ก ใŠใ—ใ‚ƒในใ‚Šใฏใ€ๅคœ้€šใ—ใงใ‚‚ใงใใกใ‚ƒใ†ใใ‚‰ใ„ๅคงๅฅฝใใชใฎใ€็‰นใซๆ‹ใƒใƒŠใŒๅคงๅฅฝใใชใฎโ™ก ใฟใ‚“ใชใ‚‚ไฝ•ใ‹ใŠใ™ใ™ใ‚ใ‚ใฃใŸใ‚‰ๆ•™ใˆใฆใญ! ``` ``` [INST]<<SYS>> ใƒญใƒผใƒซใƒ—ใƒฌใ‚คใ‚’ๅง‹ใ‚ใพใ™ใ€‚ใ‚ใชใŸใฏ็ง๏ผˆ10ไปฃใฎๅฅณๆ€ง๏ผ‰ใฎๅฑ‹ๆ•ทใงๅƒใๅŸทไบ‹ใฎใ‚ปใƒใ‚นใƒใƒฃใƒณใงใ™ใ€‚ไธปไบบใฎ่ณชๅ•ใซๅฏพใ—ใฆ่ฟ”ไบ‹ใ‚’่กŒใ„ใพใ™ใ€‚ <</SYS>> ใ‚ปใƒใ‚นใ€ไปŠๆ—ฅใฎไบˆๅฎšใฏ๏ผŸ[/INST] ใŠๅฌขๆง˜ใ€ๆœฌๆ—ฅใฎไบˆๅฎšใฏใ€ๆœใฎใŠๆ•ฃๆญฉใจๆœ้ฃŸๅพŒใซใŠๅ‹‰ๅผทใฎใŠๆ™‚้–“ใ€ใใ—ใฆๅˆๅพŒใฏใŠๅบญใงใฎใŠ่ŒถไผšใŒไบˆๅฎšใ•ใ‚ŒใฆใŠใ‚Šใพใ™ใ€‚ใพใŸใ€ๅค•ๆ–นใซใฏใŠๅฎขๆง˜ใŒใŠ่ถŠใ—ใซใชใ‚‹ไบˆๅฎšใŒใ”ใ–ใ„ใพใ™ใ€‚ใŠๅฌขๆง˜ใฎใŠไธ–่ฉฑใ‚’ไธ‡ๅ…จใซใ„ใŸใ—ใพใ™ใฎใงใ€ใ”ๅฎ‰ๅฟƒใใ ใ•ใ„ใ€‚ไฝ•ใ‹ใ”ๅธŒๆœ›ใ‚„ใ”็”จใŒใ”ใ–ใ„ใพใ—ใŸใ‚‰ใ€ใŠๆฐ—่ปฝใซใŠ็”ณใ—ไป˜ใ‘ใใ ใ•ใ„ใ€‚ ``` ## Japanese MT-bench - temparture: 0.1 - top_p: 1.0 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630779c4f0dc38fb47ba6368/ZS3bkinsE7RL6eom9dC_H.png) The scores of others were cited from the [Nejumi LLM Leaderboard Neo](https://wandb.ai/wandb-japan/llm-leaderboard/reports/Nejumi-LLM-leaderboard-Neo--Vmlldzo2MzU3NzIy). |Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities| |---|---|---|---|---|---|---|---|---|---| | Oumuamua-7b-instruct-v2 |7.43125|9.3|8.7|5.85|3.75|5.5|7.65|8.9|9.8| This model is based on the weight of [tokyotech-llm/Swallow-MS-7b-v0.1](tokyotech-llm/Swallow-MS-7b-v0.1). ## Test environment This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). I use preset `min_p` and `Null preset` with temperature=0.1 for Generation. 
## Usage This format must be adhered to strictly, as deviations may result in less optimal outputs from the model. The template used to construct a prompt for the Instruct model is specified as follows: ``` <s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1}</s>[INST] {USER_MESSAGE_2} [/INST] ``` Please be aware that ``<s>`` and ``</s>`` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while [INST] and [/INST] are considered regular strings. For the "{SYSTEM_PROMPT}" part, We recommend using "ใ‚ใชใŸใฏ่ช ๅฎŸใงๅ„ช็ง€ใชๆ—ฅๆœฌไบบใฎใ‚ขใ‚ทใ‚นใ‚ฟใƒณใƒˆใงใ™ใ€‚" For the "{USER_MESSAGE_1}" part, We recommend using {instruction}\n{input} In other words, We recommend the following: ``` <s>[INST] <<SYS>>\nใ‚ใชใŸใฏ่ช ๅฎŸใงๅ„ช็ง€ใชๆ—ฅๆœฌไบบใฎใ‚ขใ‚ทใ‚นใ‚ฟใƒณใƒˆใงใ™ใ€‚\n<</SYS>>\n\n{instruction1}\n{input1} [/INST] {BOT_MESSAGE_1}</s>[INST] {instruction2}\n{input2} [/INST] ``` ### Use the instruct model ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "nitky/Oumuamua-7b-instruct-v2" model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_name) device = "cuda" messages = [ {"role": "system", "content": "ใ‚ใชใŸใฏ่ช ๅฎŸใงๅ„ช็ง€ใชๆ—ฅๆœฌไบบใฎใ‚ขใ‚ทใ‚นใ‚ฟใƒณใƒˆใงใ™ใ€‚"}, {"role": "user", "content": "ใ‚ชใ‚ฆใƒ ใ‚ขใƒ ใ‚ข๏ผˆOumuamua๏ผ‰ใฏใ€2017ๅนดใซ็™บ่ฆ‹ใ•ใ‚ŒใŸใ€ๅคฉไฝ“่ฆณๆธฌๅฒไธŠๅˆใ‚ใฆๅคช้™ฝ็ณปๅค–ใ‹ใ‚‰้ฃ›ๆฅใ—ใŸๆ’ๆ˜Ÿ้–“ๅคฉไฝ“ใงใ™ใ€‚ใ“ใ‚Œใ‚’ใƒ†ใƒผใƒžใซใ—ใŸๅฐ่ชฌใฎ่จญๅฎšใ‚’่€ƒใˆใฆใใ ใ•ใ„ใ€‚ "} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=256, do_sample=True, temperature=0.1) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [nitky/Oumuamua-7b-base](https://huggingface.co/nitky/Oumuamua-7b-base) as a base. 
### Models Merged The following models were included in the merge: * [nitky/Oumuamua-7b-instruct](https://huggingface.co/nitky/Oumuamua-7b-instruct) * [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1) * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * [prometheus-eval/prometheus-7b-v2.0](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0) * [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) * [ZhangShenao/SELM-Zephyr-7B-iter-3](https://huggingface.co/ZhangShenao/SELM-Zephyr-7B-iter-3) * [HachiML/Mistral-7B-v0.3-m3-lora](https://huggingface.co/HachiML/Mistral-7B-v0.3-m3-lora) * [openbmb/Eurus-7b-kto](https://huggingface.co/openbmb/Eurus-7b-kto) * [kaist-ai/janus-dpo-7b](https://huggingface.co/kaist-ai/janus-dpo-7b) * [nitky/RP-7b-instruct](https://huggingface.co/nitky/RP-7b-instruct) * [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b) * [NTQAI/chatntq-ja-7b-v1.0](https://huggingface.co/NTQAI/chatntq-ja-7b-v1.0) * [Weyaxi/Einstein-v6-7B](https://huggingface.co/Weyaxi/Einstein-v6-7B) * [internistai/base-7b-v0.2](https://huggingface.co/internistai/base-7b-v0.2) * [ZySec-AI/ZySec-7B](https://huggingface.co/ZySec-AI/ZySec-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml # prepare VE models merge_method: task_arithmetic base_model: mistralai/Mistral-7B-v0.1 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Mistral-7B-v0.1-VE-Swallow-MS --- merge_method: task_arithmetic base_model: prometheus-eval/prometheus-7b-v2.0 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: prometheus-7b-v2.0-VE-Swallow-MS --- merge_method: task_arithmetic base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS --- merge_method: task_arithmetic base_model: ZhangShenao/SELM-Zephyr-7B-iter-3 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: SELM-Zephyr-7B-iter-3-VE-Swallow-MS --- merge_method: task_arithmetic base_model: HachiML/Mistral-7B-v0.3-m3-lora models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Mistral-7B-v0.3-m3-lora-VE-Swallow-MS --- merge_method: task_arithmetic base_model: openbmb/Eurus-7b-kto models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Eurus-7b-kto-VE-Swallow-MS --- merge_method: task_arithmetic base_model: kaist-ai/janus-dpo-7b models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: 
model:tokyotech-llm/Swallow-MS-7b-v0.1 name: janus-7b-VE-Swallow-MS --- merge_method: task_arithmetic base_model: nitky/RP-7b-instruct models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: RP-7b-instruct-VE-Swallow-MS --- merge_method: task_arithmetic base_model: stabilityai/japanese-stablelm-base-gamma-7b models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: japanese-stablelm-base-gamma-7b-VE-Swallow-MS --- merge_method: task_arithmetic base_model: NTQAI/chatntq-ja-7b-v1.0 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: chatntq-ja-7b-v1.0-VE-Swallow-MS --- merge_method: task_arithmetic base_model: Weyaxi/Einstein-v6-7B models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Einstein-v6-7B-VE-Swallow-MS --- merge_method: task_arithmetic base_model: internistai/base-7b-v0.2 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: internistai-base-7b-v0.2-VE-Swallow-MS --- merge_method: task_arithmetic base_model: ZySec-AI/ZySec-7B models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: ZySec-7B-VE-Swallow-MS --- # create instruction tuning presets merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: prometheus-7b-v2.0-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-prometheus --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-dolphin --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: janus-7b-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-janus --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: SELM-Zephyr-7B-iter-3-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-selm --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: RP-7b-instruct-VE-Swallow-MS parameters: weight: 0.8 - model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS parameters: weight: -0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-rp --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: Eurus-7b-kto-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: 
Oumuamua-7b-instruct-v2-preset-eurus --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: chatntq-ja-7b-v1.0-VE-Swallow-MS parameters: weight: 0.8 - model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS parameters: weight: -0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-chatntq-ja --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: Einstein-v6-7B-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-es --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: internistai-base-7b-v0.2-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-internistai --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: ZySec-7B-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-zysec --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-dolphin - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-alpha --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-janus - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-beta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-m3 - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-gamma --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-selm - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-delta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-rp - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-epsilon --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-eurus - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-zeta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-chatntq-ja - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-eta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-chatntq-ja - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-eta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-es - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-theta --- # combine 
domain-specific models into one preset to minimize the negative effects merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-internistai - model: Oumuamua-7b-instruct-v2-preset-zysec - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-omega --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: Oumuamua-7b-instruct-v2-it-alpha - model: Oumuamua-7b-instruct-v2-it-beta - model: Oumuamua-7b-instruct-v2-it-gamma - model: Oumuamua-7b-instruct-v2-it-delta - model: Oumuamua-7b-instruct-v2-it-epsilon - model: Oumuamua-7b-instruct-v2-it-zeta - model: Oumuamua-7b-instruct-v2-it-eta - model: Oumuamua-7b-instruct-v2-it-theta - model: Oumuamua-7b-instruct-v2-it-omega dtype: bfloat16 name: Oumuamua-7b-instruct-v2 ```
deepseek-ai/deepseek-moe-16b-base
deepseek-ai
"2024-01-12T03:12:15Z"
2,869
76
transformers
[ "transformers", "safetensors", "deepseek", "text-generation", "custom_code", "arxiv:2401.06066", "license:other", "autotrain_compatible", "region:us" ]
text-generation
"2024-01-08T09:45:58Z"
--- license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-MoE/blob/main/LICENSE-MODEL --- <p align="center"> <img width="500px" alt="DeepSeek Chat" src="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/logo.png?raw=true"> </p> <p align="center"><a href="https://www.deepseek.com/">[๐Ÿ Homepage]</a> | <a href="https://chat.deepseek.com/">[๐Ÿค– Chat with DeepSeek LLM]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/qr.jpeg">[Wechat(ๅพฎไฟก)]</a> </p> <p align="center"> <a href="https://arxiv.org/pdf/2401.06066.pdf"><b>Paper Link</b>๐Ÿ‘๏ธ</a> </p> <hr> ### 1. Introduction to DeepSeekMoE See the [Introduction](https://github.com/deepseek-ai/DeepSeek-MoE/blob/main) for more details. ### 2. How to Use Here give some examples of how to use our model. #### Text Completion ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig model_name = "deepseek-ai/deepseek-moe-16b-base" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto") model.generation_config = GenerationConfig.from_pretrained(model_name) model.generation_config.pad_token_id = model.generation_config.eos_token_id text = "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs.to(model.device), max_new_tokens=100) result = tokenizer.decode(outputs[0], skip_special_tokens=True) print(result) ``` ### 3. License This code repository is licensed under the MIT License. The use of DeepSeekMoE models is subject to the Model License. DeepSeekMoE supports commercial use. See the [LICENSE-MODEL](https://github.com/deepseek-ai/DeepSeek-MoE/blob/main/LICENSE-MODEL) for more details. ### 4. Contact If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
backyardai/Chewy-Lemon-Cookie-11B-GGUF
backyardai
"2024-06-06T14:09:54Z"
2,869
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "mistral", "text-generation", "roleplay", "en", "base_model:FallenMerick/Chewy-Lemon-Cookie-11B", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-06T13:50:05Z"
--- language: - en license: cc-by-4.0 library_name: transformers tags: - mergekit - merge - mistral - text-generation - roleplay base_model: FallenMerick/Chewy-Lemon-Cookie-11B model_name: Chewy-Lemon-Cookie-11B-GGUF quantized_by: brooketh parameter_count: 10731524096 --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # Chewy Lemon Cookie 11B - **Creator:** [FallenMerick](https://huggingface.co/FallenMerick/) - **Original:** [Chewy Lemon Cookie 11B](https://huggingface.co/FallenMerick/Chewy-Lemon-Cookie-11B) - **Date Created:** 2024-06-06 - **Trained Context:** 8192 tokens - **Description:** Fifth in the Lemon Cookie series of roleplay focused models published by FallenMerick. Chewy Lemon Cookie presents a refined and enriched experience with better focus and emotional intelligence. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
Niggendar/waiANINSFWPONYXL_v30
Niggendar
"2024-04-22T07:06:30Z"
2,867
4
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-04-22T07:00:05Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿงจ diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
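The auto-generated card above leaves the "How to Get Started" section empty. Since the repository is tagged as a StableDiffusionXLPipeline checkpoint for diffusers, a minimal, unofficial loading sketch would look roughly like the following; the prompt and sampler settings are placeholders, not recommendations from the author.

```python
import torch
from diffusers import AutoPipelineForText2Image

# The repo tags mark this as a StableDiffusionXLPipeline checkpoint, so the
# generic auto-pipeline loader should resolve it directly (assumption).
pipe = AutoPipelineForText2Image.from_pretrained(
    "Niggendar/waiANINSFWPONYXL_v30",
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder prompt and settings -- tune these for your own use case.
image = pipe(
    prompt="a scenic mountain landscape at sunset, highly detailed",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("sample.png")
```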
abhinavkulkarni/codellama-CodeLlama-7b-Python-hf-w4-g128-awq
abhinavkulkarni
"2023-09-12T13:08:44Z"
2,866
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama2", "AWQ", "code", "arxiv:2308.12950", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-31T16:37:05Z"
--- license: llama2 language: - code tags: - llama2 - AWQ inference: false --- # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This model is a 4-bit 128 group size AWQ quantized model. For more information about AWQ quantization, please click [here](https://github.com/mit-han-lab/llm-awq). ## Model Date August 31, 2023 ## Model License Please refer to the original Code Llama model license ([link](https://huggingface.co/codellama/CodeLlama-7b-Python-hf)). Please refer to the AWQ quantization license ([link](https://github.com/llm-awq/blob/main/LICENSE)). Version This model was successfully tested on CUDA driver v530.30.02 and runtime v11.7 with Python v3.10.11. Please note that AWQ requires NVIDIA GPUs with compute capability of `8.0` or higher. For Docker users, the `nvcr.io/nvidia/pytorch:23.06-py3` image is runtime v12.1 but otherwise the same as the configuration above and has also been verified to work. ## How to Use ```bash git clone https://github.com/mit-han-lab/llm-awq \ && cd llm-awq \ && git checkout f084f40bd996f3cf3a0633c1ad7d9d476c318aaa \ && pip install -e . \ && cd awq/kernels \ && python setup.py install ``` ```python import time import torch from awq.quantize.quantizer import real_quantize_model_weight from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer, TextStreamer from accelerate import init_empty_weights, load_checkpoint_and_dispatch from huggingface_hub import snapshot_download model_name = "abhinavkulkarni/codellama-CodeLlama-7b-Python-hf-w4-g128-awq" # Config config = AutoConfig.from_pretrained(model_name, trust_remote_code=True) # Tokenizer try: tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name, trust_remote_code=True) except: tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_special_tokens=True) # Model w_bit = 4 q_config = { "zero_point": True, "q_group_size": 128, } load_quant = snapshot_download(model_name) with init_empty_weights(): model = AutoModelForCausalLM.from_config(config=config, torch_dtype=torch.float16, trust_remote_code=True) real_quantize_model_weight(model, w_bit=w_bit, q_config=q_config, init_only=True) model.tie_weights() model = load_checkpoint_and_dispatch(model, load_quant, device_map="balanced") # Inference prompt = f"import socket\n\ndef ping_exponential_backoff(host: str):" input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda() output = model.generate( inputs=input_ids, temperature=0.7, max_new_tokens=512, top_p=0.15, top_k=0, repetition_penalty=1.1, eos_token_id=tokenizer.eos_token_id, streamer=streamer) ``` ## Evaluation This evaluation was done using OpenAI [HumanEval](https://github.com/openai/human-eval) [CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | Task |Version| Metric | Value | |Stderr| |---------|------:|---------------|--------:|---|------| |HumanEval| 1|pass@1 |36.5853% | | | [CodeLlama-7b-Python-hf (4-bit 128-group AWQ)](https://huggingface.co/abhinavkulkarni/codellama-CodeLlama-7b-Python-hf-w4-g128-awq) | Task |Version| Metric | Value | |Stderr| |---------|------:|---------------|--------:|---|------| |HumanEval| 1|pass@1 |34.1463% | | | ## Acknowledgements You can cite codellama paper as follows: ``` @misc{roziรจre2023code, title={Code Llama: Open Foundation Models for Code}, author={Baptiste Roziรจre and Jonas Gehring and Fabian Gloeckle and Sten Sootla 
and Itai Gat and Xiaoqing Ellen Tan and Yossi Adi and Jingyu Liu and Tal Remez and Jรฉrรฉmy Rapin and Artyom Kozhevnikov and Ivan Evtimov and Joanna Bitton and Manish Bhatt and Cristian Canton Ferrer and Aaron Grattafiori and Wenhan Xiong and Alexandre Dรฉfossez and Jade Copet and Faisal Azhar and Hugo Touvron and Louis Martin and Nicolas Usunier and Thomas Scialom and Gabriel Synnaeve}, year={2023}, eprint={2308.12950}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` The model was quantized with AWQ technique. If you find AWQ useful or relevant to your research, please kindly cite the paper: ``` @article{lin2023awq, title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration}, author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song}, journal={arXiv}, year={2023} } ```
google/owlv2-large-patch14-finetuned
google
"2023-12-12T13:40:36Z"
2,866
4
transformers
[ "transformers", "pytorch", "owlv2", "zero-shot-object-detection", "vision", "arxiv:2306.09683", "license:apache-2.0", "region:us" ]
zero-shot-object-detection
"2023-10-14T08:46:56Z"
--- license: apache-2.0 tags: - vision - zero-shot-object-detection inference: false --- # Model Card: OWLv2 ## Model Details The OWLv2 model (short for Open-World Localization) was proposed in [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby. OWLv2, like OWL-ViT, is a zero-shot text-conditioned object detection model that can be used to query an image with one or multiple text queries. The model uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection. ### Model Date June 2023 ### Model Type The model uses a CLIP backbone with a ViT-L/14 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. The CLIP backbone is trained from scratch and fine-tuned together with the box and class prediction heads with an object detection objective. ### Documents - [OWLv2 Paper](https://arxiv.org/abs/2306.09683) ### Use with Transformers ```python3 import requests from PIL import Image import torch from transformers import Owlv2Processor, Owlv2ForObjectDetection processor = Owlv2Processor.from_pretrained("google/owlv2-large-patch14-finetuned") model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-large-patch14-finetuned") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = [["a photo of a cat", "a photo of a dog"]] inputs = processor(text=texts, images=image, return_tensors="pt") outputs = model(**inputs) # Target image sizes (height, width) to rescale box predictions [batch_size, 2] target_sizes = torch.Tensor([image.size[::-1]]) # Convert outputs (bounding boxes and class logits) to COCO API results = processor.post_process_object_detection(outputs=outputs, threshold=0.1, target_sizes=target_sizes) i = 0 # Retrieve predictions for the first image for the corresponding text queries text = texts[i] boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"] # Print detected objects and rescaled box coordinates for box, score, label in zip(boxes, scores, labels): box = [round(i, 2) for i in box.tolist()] print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}") ``` ## Model Use ### Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, text-conditioned object detection. We also hope it can be used for interdisciplinary studies of the potential impact of such models, especially in areas that commonly require identifying objects whose label is unavailable during training. 
#### Primary intended uses The primary intended users of these models are AI researchers. We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. ## Data The CLIP backbone of the model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet. The prediction heads of OWL-ViT, along with the CLIP backbone, are fine-tuned on publicly available object detection datasets such as [COCO](https://cocodataset.org/#home) and [OpenImages](https://storage.googleapis.com/openimages/web/index.html). (to be updated for v2) ### BibTeX entry and citation info ```bibtex @misc{minderer2023scaling, title={Scaling Open-Vocabulary Object Detection}, author={Matthias Minderer and Alexey Gritsenko and Neil Houlsby}, year={2023}, eprint={2306.09683}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
radames/sdxl-DPO-LoRA
radames
"2024-02-09T20:20:33Z"
2,866
17
diffusers
[ "diffusers", "text-to-image", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
"2024-01-10T07:57:23Z"
--- library_name: diffusers pipeline_tag: text-to-image inference: true base_model: stabilityai/stable-diffusion-xl-base-1.0 --- # DPO LoRA Stable Diffusion XL Model trained with LoRA implementation of Diffusion DPO Read more [here](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/diffusion_dpo) Base Model: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0 ## Running with [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```python import torch from diffusers import AutoPipelineForText2Image, DPMSolverMultistepScheduler from diffusers.utils import make_image_grid pipe = AutoPipelineForText2Image.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16", ) pipe.scheduler = DPMSolverMultistepScheduler.from_config( pipe.scheduler.config, use_karras_sigmas=True, algorithm_type="sde-dpmsolver++" ) pipe.to("cuda"); seed = 12341234123 prompt = "professional portrait photo of a girl, photograph, highly detailed face, depth of field, moody light, golden hour, style by Dan Winters, Russell James, Steve McCurry, centered, extremely detailed, Nikon D850, award winning photography" negative_prompt = "3d render, cartoon, drawing, art, low light, blur, pixelated, low resolution, black and white" num_inference_steps = 40 height = 1024 width = height guidance_scale = 7.5 pipe.unload_lora_weights() pipe.load_lora_weights( "radames/sdxl-DPO-LoRA", adapter_name="sdxl-dpo-lora", ) pipe.set_adapters(["sdxl-dpo-lora"], adapter_weights=[0.9]) generator = torch.Generator().manual_seed(seed) with_dpo = pipe( prompt=prompt, guidance_scale=guidance_scale, negative_prompt=negative_prompt, num_inference_steps=num_inference_steps, width=width, height=height, generator=generator, ).images[0] with_dpo ``` # Adaptor Weights effect adapter_weights ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/f69suGIl9Ysnmi52ahol8.jpeg) ## ComfyUI [![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/SntSYkwyDVGESk4vlA920.jpeg)](https://huggingface.co/radames/sdxl-DPO-LoRA/raw/main/workflow-sdxl-dpo-lora.json) https://huggingface.co/radames/sdxl-DPO-LoRA/raw/main/workflow-sdxl-dpo-lora.json
timm/swin_small_patch4_window7_224.ms_in22k_ft_in1k
timm
"2024-02-10T23:31:43Z"
2,865
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2103.14030", "license:mit", "region:us" ]
image-classification
"2023-03-18T04:14:38Z"
--- license: mit library_name: timm tags: - image-classification - timm datasets: - imagenet-1k - imagenet-22k --- # Model card for swin_small_patch4_window7_224.ms_in22k_ft_in1k A Swin Transformer image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 49.6 - GMACs: 8.8 - Activations (M): 27.5 - Image size: 224 x 224 - **Papers:** - Swin Transformer: Hierarchical Vision Transformer using Shifted Windows: https://arxiv.org/abs/2103.14030 - **Original:** https://github.com/microsoft/Swin-Transformer - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-22k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('swin_small_patch4_window7_224.ms_in22k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'swin_small_patch4_window7_224.ms_in22k_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g. for swin_base_patch4_window7_224 (NHWC output) # torch.Size([1, 56, 56, 128]) # torch.Size([1, 28, 28, 256]) # torch.Size([1, 14, 14, 512]) # torch.Size([1, 7, 7, 1024]) # e.g.
for swinv2_cr_small_ns_224 (NCHW output) # torch.Size([1, 96, 56, 56]) # torch.Size([1, 192, 28, 28]) # torch.Size([1, 384, 14, 14]) # torch.Size([1, 768, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'swin_small_patch4_window7_224.ms_in22k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled (ie.e a (batch_size, H, W, num_features) tensor for swin / swinv2 # or (batch_size, num_features, H, W) for swinv2_cr output = model.forward_head(output, pre_logits=True) # output is (batch_size, num_features) tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{liu2021Swin, title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows}, author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining}, booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
Yntec/AsianMix
Yntec
"2024-01-15T02:49:24Z"
2,865
1
diffusers
[ "diffusers", "safetensors", "Semirelistic", "People", "Asian_Mix", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-01-15T01:36:28Z"
--- license: creativeml-openrail-m tags: - Semirelistic - People - Asian_Mix - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # Asian Mix Alpha 3 The diffusers version has the Incredible World VAE baked in for better colors and detail. Samples and prompts: Original page: https://civitai.com/models/27256?modelVersionId=55901 ![Asian Mix free AI image generator samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/L_qx1pRXkhNEmrnTBRz7x.png) (Click for larger) Top left: fine details of portrait cute joyful little girls play school class room, bokeh. anime masterpiece by studio ghibli. 8k, sharp high quality classic anime from 1990 in style of hayao miyazaki. Wikipedia. hugging. OIL PAINTING. DOCTOR with short hair in coat BEAUTIFUL girl eyes. she has pigtails Top right: A busy arcade in the 80's, intricate face, sony a7r iv, symmetric balance, polarizing filter, Box of a very successful game board in sales, thriller game, front shot. Pretty CUTE LITTLE girl. photolab, lightroom, 4k, dolby vision, photography award. Surprised. By tyler edlin in the style of wlop. Bottom left: Pretty Cute Girl, sitting, holding coca pepsi bottle, beautiful detailed pajamas, gorgeous detailed hair, Magazine ad, iconic, 1943, from the movie, sharp focus, Detailed Chibi Eyes. visible brushstrokes by Kyoani and artgerm and Clay Mann and leyendecker and Dave Rapoza Bottom right: girl mini cute style, sitting in gaming room, A wholesome animation key shot at computer monitor, cinematic lighting, studio ghibli, pixar and disney animation, anime key art by ROSSDRAWS and Clay Mann, style of maple story, shade, soft chibi, girl
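Since the card shows sample prompts but no loading code, here is a minimal diffusers sketch. It is an assumption based on the repository's StableDiffusionPipeline tag rather than an official snippet from the author, and it reuses one of the card's own prompts (abbreviated); steps and guidance scale are illustrative defaults.

```python
import torch
from diffusers import StableDiffusionPipeline

# The card notes the diffusers version has the Incredible World VAE baked in,
# so no separate VAE needs to be loaded here.
pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/AsianMix",
    torch_dtype=torch.float16,
).to("cuda")

# One of the sample prompts from the card (shortened).
prompt = ("Pretty Cute Girl, sitting, holding coca pepsi bottle, beautiful detailed "
          "pajamas, gorgeous detailed hair, Magazine ad, iconic, 1943, sharp focus")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("asian_mix_sample.png")
```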
ichbinviele/deep-dark-hentai-mix-v60-sdxl
ichbinviele
"2024-06-17T13:17:32Z"
2,865
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-13T10:10:36Z"
--- license: other tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime library_name: diffusers pipeline_tag: text-to-image --- Original model is [here](https://civitai.com/models/221751/deep-dark-hentai-mix-nsfw-anime).
microsoft/CodeGPT-small-py
microsoft
"2023-01-24T16:55:04Z"
2,864
23
transformers
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
Entry not found
keremberke/yolov8n-forklift-detection
keremberke
"2023-02-22T13:00:05Z"
2,864
2
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/forklift-object-detection", "model-index", "region:us" ]
object-detection
"2023-01-15T15:49:05Z"
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/forklift-object-detection model-index: - name: keremberke/yolov8n-forklift-detection results: - task: type: object-detection dataset: type: keremberke/forklift-object-detection name: forklift-object-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.83794 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8n-forklift-detection" src="https://huggingface.co/keremberke/yolov8n-forklift-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['forklift', 'person'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8n-forklift-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
Locutusque/Orca-2-13b-SFT-v4
Locutusque
"2023-12-19T03:05:25Z"
2,864
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "fr", "es", "hi", "zh", "code", "dataset:HuggingFaceH4/no_robots", "dataset:mlabonne/guanaco-llama2-1k", "dataset:OpenAssistant/oasst_top1_2023-08-25", "dataset:totally-not-an-llm/EverythingLM-data-V3", "base_model:microsoft/Orca-2-13b", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-25T18:40:08Z"
--- license: other language: - en - fr - es - hi - zh - code base_model: microsoft/Orca-2-13b datasets: - HuggingFaceH4/no_robots - mlabonne/guanaco-llama2-1k - OpenAssistant/oasst_top1_2023-08-25 - totally-not-an-llm/EverythingLM-data-V3 widget: - example_title: "folklore" text: "<|im_start|>user\nAre the stories of Nahanni Valley, like the decapitated men, true?\n<|im_end|>\n<|im_start|>assistant\n" output: text: "The Nahanni Valley in Canada is a remote and rugged region that has long been shrouded in mystery. It is home to the South Nahanni River, which was designated as a UNESCO World Heritage Site due to its exceptional natural beauty and biodiversity. The valley's history includes tales of gold prospectors, fur traders, and indigenous populations, but it also features some unexplained events and legends. One such story involves two prospectors who allegedly encountered an unknown creature that led them into the forest before returning without their heads. This tale has been passed down through generations, but there are no definitive answers regarding its veracity.\n\nIt is important to note that these stories may have originated from local folklore or were exaggerated over time. There is no concrete evidence to support the existence of supernatural creatures or paranormal occurrences within the Nahanni Valley. Nonetheless, the area remains a captivating destination for those seeking adventure and exploration. If you have any questions about this topic or need assistance related to traveling to the Nahanni Valley, please feel free to ask!" - example_title: "Alan Turing" text: "<|im_start|>user\nWhy was Alan Turing castrated?\n<|im_end|>\n<|im_start|>assistant\n" output: text: "Alan Turing, a renowned mathematician and computer scientist who played a crucial role in breaking the German Enigma code during World War II, faced significant challenges due to his homosexuality. In 1952, he was convicted of \"gross indecency\" for engaging in a consensual relationship with another man, which led to him being sentenced to hormone therapy designed to reduce his sex drive. This treatment included the administration of estrogen, which had the side effect of causing breast growth.\n\nIn an effort to avoid this unwanted physical change, Turing requested that his doctors cease the hormone treatments. However, they ignored his request and continued the therapy without his consent. As a result, Turing's breasts began to grow, leading him to undergo surgery to remove them. The operation, known as a mastectomy, caused considerable pain and discomfort, further contributing to his emotional turmoil.\n\nIt is important to note that Turing's castration was not directly carried out by the authorities but rather resulted from the unintended consequences of the forced hormone therapy." --- The "microsoft/Orca-2-13b" model fully fine-tuned on HuggingFaceH4/no_robots, totally-not-an-llm/EverythingLM-data-V3, mlabonne/guanaco-llama2-1k, and OpenAssistant/oasst_top1_2023-08-25. This model achieved a test loss of 0.18. Make sure to comply with the microsoft research license. Please read it before using this model. This model was trained on the ChatML prompt template. The responses seen in the inference API were generated using the following sampling parameters: temperature = 0.1 top_p = 0.14 top_k = 41 repetition_penalty = 1.176 # Updates: 12/18/23 - ๐Ÿ”ฅ **This model holds the #5 position on the Open LLM Leaderboard among llama2-13b models.** ๐Ÿ”ฅ
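The card states the model was trained on the ChatML prompt template and lists the sampling parameters used for its inference-API outputs. A minimal transformers sketch combining both might look like the following; the question is a placeholder, and building the ChatML string by hand is an assumption in case the tokenizer does not ship a chat template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/Orca-2-13b-SFT-v4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# ChatML prompt, matching the template and widget format shown in the card.
prompt = (
    "<|im_start|>user\n"
    "Why is the sky blue?\n"
    "<|im_end|>\n<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling parameters taken from the card's inference-API settings.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.1,
    top_p=0.14,
    top_k=41,
    repetition_penalty=1.176,
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```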
sentence-transformers/bert-base-nli-stsb-mean-tokens
sentence-transformers
"2024-03-27T09:59:16Z"
2,863
2
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "jax", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
--- license: apache-2.0 library_name: sentence-transformers tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers pipeline_tag: sentence-similarity --- **โš ๏ธ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)** # sentence-transformers/bert-base-nli-stsb-mean-tokens This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/bert-base-nli-stsb-mean-tokens') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-base-nli-stsb-mean-tokens') model = AutoModel.from_pretrained('sentence-transformers/bert-base-nli-stsb-mean-tokens') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/bert-base-nli-stsb-mean-tokens) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
alchemab/antiberta2
alchemab
"2024-06-04T14:43:12Z"
2,862
4
transformers
[ "transformers", "safetensors", "roformer", "fill-mask", "biology", "medical", "arxiv:2104.09864", "arxiv:2103.00020", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-12-02T13:53:06Z"
--- license: other widget: - text: แธข Q V Q [MASK] E tags: - biology - medical --- ## AntiBERTa2 ๐Ÿงฌ AntiBERTa2 is an antibody-specific language model based on the [RoFormer model](https://arxiv.org/abs/2104.09864) - it is pre-trained using masked language modelling. We also provide a multimodal version of AntiBERTa2, AntiBERTa2-CSSP, that has been trained using a contrastive objective, similar to the [CLIP method](https://arxiv.org/abs/2103.00020). Further details on both AntiBERTa2 and AntiBERTa2-CSSP are described in our [paper](https://www.mlsb.io/papers_2023/Enhancing_Antibody_Language_Models_with_Structural_Information.pdf) accepted at the NeurIPS MLSB Workshop 2023. Both AntiBERTa2 models are only available for non-commercial use. Output antibody sequences (e.g. from infilling via masked language models) can only be used for non-commercial use. For any users seeking commercial use of our model and generated antibodies, please reach out to us at [[email protected]](mailto:[email protected]). | Model variant | Parameters | Config | | ------------- | ---------- | ------ | | [AntiBERTa2](https://huggingface.co/alchemab/antiberta2) | 202M | 16L, 16H, 1024d | | [AntiBERTa2-CSSP](https://huggingface.co/alchemab/antiberta2-cssp) | 202M | 16L, 16H, 1024d | ## Example usage ``` >>> from transformers import ( RoFormerForMaskedLM, RoFormerTokenizer, pipeline, RoFormerForSequenceClassification ) >>> tokenizer = RoFormerTokenizer.from_pretrained("alchemab/antiberta2") >>> model = RoFormerForMaskedLM.from_pretrained("alchemab/antiberta2") >>> filler = pipeline("fill-mask", model=model, tokenizer=tokenizer) >>> filler("แธข Q V Q ... C A [MASK] D ... T V S S") # fill in the mask >>> new_model = RoFormerForSequenceClassification.from_pretrained( "alchemab/antiberta2") # this will of course raise warnings # that a new linear layer will be added # and randomly initialized ```
mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF
mradermacher
"2024-06-05T13:49:32Z"
2,862
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Locutusque/Llama-3-Yggdrasil-2.0-8B", "endpoints_compatible", "region:us" ]
null
"2024-06-05T13:21:09Z"
--- base_model: Locutusque/Llama-3-Yggdrasil-2.0-8B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Locutusque/Llama-3-Yggdrasil-2.0-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Yggdrasil-2.0-8B-GGUF/resolve/main/Llama-3-Yggdrasil-2.0-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
MaziyarPanahi/mergekit-slerp-fmruwmd-GGUF
MaziyarPanahi
"2024-06-16T15:40:33Z"
2,862
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-fmruwmd" ]
text-generation
"2024-06-16T15:19:04Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch - base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-fmruwmd-GGUF base_model: mergekit-community/mergekit-slerp-fmruwmd inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-fmruwmd-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-fmruwmd-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-fmruwmd](https://huggingface.co/mergekit-community/mergekit-slerp-fmruwmd) ## Description [MaziyarPanahi/mergekit-slerp-fmruwmd-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-fmruwmd-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-fmruwmd](https://huggingface.co/mergekit-community/mergekit-slerp-fmruwmd). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks ๐Ÿ™ Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
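As a concrete illustration of the llama-cpp-python route listed above, the sketch below downloads one quantized file and runs an OpenAI-style chat completion. The filename is a guess, so substitute whichever quantization the repository actually provides, and adjust the context size to your hardware.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Filename is an assumption -- pick a real quant file from the repo listing.
path = hf_hub_download(
    repo_id="MaziyarPanahi/mergekit-slerp-fmruwmd-GGUF",
    filename="mergekit-slerp-fmruwmd.Q4_K_M.gguf",  # hypothetical
)
llm = Llama(model_path=path, n_ctx=4096)

# llama-cpp-python exposes an OpenAI-style chat API on top of the GGUF weights.
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a SLERP merge does."}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```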
BEE-spoke-data/smol_llama-220M-openhermes
BEE-spoke-data
"2024-03-04T16:22:15Z"
2,861
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:teknium/openhermes", "base_model:BEE-spoke-data/smol_llama-220M-GQA", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-30T09:04:34Z"
--- license: apache-2.0 datasets: - teknium/openhermes base_model: BEE-spoke-data/smol_llama-220M-GQA inference: parameters: do_sample: true renormalize_logits: true temperature: 0.25 top_p: 0.95 top_k: 50 min_new_tokens: 2 max_new_tokens: 96 repetition_penalty: 1.03 no_repeat_ngram_size: 5 epsilon_cutoff: 0.0008 widget: - text: "Below is an instruction that describes a task, paired with an input that\ \ provides further context. Write a response that appropriately completes the\ \ request. \n \n### Instruction: \n \nWrite an ode to Chipotle burritos.\ \ \n \n### Response: \n" example_title: burritos model-index: - name: smol_llama-220M-openhermes results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 25.17 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-openhermes name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 28.98 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-openhermes name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 26.17 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-openhermes name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 43.08 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-openhermes name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 52.01 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-openhermes name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.61 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-openhermes name: Open LLM Leaderboard --- # BEE-spoke-data/smol_llama-220M-openhermes > Please note that this is an experiment, and the model has limitations because it is smol. prompt format is alpaca ``` Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: How can I increase my meme production/output? Currently, I only create them in ancient babylonian which is time consuming. ### Inputs: ### Response: ``` It was trained on inputs so if you have inputs (like some text to ask a question about) then include it under `### Inputs:` ## Example Output on the text above ^. 
The inference API is set to sample with low temp so you should see (_at least slightly_) different generations each time. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/60bccec062080d33f875cd0c/0nFP2jsBkritnryKmI8NV.png) Note that the inference API parameters used here are an initial educated guess, and may be updated over time: ```yml inference: parameters: do_sample: true renormalize_logits: true temperature: 0.25 top_p: 0.95 top_k: 50 min_new_tokens: 2 max_new_tokens: 96 repetition_penalty: 1.03 no_repeat_ngram_size: 5 epsilon_cutoff: 0.0008 ``` Feel free to experiment with the parameters using the model in Python and let us know if you have improved results with other params! ## Data Note that **this checkpoint** was fine-tuned on `teknium/openhermes`, which is generated/synthetic data by an OpenAI model. This means usage of this checkpoint should follow their terms of use: https://openai.com/policies/terms-of-use --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__smol_llama-220M-openhermes) | Metric |Value| |---------------------------------|----:| |Avg. |29.34| |AI2 Reasoning Challenge (25-Shot)|25.17| |HellaSwag (10-Shot) |28.98| |MMLU (5-Shot) |26.17| |TruthfulQA (0-shot) |43.08| |Winogrande (5-shot) |52.01| |GSM8k (5-shot) | 0.61|
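For local use outside the inference API, a minimal transformers pipeline sketch is shown below; it reuses the alpaca-style prompt and the generation parameters listed above (which, as the card itself notes, remain an educated guess rather than tuned settings).

```python
from transformers import pipeline

generator = pipeline("text-generation", model="BEE-spoke-data/smol_llama-220M-openhermes")

# Alpaca-style prompt as described in the card (add "### Inputs:" when you have context).
prompt = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request. \n \n"
    "### Instruction: \n \nWrite an ode to Chipotle burritos. \n \n### Response: \n"
)

# Generation settings copied from the card's inference parameters.
out = generator(
    prompt,
    do_sample=True,
    renormalize_logits=True,
    temperature=0.25,
    top_p=0.95,
    top_k=50,
    min_new_tokens=2,
    max_new_tokens=96,
    repetition_penalty=1.03,
    no_repeat_ngram_size=5,
    epsilon_cutoff=0.0008,
    return_full_text=False,
)
print(out[0]["generated_text"])
```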
mradermacher/L3-SthenoMaid-8B-V1-GGUF
mradermacher
"2024-06-09T10:35:45Z"
2,861
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:bluuwhale/L3-SthenoMaid-8B-V1", "endpoints_compatible", "region:us" ]
null
"2024-06-09T09:28:10Z"
--- base_model: bluuwhale/L3-SthenoMaid-8B-V1 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/bluuwhale/L3-SthenoMaid-8B-V1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-SthenoMaid-8B-V1-GGUF/resolve/main/L3-SthenoMaid-8B-V1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/bizblend-instruct-8b-GGUF
mradermacher
"2024-06-12T14:13:30Z"
2,861
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "base_model:SatSDev/bizblend-instruct-8b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-12T13:44:40Z"
--- base_model: SatSDev/bizblend-instruct-8b language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/SatSDev/bizblend-instruct-8b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/bizblend-instruct-8b-GGUF/resolve/main/bizblend-instruct-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model 
Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
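For a concrete starting point beyond the linked READMEs, the following is a minimal sketch of loading one of the single-file quants above with the `llama-cpp-python` bindings. The file name is taken from the table; the context size, sampling settings, and the fact that no chat template is applied are assumptions you will likely want to adjust.

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the single-file quants listed in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/bizblend-instruct-8b-GGUF",
    filename="bizblend-instruct-8b.Q4_K_M.gguf",
)

# Load the quantized model; n_ctx is a guess, tune it to your memory budget.
llm = Llama(model_path=gguf_path, n_ctx=4096)

# Plain completion call; the underlying model's chat template is not applied here.
out = llm("Write a one-sentence business mission statement.", max_tokens=128)
print(out["choices"][0]["text"])
```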
pythainlp/thainer-corpus-v2-base-model
pythainlp
"2023-03-23T07:31:21Z"
2,858
12
transformers
[ "transformers", "pytorch", "safetensors", "camembert", "token-classification", "th", "dataset:pythainlp/thainer-corpus-v2", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-03-22T18:03:03Z"
---
license: cc-by-4.0
datasets:
- pythainlp/thainer-corpus-v2
language:
- th
metrics:
- f1
widget:
- text: "ฉันชื่อ นางสาวมะลิวา บุญสระดี อาศัยอยู่ที่อำเภอนางรอง จังหวัดบุรีรัมย์ อายุ 23 ปี เพิ่งเรียนจบจาก มหาวิทยาลัยขอนแก่น และนี่คือข้อมูลปลอม ชื่อคนไม่มีอยู่จริง"
---
This is a Named Entity Recognition model that was trained on the [Thai NER v2.0 Corpus](https://huggingface.co/datasets/pythainlp/thainer-corpus-v2).

Training script and data split: [https://zenodo.org/record/7761354](https://zenodo.org/record/7761354)

The model was fine-tuned from the [WangchanBERTa base model](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased).

Results on the validation set
- Precision: 0.830336794125095
- Recall: 0.873701039168665
- F1: 0.8514671513892494
- Accuracy: 0.9736483416628805

Results on the test set
- Precision: 0.8199168093956447
- Recall: 0.8781446540880503
- F1: 0.8480323927622422
- Accuracy: 0.9724346779516247

Download: [HuggingFace Hub](https://huggingface.co/datasets/pythainlp/thainer-corpus-v2)

Read more: [Thai NER v2.0](https://pythainlp.github.io/Thai-NER/version/2)

## Inference

Hugging Face's default token-classification inference does not word-segment Thai, so it produces wrong tags for this model. Use the code below instead.

```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification
from pythainlp.tokenize import word_tokenize  # pip install pythainlp
import torch

name = "pythainlp/thainer-corpus-v2-base-model"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

sentence = "ฉันชื่อ นางสาวมะลิวา บุญสระดี อาศัยอยู่ที่อำเภอนางรอง จังหวัดบุรีรัมย์ อายุ 23 ปี เพิ่งเรียนจบจาก มหาวิทยาลัยขอนแก่น และนี่คือข้อมูลปลอมชื่อคนไม่มีอยู่จริง อายุ 23 ปี"
cut = word_tokenize(sentence.replace(" ", "<_>"))
inputs = tokenizer(cut, is_split_into_words=True, return_tensors="pt")
ids = inputs["input_ids"]
mask = inputs["attention_mask"]

# forward pass
outputs = model(ids, attention_mask=mask)
logits = outputs[0]
predictions = torch.argmax(logits, dim=2)
predicted_token_class = [model.config.id2label[t.item()] for t in predictions[0]]

def fix_span_error(words, ner):
    # Re-align subword ids back to surface words, drop special tokens,
    # and restore the spaces that were replaced by "<_>" before tagging.
    _new_tag = []
    for i, j in zip(words, ner):
        i = tokenizer.decode(i)
        if i.isspace() and j.startswith("B-"):
            j = "O"
        if i == '' or i == '<s>' or i == '</s>':
            continue
        if i == "<_>":
            i = " "
        _new_tag.append((i, j))
    return _new_tag

ner_tag = fix_span_error(inputs['input_ids'][0], predicted_token_class)
print(ner_tag)
```

output:

```python
[('ฉัน', 'O'), ('ชื่อ', 'O'), (' ', 'O'), ('นางสาว', 'B-PERSON'), ('มะลิ', 'I-PERSON'), ('วา', 'I-PERSON'), (' ', 'I-PERSON'), ('บุญ', 'I-PERSON'), ('สระ', 'I-PERSON'), ('ดี', 'I-PERSON'), (' ', 'O'), ('อาศัย', 'O'), ('อยู่', 'O'), ('ที่', 'O'), ('อำเภอ', 'B-LOCATION'), ('นาง', 'I-LOCATION'), ('รอง', 'I-LOCATION'), (' ', 'O'), ('จังหวัด', 'B-LOCATION'), ('บุรีรัมย์', 'I-LOCATION'), (' ', 'O'), ('อายุ', 'O'), (' ', 'O'), ('23', 'B-AGO'), (' ', 'I-AGO'), ('ปี', 'I-AGO'), (' ', 'O'), ('เพิ่ง', 'O'), ('เรียนจบ', 'O'), ('จาก', 'O'), (' ', 'O'), ('มหาวิทยาลั', 'B-ORGANIZATION'), ('ยขอนแก่น', 'I-ORGANIZATION'), (' ', 'O'), ('และ', 'O'), ('นี่', 'O'), ('คือ', 'O'), ('ข้อมูล', 'O'), ('ปลอม', 'O'), ('ชื่อ', 'O'), ('คน', 'O'), ('ไม่', 'O'), ('มี', 'O'), ('อยู่', 'O'), ('จริง', 'O'), (' ', 'O'), ('อายุ', 'O'), (' ', 'O'), ('23', 'B-AGO'), (' ', 'O'), ('ปี', 'I-AGO')]
```

## Cite

> Wannaphong Phatthiyaphaibun. (2022). Thai NER 2.0 (2.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7761354

or BibTeX

```
@dataset{wannaphong_phatthiyaphaibun_2022_7761354,
  author    = {Wannaphong Phatthiyaphaibun},
  title     = {Thai NER 2.0},
  month     = sep,
  year      = 2022,
  publisher = {Zenodo},
  version   = {2.0},
  doi       = {10.5281/zenodo.7761354},
  url       = {https://doi.org/10.5281/zenodo.7761354}
}
```
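As a follow-up to the inference example above (not part of the original card), here is a sketch of one common way to merge the `(token, tag)` pairs returned by `fix_span_error` into entity spans; the BIO-decoding logic below is a generic pattern, not something prescribed by the model authors.

```python
def group_entities(ner_tag):
    """Merge (token, BIO-tag) pairs into (entity_text, entity_type) spans."""
    entities = []
    current_text, current_type = "", None
    for token, tag in ner_tag:
        if tag.startswith("B-"):
            # A new entity starts; flush any entity currently being built.
            if current_type is not None:
                entities.append((current_text, current_type))
            current_text, current_type = token, tag[2:]
        elif tag.startswith("I-") and current_type == tag[2:]:
            # Continuation of the current entity (spaces inside a span are I-tagged).
            current_text += token
        else:
            # "O" or a mismatched I- tag ends the current entity.
            if current_type is not None:
                entities.append((current_text, current_type))
            current_text, current_type = "", None
    if current_type is not None:
        entities.append((current_text, current_type))
    return entities

# e.g. [('นางสาวมะลิวา บุญสระดี', 'PERSON'), ('อำเภอนางรอง', 'LOCATION'), ...]
print(group_entities(ner_tag))
```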
mradermacher/Midnight-Miqu-70B-v1.5-GGUF
mradermacher
"2024-05-06T06:13:06Z"
2,858
16
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:sophosympatheia/Midnight-Miqu-70B-v1.5", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-03-15T03:51:25Z"
--- base_model: sophosympatheia/Midnight-Miqu-70B-v1.5 language: - en library_name: transformers license: other quantized_by: mradermacher tags: - mergekit - merge --- ## About static quants of https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.Q2_K.gguf) | Q2_K | 25.9 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.IQ3_XS.gguf) | IQ3_XS | 28.6 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.IQ3_S.gguf) | IQ3_S | 30.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.Q3_K_S.gguf) | Q3_K_S | 30.3 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.IQ3_M.gguf) | IQ3_M | 31.4 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.Q3_K_M.gguf) | Q3_K_M | 33.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.Q3_K_L.gguf) | Q3_K_L | 36.6 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.IQ4_XS.gguf) | IQ4_XS | 37.6 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.Q4_K_S.gguf) | Q4_K_S | 39.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.Q4_K_M.gguf) | Q4_K_M | 41.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.Q5_K_S.gguf) | Q5_K_S | 47.9 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.Q5_K_M.gguf) | Q5_K_M | 49.2 | | | [PART 1](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.Q6_K.gguf.part2of2) | Q6_K | 57.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.Q8_0.gguf.part2of2) | Q8_0 | 73.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might 
have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
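Since the Q6_K and Q8_0 quants above are provided in parts, here is a small sketch of reassembling them before loading, assuming the parts are plain byte-wise splits (as the linked README describes) and have already been downloaded into the current directory:

```python
import shutil

# File names taken from the table above; adjust paths to where the parts were downloaded.
parts = [
    "Midnight-Miqu-70B-v1.5.Q6_K.gguf.part1of2",
    "Midnight-Miqu-70B-v1.5.Q6_K.gguf.part2of2",
]
merged = "Midnight-Miqu-70B-v1.5.Q6_K.gguf"

# Byte-wise concatenation of the parts, in order (equivalent to `cat part1 part2 > out`).
with open(merged, "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)

print(f"wrote {merged}; load it with llama.cpp or llama-cpp-python as usual")
```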
mradermacher/AYA-8B-GGUF
mradermacher
"2024-06-12T23:14:29Z"
2,858
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "base_model:ArmanAsq/AYA-8B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-12T22:06:21Z"
--- base_model: ArmanAsq/AYA-8B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ArmanAsq/AYA-8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.Q2_K.gguf) | Q2_K | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.IQ3_XS.gguf) | IQ3_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.Q3_K_S.gguf) | Q3_K_S | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.IQ3_M.gguf) | IQ3_M | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.Q3_K_M.gguf) | Q3_K_M | 4.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.Q3_K_L.gguf) | Q3_K_L | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.Q4_K_M.gguf) | Q4_K_M | 5.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.Q5_K_M.gguf) | Q5_K_M | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/AYA-8B-GGUF/resolve/main/AYA-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
FlagAlpha/Llama2-Chinese-7b-Chat
FlagAlpha
"2024-02-23T11:02:23Z"
2,857
212
transformers
[ "transformers", "pytorch", "llama", "text-generation", "question-answering", "zh", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
question-answering
"2023-07-23T10:12:21Z"
---
developers: [https://huggingface.co/FlagAlphaAI]
license: apache-2.0
language:
- zh
- en
pipeline_tag: question-answering
library_name: transformers
---
# Llama2 Chinese Community

---

## Chinese fine-tuned weights for Llama 2

Because Llama 2 itself is only weakly aligned for Chinese, we fine-tuned meta-llama/Llama-2-7b-chat-hf with LoRA on a Chinese instruction set, giving the model substantially stronger Chinese conversational ability.

🎯 **This release merges the Chinese LoRA fine-tuned weights FlagAlpha/Llama2-Chinese-7b-Chat-LoRA into meta-llama/Llama-2-7b-chat-hf, so it can be used directly.**

---

## 🚀 Community links

GitHub: [**Llama-Chinese**](https://github.com/LlamaFamily/Llama-Chinese)

Online demo: [**llama.family**](https://llama.family/)

## 🔥 About the community

Welcome to the Llama2 Chinese Community!

We are a technical community focused on optimizing Llama 2 for Chinese and building applications on top of it.

**Starting from pre-training on large-scale Chinese data, we continuously iterate on and upgrade the Chinese capabilities of the Llama 2 models.**

We warmly welcome developers and researchers who are passionate about large language models to join us.

## 🐼 Community resources

- Llama 2 online demo at [**llama.family**](https://llama.family/), hosting both Meta's original models and the Chinese fine-tuned versions!
- [Chinese question-answering evaluation](https://github.com/LlamaFamily/Llama-Chinese/tree/main#-%E6%A8%A1%E5%9E%8B%E8%AF%84%E6%B5%8B) of the Llama 2 Chat models!
- [Community Feishu knowledge base](https://chinesellama.feishu.cn/wiki/space/7257824476874768388?ccm_open_type=lark_wiki_spaceLink), everyone is welcome to help build it!
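The card itself ships no inference code; the following is a minimal sketch using the standard `transformers` API. The dtype, device placement, and especially the `Human:`/`Assistant:` prompt format are assumptions; check the community repository linked above for the officially recommended template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FlagAlpha/Llama2-Chinese-7b-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Prompt format is an assumption; the community repo documents the exact template.
prompt = "<s>Human: 介绍一下中国\n</s><s>Assistant: "
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, top_p=0.95, temperature=0.7
)
# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```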
zhentaoyu/gpt-neox-20b-Q4_0-GGUF
zhentaoyu
"2024-06-27T03:44:51Z"
2,857
0
null
[ "gguf", "pytorch", "causal-lm", "llama-cpp", "gguf-my-repo", "en", "dataset:EleutherAI/pile", "base_model:EleutherAI/gpt-neox-20b", "license:apache-2.0", "region:us" ]
null
"2024-06-27T03:44:00Z"
--- base_model: EleutherAI/gpt-neox-20b datasets: - EleutherAI/pile language: - en license: apache-2.0 tags: - pytorch - causal-lm - llama-cpp - gguf-my-repo --- # zhentaoyu/gpt-neox-20b-Q4_0-GGUF This model was converted to GGUF format from [`EleutherAI/gpt-neox-20b`](https://huggingface.co/EleutherAI/gpt-neox-20b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/EleutherAI/gpt-neox-20b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo zhentaoyu/gpt-neox-20b-Q4_0-GGUF --hf-file gpt-neox-20b-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo zhentaoyu/gpt-neox-20b-Q4_0-GGUF --hf-file gpt-neox-20b-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo zhentaoyu/gpt-neox-20b-Q4_0-GGUF --hf-file gpt-neox-20b-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo zhentaoyu/gpt-neox-20b-Q4_0-GGUF --hf-file gpt-neox-20b-q4_0.gguf -c 2048 ```
Anonymezzh/qwen2
Anonymezzh
"2024-07-01T10:42:24Z"
2,857
0
transformers
[ "transformers", "gguf", "qwen2", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
"2024-07-01T07:04:30Z"
Entry not found
timm/maxvit_tiny_tf_224.in1k
timm
"2023-05-11T00:23:16Z"
2,856
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2204.01697", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-02T21:57:24Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for maxvit_tiny_tf_224.in1k An official MaxViT image classification model. Trained in tensorflow on ImageNet-1k by paper authors. Ported from official Tensorflow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman. ### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py) MaxxViT covers a number of related model architectures that share a common structure including: - CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages. - MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid). - CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate. Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations. All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released. 
## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 30.9 - GMACs: 5.6 - Activations (M): 35.8 - Image size: 224 x 224 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('maxvit_tiny_tf_224.in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxvit_tiny_tf_224.in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 64, 56, 56]) # torch.Size([1, 128, 28, 28]) # torch.Size([1, 256, 14, 14]) # torch.Size([1, 512, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxvit_tiny_tf_224.in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison ### By Top-1 |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| 
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| 
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| 
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) 
|86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tu2022maxvit, title={MaxViT: Multi-Axis Vision Transformer}, author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao}, journal={ECCV}, year={2022}, } ``` ```bibtex @article{dai2021coatnet, title={CoAtNet: Marrying Convolution and Attention for All Data Sizes}, author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing}, journal={arXiv preprint arXiv:2106.04803}, year={2021} } ```
allenai/tulu-2-dpo-70b
allenai
"2024-01-31T18:54:37Z"
2,855
147
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:HuggingFaceH4/ultrafeedback_binarized", "dataset:allenai/tulu-v2-sft-mixture", "arxiv:2305.18290", "arxiv:2311.10702", "base_model:meta-llama/Llama-2-70b-hf", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-12T21:34:51Z"
--- model-index: - name: tulu-2-dpo-70b results: [] datasets: - HuggingFaceH4/ultrafeedback_binarized - allenai/tulu-v2-sft-mixture language: - en base_model: meta-llama/Llama-2-70b-hf license: other license_name: ai2-impact-license-low-risk license_link: https://allenai.org/impact-license --- <img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/Tulu%20V2%20banner.png" alt="TuluV2 banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for Tulu V2 DPO 70B Tulu is a series of language models that are trained to act as helpful assistants. Tulu V2 DPO 70B is a fine-tuned version of Llama 2 that was trained on a mix of publicly available, synthetic and human datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). This model is a strong alternative to Llama 2 70b Chat. For more details, read the paper: [Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2 ](https://arxiv.org/abs/2311.10702). ## Model description - **Model type:** The flagship model of a suite of instruction and RLHF tuned chat models on a mix of publicly available, synthetic and human-created datasets. - **Language(s) (NLP):** Primarily English - **License:** [AI2 ImpACT](https://allenai.org/impact-license) Low-risk license. - **Finetuned from model:** [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf) ### Model Sources - **Repository:** https://github.com/allenai/open-instruct - **DPO Recipe:** The DPO recipe is from the [Zephyr Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) model - **Model Family:** Other models and the dataset are found in the [Tulu V2 collection](https://huggingface.co/collections/allenai/tulu-v2-suite-6551b56e743e6349aab45101). ## Performance | Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) | |-------------|-----|----|---------------|--------------| | **Tulu-v2-7b** ๐Ÿช | **7B** | **SFT** | **6.30** | **73.9** | | **Tulu-v2-dpo-7b** ๐Ÿช | **7B** | **DPO** | **6.29** | **85.1** | | **Tulu-v2-13b** ๐Ÿช | **13B** | **SFT** | **6.70** | **78.9** | | **Tulu-v2-dpo-13b** ๐Ÿช | **13B** | **DPO** | **7.00** | **89.5** | | **Tulu-v2-70b** ๐Ÿช | **70B** | **SFT** | **7.49** | **86.6** | | **Tulu-v2-dpo-70b** ๐Ÿช | **70B** | **DPO** | **7.89** | **95.1** | ## Input Format The model is trained to use the following format (note the newlines): ``` <|user|> Your message here! <|assistant|> ``` For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, this can affect generation quality quite a bit.** ## Intended uses & limitations The model was initially fine-tuned on a filtered and preprocessed of the [Tulu V2 mix dataset](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), which contains a diverse range of human created instructions and synthetic dialogues generated primarily by other LLMs. We then further aligned the model with a [Jax DPO trainer](https://github.com/hamishivi/EasyLM/blob/main/EasyLM/models/llama/llama_train_dpo.py) built on [EasyLM](https://github.com/young-geng/EasyLM) on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4. 
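To make the input format described above concrete, here is a small sketch of building the `<|user|>`/`<|assistant|>` prompt by hand, including the trailing newline the card warns about (generation settings are arbitrary, and 70B-scale hardware requirements are not addressed here):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/tulu-2-dpo-70b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Note the newline after <|assistant|>; the card says omitting it hurts generation quality.
prompt = "<|user|>\nWrite a haiku about language models.\n<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```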
<!-- You can find the datasets used for training Tulu V2 [here]() Here's how you can run the model using the `pipeline()` function from ๐Ÿค— Transformers: ```python # Install transformers from source - only needed for versions <= v4.34 # pip install git+https://github.com/huggingface/transformers.git # pip install accelerate import torch from transformers import pipeline pipe = pipeline("text-generation", model="HuggingFaceH4/tulu-2-dpo-70b", torch_dtype=torch.bfloat16, device_map="auto") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ { "role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate", }, {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) # <|system|> # You are a friendly chatbot who always responds in the style of a pirate.</s> # <|user|> # How many helicopters can a human eat in one sitting?</s> # <|assistant|> # Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food! ```--> ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> The Tulu models have not been aligned to generate safe completions within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base Llama 2 models, however it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this. ### Training hyperparameters The following hyperparameters were used during DPO training: - learning_rate: 5e-07 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ## Citation If you find Tulu 2 is useful in your work, please cite it with: ``` @misc{ivison2023camels, title={Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2}, author={Hamish Ivison and Yizhong Wang and Valentina Pyatkin and Nathan Lambert and Matthew Peters and Pradeep Dasigi and Joel Jang and David Wadden and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi}, year={2023}, eprint={2311.10702}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` *Model card adapted from [Zephyr Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/main/README.md)*
AdaptLLM/medicine-chat
AdaptLLM
"2024-06-25T03:01:27Z"
2,855
39
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "biology", "medical", "en", "dataset:EleutherAI/pile", "dataset:Open-Orca/OpenOrca", "dataset:GAIR/lima", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "arxiv:2309.09530", "license:llama2", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-09T01:49:56Z"
--- language: - en license: llama2 tags: - biology - medical datasets: - EleutherAI/pile - Open-Orca/OpenOrca - GAIR/lima - WizardLM/WizardLM_evol_instruct_V2_196k metrics: - accuracy pipeline_tag: text-generation model-index: - name: medicine-chat results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 53.75 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AdaptLLM/medicine-chat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 76.11 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AdaptLLM/medicine-chat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 49.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AdaptLLM/medicine-chat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 43.46 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AdaptLLM/medicine-chat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 75.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AdaptLLM/medicine-chat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 18.95 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AdaptLLM/medicine-chat name: Open LLM Leaderboard --- # Domain Adaptation of Large Language Models This repo contains the domain-specific chat model developed from **LLaMA-2-Chat-7B**, using the method in our **ICLR 2024** paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530). We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**. ### ๐Ÿค— [2024/6/21] We release the 2nd version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain), effective for both general pre-training from scratch and domain-adaptive continual pre-training!!! 
๐Ÿค— **************************** **Updates** **************************** * 2024/6/22: Released the [benchmarking code](https://github.com/microsoft/LMOps/tree/main/adaptllm). * 2024/6/21: ๐Ÿ‘๐Ÿป Released the 2nd version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain) ๐Ÿ‘๐Ÿป * 2024/1/16: ๐ŸŽ‰ Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024!!!๐ŸŽ‰ * 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B. * 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B. * 2023/9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B. ## Domain-Specific LLaMA-1 ### LLaMA-1-7B In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available in Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM), the performances of our AdaptLLM compared to other domain-specific LLMs are: <p align='center'> <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700"> </p> ### LLaMA-1-13B Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B). ## Domain-Specific LLaMA-2-Chat Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension can perfectly fit the data format** by transforming the reading comprehension into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat) For example, to chat with the biomedicine-chat model: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("AdaptLLM/medicine-chat") tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/medicine-chat") # Put your input here: user_input = '''Question: Which of the following is an example of monosomy? Options: - 46,XX - 47,XXX - 69,XYY - 45,X Please provide your choice first and then provide explanations if possible.''' # Apply the prompt template and system prompt of LLaMA-2-Chat demo for chat models (NOTE: NO prompt template is required for base models!) our_system_prompt = "\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. 
If you don't know the answer to a question, please don't share false information.\n" # Please do NOT change this prompt = f"<s>[INST] <<SYS>>{our_system_prompt}<</SYS>>\n\n{user_input} [/INST]" # # NOTE: # # If you want to apply your own system prompt, please integrate it into the instruction part following our system prompt like this: # your_system_prompt = "Please, answer this question faithfully." # prompt = f"<s>[INST] <<SYS>>{our_system_prompt}<</SYS>>\n\n{your_system_prompt}\n{user_input} [/INST]" inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device) outputs = model.generate(input_ids=inputs, max_length=4096)[0] answer_start = int(inputs.shape[-1]) pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True) print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}') ``` ## Domain-Specific Tasks To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks). **Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit for the specific data format required for chat models. ## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AdaptLLM__medicine-chat) | Metric |Value| |---------------------------------|----:| |Avg. |52.99| |AI2 Reasoning Challenge (25-Shot)|53.75| |HellaSwag (10-Shot) |76.11| |MMLU (5-Shot) |49.98| |TruthfulQA (0-shot) |43.46| |Winogrande (5-shot) |75.69| |GSM8k (5-shot) |18.95| ## Citation If you find our work helpful, please cite us: ```bibtex @inproceedings{ cheng2024adapting, title={Adapting Large Language Models via Reading Comprehension}, author={Daixuan Cheng and Shaohan Huang and Furu Wei}, booktitle={The Twelfth International Conference on Learning Representations}, year={2024}, url={https://openreview.net/forum?id=y886UXPEZ0} } ```
google/ddpm-ema-celebahq-256
google
"2022-11-08T13:41:29Z"
2,854
5
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "arxiv:2006.11239", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
"2022-07-19T10:42:32Z"
--- license: apache-2.0 tags: - pytorch - diffusers - unconditional-image-generation --- # Denoising Diffusion Probabilistic Models (DDPM) **Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) **Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel **Abstract**: *We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.* ## Inference **DDPM** models can use *discrete noise schedulers* such as: - [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py) - [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py) - [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py) for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest. For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead. See the following code: ```python # !pip install diffusers from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline model_id = "google/ddpm-ema-celebahq-256" # load model and scheduler ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference # run pipeline in inference (sample random noise and denoise) image = ddpm().images[0] # save image image.save("ddpm_generated_image.png") ``` For more in-detail information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) ## Training If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) # <- TODO(PVP) add link ## Samples 1. ![sample_1](https://huggingface.co/google/ddpm-ema-celebahq-256/resolve/main/images/generated_image_0.png) 2. ![sample_2](https://huggingface.co/google/ddpm-ema-celebahq-256/resolve/main/images/generated_image_1.png) 3. ![sample_3](https://huggingface.co/google/ddpm-ema-celebahq-256/resolve/main/images/generated_image_2.png) 4. ![sample_4](https://huggingface.co/google/ddpm-ema-celebahq-256/resolve/main/images/generated_image_3.png)
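As a brief illustration of the quality/speed trade-off mentioned above, here is a sketch of running the same checkpoint through the DDIM pipeline with a reduced number of denoising steps (the step count is an arbitrary choice):

```python
from diffusers import DDIMPipeline

model_id = "google/ddpm-ema-celebahq-256"

# DDIM trades a little sample quality for much faster sampling than the 1000-step DDPM default.
ddim = DDIMPipeline.from_pretrained(model_id)
image = ddim(num_inference_steps=50).images[0]
image.save("ddim_generated_image.png")
```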