| Column | Type | Values |
|:-------|:-----|:-------|
| modelId | stringlengths | 5–122 |
| author | stringlengths | 2–42 |
| last_modified | unknown | |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | stringclasses | 245 values |
| tags | sequencelengths | 1–4.05k |
| pipeline_tag | stringclasses | 48 values |
| createdAt | unknown | |
| card | stringlengths | 1–901k |
meta-llama/CodeLlama-7b-Instruct-hf
meta-llama
"2024-03-14T18:40:59Z"
10,127
16
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-2", "conversational", "code", "arxiv:2308.12950", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-13T19:44:23Z"
--- extra_gated_heading: You need to share contact information with Meta to access this model extra_gated_prompt: >- ### LLAMA 2 COMMUNITY LICENSE AGREEMENT "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Llama 2" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/. "Llama Materials" means, collectively, Meta's proprietary Llama 2 and documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non- transferable and royalty-free limited license under Meta's intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved." iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms. 
If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta's ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. 
The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. USE POLICY ### Llama 2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy). #### Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. 
Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Llama 2 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [[email protected]](mailto:[email protected]) extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit language: - code pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 license: llama2 --- # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 7B instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom. 
| | Base Model | Python | Instruct | | --- | --- | --- | --- | | 7B | [meta-llama/CodeLlama-7b-hf](https://huggingface.co/meta-llama/CodeLlama-7b-hf) | [meta-llama/CodeLlama-7b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Python-hf) | [meta-llama/CodeLlama-7b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Instruct-hf) | | 13B | [meta-llama/CodeLlama-13b-hf](https://huggingface.co/meta-llama/CodeLlama-13b-hf) | [meta-llama/CodeLlama-13b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Python-hf) | [meta-llama/CodeLlama-13b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Instruct-hf) | | 34B | [meta-llama/CodeLlama-34b-hf](https://huggingface.co/meta-llama/CodeLlama-34b-hf) | [meta-llama/CodeLlama-34b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Python-hf) | [meta-llama/CodeLlama-34b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Instruct-hf) | | 70B | [meta-llama/CodeLlama-70b-hf](https://huggingface.co/meta-llama/CodeLlama-70b-hf) | [meta-llama/CodeLlama-70b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Python-hf) | [meta-llama/CodeLlama-70b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Instruct-hf) | ## Model Use To use this model, please make sure to install transformers: ```bash pip install transformers accelerate ``` Model capabilities: - [x] Code completion. - [x] Infilling. - [x] Instructions / chat. - [ ] Python specialist. ## Model Details *Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs). **Model Developers** Meta **Variations** Code Llama comes in four model sizes and three variants: * Code Llama: base models designed for general code synthesis and understanding * Code Llama - Python: designed specifically for Python * Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B, 34B and 70B parameters. **This repository contains the Instruct version of the 7B parameter model.** **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. **Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950). ## Intended Use **Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. 
The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## Hardware and Software **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program. ## Training Data All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). ## Evaluation Results See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. ## Ethical Considerations and Limitations Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
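The "Model Use" section of this card stops at installing the libraries. Below is a minimal generation sketch, assuming you have been granted access to the gated repository and have enough memory for fp16 weights; the `[INST] ... [/INST]` wrapping follows the instruction format described in the Code Llama paper (the tokenizer's chat template can apply it for you as well).

```python
# Minimal sketch: instruction-following code generation with transformers.
# Assumes access to the gated meta-llama repo; prompt and sampling settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/CodeLlama-7b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Wrap the request in the instruct format; tokenizer.apply_chat_template does the same.
prompt = "[INST] Write a Python function that checks whether a string is a palindrome. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.2, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```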
timm/maxvit_rmlp_pico_rw_256.sw_in1k
timm
"2023-05-11T00:19:06Z"
10,125
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2204.01697", "arxiv:2111.09883", "license:apache-2.0", "region:us" ]
image-classification
"2023-01-20T21:33:59Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for maxvit_rmlp_pico_rw_256.sw_in1k A timm specific MaxViT image classification model w/ an MLP Log-CPB (continuous log-coordinate relative position bias, motivated by Swin-V2). Trained in `timm` on ImageNet-1k by Ross Wightman. ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program. ### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py) MaxxViT covers a number of related model architectures that share a common structure, including: - CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages. - MaxViT - Uniform blocks across all stages, each containing an MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid). - CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT-V2 - A MaxxViT variation that removes the window block attention, leaving only ConvNeXt blocks and grid attention w/ more width to compensate. Aside from the major variants listed above, there are more subtle changes from model to model. Any model with the string `rw` in its name is a `timm` specific config w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations. All models with the string `tf` in their names exactly match TensorFlow based models by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released. 
## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 7.5 - GMACs: 1.8 - Activations (M): 24.9 - Image size: 256 x 256 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883 - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('maxvit_rmlp_pico_rw_256.sw_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxvit_rmlp_pico_rw_256.sw_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 128, 128]) # torch.Size([1, 32, 64, 64]) # torch.Size([1, 64, 32, 32]) # torch.Size([1, 128, 16, 16]) # torch.Size([1, 256, 8, 8]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxvit_rmlp_pico_rw_256.sw_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 256, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison ### By Top-1 |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| 
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| 
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| 
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| 
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tu2022maxvit, title={MaxViT: Multi-Axis Vision Transformer}, author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao}, journal={ECCV}, year={2022}, } ``` ```bibtex @article{dai2021coatnet, title={CoAtNet: Marrying Convolution and Attention for All Data Sizes}, author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing}, journal={arXiv preprint arXiv:2106.04803}, year={2021} } ```
mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF
mradermacher
"2024-06-20T05:09:50Z"
10,122
0
transformers
[ "transformers", "gguf", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "Sao10K/L3-8B-Stheno-v3.2", "Sao10K/L3-8B-Stheno-v3.1", "not-for-all-audiences", "en", "base_model:jsfs11/L3-8B-Stheno-2x8B-MoE", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-20T04:17:12Z"
--- base_model: jsfs11/L3-8B-Stheno-2x8B-MoE language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - moe - frankenmoe - merge - mergekit - lazymergekit - Sao10K/L3-8B-Stheno-v3.2 - Sao10K/L3-8B-Stheno-v3.1 - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/jsfs11/L3-8B-Stheno-2x8B-MoE <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.Q2_K.gguf) | Q2_K | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.IQ3_XS.gguf) | IQ3_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.Q3_K_S.gguf) | Q3_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.IQ3_S.gguf) | IQ3_S | 6.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.IQ3_M.gguf) | IQ3_M | 6.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.Q3_K_M.gguf) | Q3_K_M | 6.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.Q3_K_L.gguf) | Q3_K_L | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.IQ4_XS.gguf) | IQ4_XS | 7.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.Q4_K_S.gguf) | Q4_K_S | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.Q4_K_M.gguf) | Q4_K_M | 8.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.Q5_K_S.gguf) | Q5_K_S | 9.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.Q5_K_M.gguf) | Q5_K_M | 9.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.Q6_K.gguf) | Q6_K | 11.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF/resolve/main/L3-8B-Stheno-2x8B-MoE.Q8_0.gguf) | Q8_0 | 14.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
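The Usage section above defers to TheBloke's READMEs for general GGUF handling. A minimal Python sketch using `huggingface_hub` and `llama-cpp-python` is shown below; the quant filename comes from the table above, while the context size, GPU offload setting, and prompt are illustrative assumptions.

```python
# Minimal sketch: download one quant from this repo and run it with llama-cpp-python.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/L3-8B-Stheno-2x8B-MoE-GGUF",
    filename="L3-8B-Stheno-2x8B-MoE.Q4_K_M.gguf",  # one of the files listed in the table above
)

llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)  # set n_gpu_layers=0 for CPU-only
out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```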
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
yiyanghkust/finbert-pretrain
yiyanghkust
"2022-10-17T00:38:42Z"
10,121
31
transformers
[ "transformers", "pytorch", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
`FinBERT` is a BERT model pre-trained on financial communication text. The purpose is to enhance financial NLP research and practice. It is trained on the following three financial communication corpora, with a total size of 4.9B tokens: - Corporate Reports 10-K & 10-Q: 2.5B tokens - Earnings Call Transcripts: 1.3B tokens - Analyst Reports: 1.1B tokens If you use the model in your academic work, please cite the following papers: Huang, Allen H., Hui Wang, and Yi Yang. "FinBERT: A Large Language Model for Extracting Information from Financial Text." *Contemporary Accounting Research* (2022). Yang, Yi, Mark Christopher Siy Uy, and Allen Huang. "Finbert: A pretrained language model for financial communications." *arXiv preprint arXiv:2006.08097* (2020). `FinBERT` can be further fine-tuned on downstream tasks. Specifically, we have fine-tuned `FinBERT` for financial sentiment analysis, ESG classification, forward-looking statement classification, and other tasks. Visit [FinBERT.AI](https://finbert.ai/) for more details on these task-specific models and recent developments of FinBERT.
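Since this checkpoint is the pre-trained (fill-mask) model rather than one of the fine-tuned variants, a minimal sketch of masked-token prediction with `transformers` is shown below; the example sentence is only illustrative, and the repo is assumed to ship a compatible tokenizer.

```python
# Minimal sketch: masked-token prediction with the pre-trained FinBERT checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="yiyanghkust/finbert-pretrain")

# [MASK] is the standard BERT mask token.
for prediction in fill_mask("Net income for the quarter [MASK] analyst expectations."):
    print(prediction["token_str"], round(prediction["score"], 3))
```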
mys/ggml_bakllava-1
mys
"2023-10-19T14:24:27Z"
10,119
71
null
[ "gguf", "llava", "bakllava", "lmm", "ggml", "llama.cpp", "region:us" ]
null
"2023-10-19T14:15:55Z"
--- tags: - llava - bakllava - lmm - ggml - llama.cpp --- # ggml_bakllava-1 This repo contains GGUF files for running inference on [BakLLaVA-1](https://huggingface.co/SkunkworksAI/BakLLaVA-1) with [llama.cpp](https://github.com/ggerganov/llama.cpp) end-to-end without any extra dependencies. **Note**: The `mmproj-model-f16.gguf` file structure is experimental and may change. Always use the latest code in llama.cpp.
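Besides the dependency-free llama.cpp route described in the card, the same files can be loaded from Python through `llama-cpp-python`'s LLaVA chat handler; a minimal sketch follows. The model GGUF filename and image URL are placeholders (pick one of the quantized model files in this repo); only `mmproj-model-f16.gguf` is named in the card above, and handler details may differ across library versions.

```python
# Minimal sketch: multimodal inference over these GGUF files via llama-cpp-python.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(
    model_path="ggml-model-q4_k.gguf",  # placeholder: one of the model GGUF files from this repo
    chat_handler=chat_handler,
    n_ctx=2048,       # leave room for the image embedding tokens
    logits_all=True,  # required by the LLaVA handler
)

response = llm.create_chat_completion(messages=[
    {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        {"type": "text", "text": "Describe this image."},
    ]},
])
print(response["choices"][0]["message"]["content"])
```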
GAIR/Abel-7B-002
GAIR
"2023-12-12T13:31:56Z"
10,115
12
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-11T14:40:39Z"
We released **Abel-7B-002**, a stronger (35% improvement on GSM8K, 126% improvement on MATH) and more generalized model that achieves the best performance among all 7B models (80.44 on GSM8K, 29.46 on MATH). | Model | GSM8K | MATH |MathQA | SVAMP |SCQ5K-EN | ARC-E|ARC-C|HellaSwag|MMLU | |-----------|------------|----------|--------------|-----------|----------------|---------|----------|---------------|----------| |Abel-7B-002 | **80.44** | **29.46** | **69.78** |77.67 |**55.95** |77.67 |**55.05** |77.72 |61.19 | |Abel-7B-001 |59.74 |13 |1.21 |57.67 |9.3 |53.32 |38.97 |63.51|40.59 | |MetaMath-Mistral-7B|77.7 |28.2 |33.94 |**79.33** |37.6| **78.48** |51.93 |76.44| 61.93| |Qwen-7b|47.84 |9.34 |27.44 |53 |40.05 |74.97 |53.05 |**86.85**|57.98 | |Mistral-7b|37.83 |9.06 |25.73 |63 |39.6 |76.83 |53.22| 76.31|**64.05** | |Yi-6b| 32.6 |5.78 |26.98 |55.67 |35.5 |73.66 |49.53 |68.97|64.02 | |LLaMA2-7b|12.96 |2.78 |11.52 |44 |28.24 |71.12 |46.61 |71.32|46.7 |
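The card does not include loading code; a minimal sketch with `transformers` follows. The question/answer prompt shown is an illustrative assumption, not necessarily the exact template used for the reported scores (see the project page for that).

```python
# Minimal sketch: generating a solution with Abel-7B-002 via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GAIR/Abel-7B-002"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative math-word-problem prompt; adjust to the format recommended by the authors.
prompt = (
    "Question:\nNatalia sold clips to 48 of her friends in April, and then she sold half as many "
    "clips in May. How many clips did Natalia sell altogether in April and May?\nAnswer:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```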
mradermacher/Yi-6B-200K-GGUF
mradermacher
"2024-06-26T23:40:32Z"
10,111
0
transformers
[ "transformers", "gguf", "en", "base_model:01-ai/Yi-6B-200K", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-26T16:53:44Z"
--- base_model: 01-ai/Yi-6B-200K language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/01-ai/Yi-6B-200K <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Yi-6B-200K-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.Q2_K.gguf) | Q2_K | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.IQ3_XS.gguf) | IQ3_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.Q3_K_S.gguf) | Q3_K_S | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.IQ3_S.gguf) | IQ3_S | 2.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.IQ3_M.gguf) | IQ3_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.Q3_K_M.gguf) | Q3_K_M | 3.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.Q3_K_L.gguf) | Q3_K_L | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.IQ4_XS.gguf) | IQ4_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.Q4_K_S.gguf) | Q4_K_S | 3.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.Q4_K_M.gguf) | Q4_K_M | 3.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.Q5_K_S.gguf) | Q5_K_S | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.Q5_K_M.gguf) | Q5_K_M | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.Q6_K.gguf) | Q6_K | 5.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.Q8_0.gguf) | Q8_0 | 6.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Yi-6B-200K-GGUF/resolve/main/Yi-6B-200K.f16.gguf) | f16 | 12.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
digiplay/aurorafantasy_v1
digiplay
"2024-04-06T02:08:17Z"
10,107
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-04-05T16:06:03Z"
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/116387/aurorafantasy Original author's demo image: ![00039-2286537344 (1).png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/j-2gdVVfdzHAj6STl7kNj.png) Sample image generated by Hugging Face's API, using the prompt: (masterpiece,best quality:1.2), 1girl platinum blonde, Cloud Hair, winter,close up,hose, (cinematic lightings), :D ![525e2da5-043d-4cc3-887a-1329689493a7.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/5c5qkS6-4ujjieY_aQu3A.jpeg)
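The card lists tags and sample images but no loading code; a minimal sketch with `diffusers` is below. It assumes the repo loads as a standard SD 1.x-style `StableDiffusionPipeline` (as the tags indicate) and reuses the sample prompt from the card; the step count and guidance scale are illustrative.

```python
# Minimal sketch: text-to-image with this checkpoint via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("digiplay/aurorafantasy_v1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # use torch.float32 and skip .to("cuda") for CPU-only runs

prompt = "(masterpiece,best quality:1.2), 1girl platinum blonde, Cloud Hair, winter, close up, hose, (cinematic lightings), :D"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("aurorafantasy_sample.png")
```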
mradermacher/Mistral-11B-v0.3-GGUF
mradermacher
"2024-06-29T17:51:18Z"
10,102
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "mistralai/Mistral-7B-v0.3", "en", "base_model:Corianas/Mistral-11B-v0.3", "endpoints_compatible", "region:us" ]
null
"2024-06-29T17:15:11Z"
--- base_model: Corianas/Mistral-11B-v0.3 language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - mistralai/Mistral-7B-v0.3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Corianas/Mistral-11B-v0.3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-11B-v0.3-GGUF/resolve/main/Mistral-11B-v0.3.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
PrunaAI/NousResearch-Llama-2-7b-chat-hf-GGUF-smashed
PrunaAI
"2024-07-02T02:34:50Z"
10,097
0
null
[ "gguf", "pruna-ai", "region:us" ]
null
"2024-07-02T01:58:58Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.com/invite/vb6SmA3hxu) ## This repo contains GGUF versions of the NousResearch/Llama-2-7b-chat-hf model. # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help. **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with GGUF. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***What is the model format?*** We use GGUF format. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). # Downloading and running the models You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/): | Quant type | Description | |------------|--------------------------------------------------------------------------------------------| | Q5_K_M | High quality, recommended. | | Q5_K_S | High quality, recommended. | | Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. | | Q4_K_S | Slightly lower quality with more space savings, recommended. | | IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. | | IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. | | Q3_K_L | Lower quality but usable, good for low RAM availability. | | Q3_K_M | Even lower quality. | | IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
| | IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | Q3_K_S | Low quality, not recommended. | | IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | Q2_K | Very low quality but surprisingly usable. | ## How to download GGUF files? **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev - **Option A** - Downloading in `text-generation-webui`: - **Step 1**: Under Download Model, you can enter the model repo: NousResearch-Llama-2-7b-chat-hf-GGUF-smashed and below it, a specific filename to download, such as: Llama-2-7b-chat-hf.IQ3_M.gguf. - **Step 2**: Then click Download. - **Option B** - Downloading on the command line (including multiple files at once): - **Step 1**: We recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` - **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download NousResearch-Llama-2-7b-chat-hf-GGUF-smashed Llama-2-7b-chat-hf.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> Alternatively, you can also download multiple files at once with a pattern: ```shell huggingface-cli download NousResearch-Llama-2-7b-chat-hf-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download NousResearch-Llama-2-7b-chat-hf-GGUF-smashed Llama-2-7b-chat-hf.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## How to run the model in GGUF format? - **Option A** - Introductory example with the `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Llama-2-7b-chat-hf.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

- **Option B** - Running in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).

- **Option C** - Running from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./Llama-2-7b-chat-hf.IQ3_M.gguf",  # Download the model file first
    n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
    "<s>[INST] {prompt} [/INST]",  # Prompt (replace {prompt} with your prompt)
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True  # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./Llama-2-7b-chat-hf.IQ3_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

- **Option D** - Running with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain (a minimal LangChain sketch is also included at the end of this card):

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model that provided the base model before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
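To complement Option D above, here is a minimal, hedged sketch of pointing LangChain's `LlamaCpp` wrapper at one of the downloaded GGUF files. The import path and parameter values shown are assumptions that may vary with your LangChain version; the linked guides are the authoritative reference:

```python
from langchain_community.llms import LlamaCpp

# Point this at the GGUF file you downloaded earlier (the filename is an example).
llm = LlamaCpp(
    model_path="./Llama-2-7b-chat-hf.IQ3_M.gguf",
    n_gpu_layers=35,  # Set to 0 if you have no GPU acceleration
    n_ctx=4096,       # Context window; raise it if your hardware allows
    temperature=0.7,
)

print(llm.invoke("<s>[INST] Write a short poem about llamas. [/INST]"))
```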
failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5
failspy
"2024-05-30T14:27:00Z"
10,083
25
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-28T17:51:28Z"
--- library_name: transformers license: llama3 --- # Llama-3-70B-Instruct-abliterated-v3.5 Model Card [My original Jupyter "cookbook" to replicate the methodology can be found here](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) [My personal library o' code used](https://github.com/FailSpy/abliterator) (WIP, looking to improve and generalize) This is [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on that which was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)' which I encourage you to read to understand more. ## V3.5? Second try. I felt that the V3 methodology of 70B wasn't well applied, and u/Nexesenex on reddit kinda confirmed my suspicions. So go blame them. :P This one has only a single layer modified(!) and that seems to have completely eliminated moralizing disclaimers. I hope you'll find this model better than 70B-V3! As well, this also fixes the tokenizer. ## Hang on, "abliteration"? Orthogonalization? Ablation? What is this? TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in anyway _guaranteed_ that it won't refuse you, understand your request, it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original 70B instruct model was, just with the strongest refusal directions orthogonalized out. **TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.** As far as "abliteration": it's just a fun play-on-words using the original "ablation" term used in the original paper to refer to removing features, which I made up particularly to differentiate the model from "uncensored" fine-tunes. Ablate + obliterated = Abliterated Anyways, orthogonalization/ablation are both aspects to refer to the same thing here, the technique in which the refusal feature was "ablated" from the model was via orthogonalization. ## A little more on the methodology, and why this is interesting To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt. Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights. > Why this over fine-tuning? Ablation is much more surgical in nature whilst also being effectively executed with a _lot_ less data than fine-tuning, which I think is its main advantage. As well, and its most valuable aspect is it keeps as much of the original model's knowledge and training intact, whilst removing its tendency to behave in one very specific undesireable manner. (In this case, refusing user requests.) Fine tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques. It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune or vice-versa. 
I haven't really gotten around to exploring this model stacked with fine-tuning, I encourage others to give it a shot if they've got the capacity. > Okay, fine, but why V3? There's no V2 70B? Well, I released a V2 a while back for 8B under Cognitive Computations. It ended up being not worth it to try V2 with 70B, I wanted to refine the model before wasting compute cycles on what might not even be a better model. I am however quite pleased about this latest methodology, it seems to have induced fewer hallucinations. So to show that it's a new fancy methodology from even that of the 8B V2, I decided to do a Microsoft and double up on my version jump because it's *such* an advancement (or so the excuse went, when in actuality it was because too many legacy but actively used Microsoft libraries checked for 'Windows 9' in the OS name to detect Windows 95/98 as one.) ## Quirkiness awareness notice This model may come with interesting quirks, with the methodology being so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what this orthogonalization has in the way of side effects. If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as-yet unexplored. Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord, I'm watching the Community tab, reach out! I'd love to see this methodology used in other ways, and so would gladly support whoever whenever I can.
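For readers who want a more concrete picture of what "orthogonalizing out" a refusal direction means, below is a minimal, hedged sketch. It is not the abliterator code linked above; it only illustrates the linear-algebra step, and it assumes the direction `r` has already been estimated from activations as described in the referenced post.

```python
import torch

def ablate_direction(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Remove the component along direction r from the outputs of weight W.

    W: (d_model, d_in) matrix that writes into the residual stream
       (e.g. an attention output projection or an MLP down-projection).
    r: (d_model,) estimated "refusal direction".
    """
    r = r / r.norm()
    # W' = W - r (r^T W): every column of W loses its projection onto r,
    # so this layer can no longer write along the refusal direction.
    return W - torch.outer(r, r @ W)

# Shape-only demo with random tensors:
W = torch.randn(4096, 11008)
r = torch.randn(4096)
W_ablated = ablate_direction(W, r)
print((r / r.norm() @ W_ablated).abs().max())  # ~0: the direction has been removed
```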
EleutherAI/llemma_7b
EleutherAI
"2024-02-19T08:18:53Z"
10,072
90
transformers
[ "transformers", "pytorch", "llama", "text-generation", "math", "reasoning", "en", "dataset:EleutherAI/proof-pile-2", "dataset:open-web-math/open-web-math", "arxiv:2310.10631", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-12T22:09:33Z"
--- license: llama2 datasets: - EleutherAI/proof-pile-2 - open-web-math/open-web-math language: - en tags: - math - reasoning --- <img src="llemma.png" width="400"> [ArXiv](http://arxiv.org/abs/2310.10631) | [Models](https://huggingface.co/EleutherAI/llemma_34b) | [Data](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | [Code](https://github.com/EleutherAI/math-lm) | [Blog](https://blog.eleuther.ai/llemma/) | [Sample Explorer](https://llemma-demo.github.io/) [Zhangir Azerbayev](https://zhangir-azerbayev.github.io/), [Hailey Schoelkopf](https://github.com/haileyschoelkopf), [Keiran Paster](https://keirp.com), [Marco Dos Santos](https://github.com/dsantosmarco), [Stephen McAleer](https://www.andrew.cmu.edu/user/smcaleer/), [Albert Q. Jiang](https://albertqjiang.github.io/), [Jia Deng](https://www.cs.princeton.edu/~jiadeng/), [Stella Biderman](https://www.stellabiderman.com/), [Sean Welleck](https://wellecks.com/) **Llemma 7B** is a language model for mathematics. It was initialized with [Code Llama 7B](https://github.com/facebookresearch/codellama) weights, and trained on the [Proof-Pile-2](https://huggingface.co/datasets/EleutherAI/proof-pile-2) for 200B tokens. This model also comes in a 34B parameter version: [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b). ## Evaluations Llemma models are particularly strong at chain-of-thought mathematical reasoning and using computational tools for mathematics, such as Python and formal theorem provers. ### Chain-of-thought Math On chain-of-thought mathematics tasks, Llemma models outperform Llama-2, Code Llama, and when controlled for model size, outperform Minerva. | Model | Size | GSM8k | [OCW](https://openreview.net/forum?id=IFXTZERXdM7) | MMLU-STEM | [SAT](https://huggingface.co/datasets/mcaleste/sat_multiple_choice_math_may_23) | MATH | |------------|------|--------|-------|-----------|-------|-------| | Llama 2 | 7B | 11.8% | 3.7% | 29.9% | 25% | 3.2% | | Code Llama | 7B | 10.5% | 4.4% | 25.1% | 9.4% | 4.5% | | LLEMMA | 7B | **36.4%** | **7.7%** | **37.7%** | **53.1%** | **18.0%** | | Minerva | 8B | 16.2% | **7.7%** | 35.6% | - | 14.1% | |------------|------|--------|-------|-----------|-------|-------| | Code Llama | 34B | 29.6% | 7.0% | 40.5% | 40.6% | 12.2% | | LLEMMA | 34B | **51.5%** | **11.8%** | **49.0%** | **71.9%** | **25.0%** | |------------|------|--------|-------|-----------|-------|-------| | Minerva | 62B | 52.4% | 12.0% | 53.9% | - | 27.6% | | Minerva | 540B | 58.8% | 17.6% | 63.9% | - | 33.6% | Further performance can be extracted by using majority voting: | Model | Size | GSM8k maj@100 | OCW maj@100 | MMLU-STEM maj@16 | SAT maj@16 | MATH maj@256 | |---------|------|-------------|-----------|-----------------|-----------|------------| | LLEMMA | 7B | 54.0% | 14.3% | 49.9% | 78.1% | **33.5** | | Minerva | 8B | 28.4% | 12.5% | 43.4% | - | 25.4% | |---------|------|-------------|-----------|-----------------|-----------|------------| | LLEMMA | 34B | 69.3% | 18.4% | 59.7% | 81.3% | **43.1%** | |---------|------|-------------|-----------|-----------------|-----------|------------| | Minerva | 62B | 68.5% | 23.5% | 63.5% | - | 43.4% | | Minerva | 540B | 78.5% | 30.8% | 75.0% | - | 50.3% | ### Tool Use and Theorem Proving In addition to chain-of-thought reasoning, Llemma has strong capabilities in computational mathematics tasks. For tool use and formal theorem proving evaluations, see [our paper](http://arxiv.org/abs/2310.10631). 
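The card above does not include a usage snippet; as a convenience, here is a minimal, hedged sketch of loading Llemma 7B with the standard 🤗 Transformers causal-LM API (the prompt and generation settings are illustrative, not taken from the paper):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "EleutherAI/llemma_7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # device_map requires `accelerate`
)

prompt = "Problem: If 3x + 2 = 17, what is x?\nSolution:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```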
### Citation ``` @misc{azerbayev2023llemma, title={Llemma: An Open Language Model For Mathematics}, author={Zhangir Azerbayev and Hailey Schoelkopf and Keiran Paster and Marco Dos Santos and Stephen McAleer and Albert Q. Jiang and Jia Deng and Stella Biderman and Sean Welleck}, year={2023}, eprint={2310.10631}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
MingZhong/unieval-dialog
MingZhong
"2022-10-14T01:09:17Z"
10,063
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "arxiv:2210.07197", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2022-10-11T01:08:51Z"
Pre-trained evaluator from the EMNLP 2022 paper *[Towards a Unified Multi-Dimensional Evaluator for Text Generation](https://arxiv.org/abs/2210.07197)*

## Introduction

**Multi-dimensional evaluation** is the dominant paradigm for human evaluation in Natural Language Generation (NLG), i.e., evaluating the generated text from multiple explainable dimensions, such as coherence and fluency.

However, automatic evaluation in NLG is still dominated by similarity-based metrics (e.g., ROUGE, BLEU), which are not sufficient to capture the differences between advanced generation models.

Therefore, we propose **UniEval** to bridge this gap so that a more comprehensive and fine-grained evaluation of NLG systems can be achieved.

## Pre-trained Evaluator

**unieval-dialog** is the pre-trained evaluator for the dialogue response generation task. It can evaluate the model output along five dimensions:

- *naturalness*
- *coherence*
- *engagingness*
- *groundedness*
- *understandability*

## Usage

Please refer to [our GitHub repository](https://github.com/maszhongming/UniEval).
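The evaluator itself is a T5-style text2text model, so the checkpoint can be loaded with the standard 🤗 Transformers classes; the dimension-specific input construction (evaluation framed as Boolean question answering) and score extraction are handled by the utilities in the repository linked above. A minimal, hedged loading sketch:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "MingZhong/unieval-dialog"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The helper scripts in the UniEval repo build the per-dimension inputs
# (e.g. for naturalness or coherence) and turn the model's yes/no outputs
# into scores; use those rather than hand-rolling the prompt format.
```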
openlm-research/open_llama_7b_v2
openlm-research
"2023-07-07T21:26:13Z"
10,060
112
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:tiiuae/falcon-refinedweb", "dataset:bigcode/starcoderdata", "dataset:togethercomputer/RedPajama-Data-1T", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-06T08:23:04Z"
--- license: apache-2.0 datasets: - tiiuae/falcon-refinedweb - bigcode/starcoderdata - togethercomputer/RedPajama-Data-1T library_name: transformers --- # OpenLLaMA: An Open Reproduction of LLaMA **TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as the drop in replacement of LLaMA in existing implementations. In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model is better than the old v1 model trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details. ## Weights Release, License and Usage We release the weights in two formats: an EasyLM format to be use with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license. ### Loading the Weights with Hugging Face Transformers Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage. ```python import torch from transformers import LlamaTokenizer, LlamaForCausalLM ## v2 models model_path = 'openlm-research/open_llama_7b_v2' ## v1 models # model_path = 'openlm-research/open_llama_3b' # model_path = 'openlm-research/open_llama_7b' # model_path = 'openlm-research/open_llama_13b' tokenizer = LlamaTokenizer.from_pretrained(model_path) model = LlamaForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map='auto', ) prompt = 'Q: What is the largest animal?\nA:' input_ids = tokenizer(prompt, return_tensors="pt").input_ids generation_output = model.generate( input_ids=input_ids, max_new_tokens=32 ) print(tokenizer.decode(generation_output[0])) ``` For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama). ### Evaluating with LM-Eval-Harness The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. 
This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below: ```python tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained( pretrained if tokenizer is None else tokenizer, revision=revision + ("/" + subfolder if subfolder is not None else ""), use_fast=False ) ``` ### Loading the Weights with EasyLM For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights. ## Dataset and Training The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange part of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow the exactly same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA. We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also know as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model. ## Evaluation We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/). The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks. 
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B | | ---------------------- | -------- | -------- | --------- | -------------- | ------------ | ------------ | ------------- | | anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.34 | 0.33 | 0.33 | 0.33 | | anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 | | anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.39 | 0.35 | 0.38 | 0.40 | | arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.39 | 0.34 | 0.37 | 0.41 | | arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.41 | 0.37 | 0.38 | 0.44 | | arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.73 | 0.69 | 0.72 | 0.75 | | arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.70 | 0.65 | 0.68 | 0.70 | | boolq/acc | 0.66 | 0.75 | 0.71 | 0.72 | 0.68 | 0.71 | 0.75 | | hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.56 | 0.49 | 0.53 | 0.56 | | hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.75 | 0.67 | 0.72 | 0.76 | | openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.30 | 0.27 | 0.30 | 0.31 | | openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.41 | 0.40 | 0.40 | 0.43 | | piqa/acc | 0.75 | 0.78 | 0.79 | 0.79 | 0.75 | 0.76 | 0.77 | | piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.80 | 0.76 | 0.77 | 0.79 | | record/em | 0.88 | 0.91 | 0.92 | 0.89 | 0.88 | 0.89 | 0.91 | | record/f1 | 0.89 | 0.91 | 0.92 | 0.89 | 0.89 | 0.90 | 0.91 | | rte/acc | 0.54 | 0.56 | 0.69 | 0.57 | 0.58 | 0.60 | 0.64 | | truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.23 | 0.22 | 0.23 | 0.25 | | truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.38 | | wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 | | winogrande/acc | 0.64 | 0.68 | 0.70 | 0.66 | 0.62 | 0.67 | 0.70 | | Average | 0.52 | 0.55 | 0.57 | 0.56 | 0.53 | 0.55 | 0.57 | We removed the task CB and WSC from our benchmark, as our model performs suspiciously high on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set. ## Contact We would love to get feedback from the community. If you have any questions, please open an issue or contact us. OpenLLaMA is developed by: [Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research. *Equal Contribution ## Acknowledgment We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organizing compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimizing our training throughput. We’d also want to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback. The OpenLLaMA 13B v1 model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for the coordinating the logistics and providing engineering support. 
## Reference If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX: ``` @software{openlm2023openllama, author = {Geng, Xinyang and Liu, Hao}, title = {OpenLLaMA: An Open Reproduction of LLaMA}, month = May, year = 2023, url = {https://github.com/openlm-research/open_llama} } ``` ``` @software{together2023redpajama, author = {Together Computer}, title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset}, month = April, year = 2023, url = {https://github.com/togethercomputer/RedPajama-Data} } ``` ``` @article{touvron2023llama, title={Llama: Open and efficient foundation language models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } ```
radlab/polish-cross-encoder
radlab
"2023-12-31T13:52:13Z"
10,056
3
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "text-classification", "feature-extraction", "sentence-similarity", "transformers", "pl", "dataset:radlab/polish-sts-dataset", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2023-12-03T02:42:57Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers language: - pl license: cc-by-sa-4.0 library_name: sentence-transformers datasets: - radlab/polish-sts-dataset models: - sdadas/polish-roberta-large-v2 --- ## Sample model usage Below is an example of using the model: ```python from sentence_transformers.cross_encoder import CrossEncoder model_path = "radlab/polish-cross-encoder" model = CrossEncoder(model_path) questions = [ "Jaką mamy dziś pogodę? bo Andrzej nic nie mówił.", "Gdzie jedzie Andrzej? Bo wczoraj był w Warszawie.", "Czy oskarżony się zgadza z przedstawionym wyrokiem?", ] answers = [ "Pan Andrzej siedzi w pociągu i jedzie do Wiednia. Ogląda na telefonie zabawne filmiki.", "Poada deszcz i jest wilgotno, jednak wczoraj było słonecznie.", "Wyrok jest prawomocny i nie podlega dalszym rozważaniom.", ] for question in questions: context_with_question = [(s, question) for s in answers] results = sorted( { idx: r for idx, r in enumerate(model.predict(context_with_question)) }.items(), key=lambda x: x[1], reverse=True, ) print(f"QUESTION: {question}") print("ANSWERS (sorted):") for idx, score in results: print(f"\t[{score}]\t{answers[idx]}") print("") ``` and output to the standard output: ``` QUESTION: Jaką mamy dziś pogodę? bo Andrzej nic nie mówił. ANSWERS (sorted): [0.016749681904911995] Poada deszcz i jest wilgotno, jednak wczoraj było słonecznie. [0.01602918468415737] Pan Andrzej siedzi w pociągu i jedzie do Wiednia. Ogląda na telefonie zabawne filmiki. [0.016013670712709427] Wyrok jest prawomocny i nie podlega dalszym rozważaniom. QUESTION: Gdzie jedzie Andrzej? Bo wczoraj był w Warszawie. ANSWERS (sorted): [0.5997582674026489] Pan Andrzej siedzi w pociągu i jedzie do Wiednia. Ogląda na telefonie zabawne filmiki. [0.4528200924396515] Wyrok jest prawomocny i nie podlega dalszym rozważaniom. [0.17350871860980988] Poada deszcz i jest wilgotno, jednak wczoraj było słonecznie. QUESTION: Czy oskarżony się zgadza z przedstawionym wyrokiem? ANSWERS (sorted): [0.8431766629219055] Wyrok jest prawomocny i nie podlega dalszym rozważaniom. [0.6823258996009827] Poada deszcz i jest wilgotno, jednak wczoraj było słonecznie. [0.558414101600647] Pan Andrzej siedzi w pociągu i jedzie do Wiednia. Ogląda na telefonie zabawne filmiki. ```
neopolita/instruction-synthesizer-gguf
neopolita
"2024-06-30T00:43:02Z"
10,050
0
null
[ "gguf", "region:us" ]
null
"2024-06-30T00:08:43Z"
--- {} --- # GGUF quants for [**instruction-pretrain/instruction-synthesizer**](https://huggingface.co/instruction-pretrain/instruction-synthesizer) using [llama.cpp](https://github.com/ggerganov/llama.cpp) **Terms of Use**: Please check the [**original model**](https://huggingface.co/instruction-pretrain/instruction-synthesizer) <picture> <img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png"> </picture> ## Quants * `q2_k`: Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors. * `q3_k_s`: Uses Q3_K for all tensors * `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K * `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K * `q4_0`: Original quant method, 4-bit. * `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. * `q4_k_s`: Uses Q4_K for all tensors * `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K * `q5_0`: Higher accuracy, higher resource usage and slower inference. * `q5_1`: Even higher accuracy, resource usage and slower inference. * `q5_k_s`: Uses Q5_K for all tensors * `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K * `q6_k`: Uses Q8_K for all tensors * `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
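As a hedged end-to-end example, one of these quants can be fetched and run with `llama-cpp-python`; the filename below is a placeholder, so check this repo's Files & versions tab for the actual GGUF names:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder filename - replace with a real file from this repo's file list.
path = hf_hub_download(
    repo_id="neopolita/instruction-synthesizer-gguf",
    filename="instruction-synthesizer.q4_k_m.gguf",
)

llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=0)  # raise n_gpu_layers if you have a GPU
print(llm("Your prompt here", max_tokens=128)["choices"][0]["text"])
```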
climatebert/renewable
climatebert
"2023-07-18T10:11:55Z"
10,045
4
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-07-10T11:05:29Z"
---
license: apache-2.0
---

# Model Card for renewable

## Model Description

This is the fine-tuned ClimateBERT language model with a classification head for detecting sentences that are related to renewable energy. Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model as a starting point, the *renewable* model is fine-tuned on our human-annotated dataset.

## Citation Information

```bibtex
@article{deng2023war,
  title={War and Policy: Investor Expectations on the Net-Zero Transition},
  author={Deng, Ming and Leippold, Markus and Wagner, Alexander F and Wang, Qian},
  journal={Swiss Finance Institute Research Paper},
  number={22-29},
  year={2023}
}
```

## How to Get Started With the Model

You can use the model with a pipeline for text classification:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
from tqdm.auto import tqdm

dataset_name = "climatebert/climate_detection"
tokenizer_name = "climatebert/distilroberta-base-climate-detector"
model_name = "climatebert/renewable"

# If you want to use your own data, simply load them as 🤗 Datasets dataset, see https://huggingface.co/docs/datasets/loading
dataset = datasets.load_dataset(dataset_name, split="test")

model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, max_len=512)

pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)

# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
for out in tqdm(pipe(KeyDataset(dataset, "text"), padding=True, truncation=True)):
    print(out)
```
neopolita/finance-llama3-8b-gguf
neopolita
"2024-06-29T22:00:50Z"
10,039
1
null
[ "gguf", "region:us" ]
null
"2024-06-29T20:59:21Z"
--- {} --- # GGUF quants for [**instruction-pretrain/finance-Llama3-8B**](https://huggingface.co/instruction-pretrain/finance-Llama3-8B) using [llama.cpp](https://github.com/ggerganov/llama.cpp) **Terms of Use**: Please check the [**original model**](https://huggingface.co/instruction-pretrain/finance-Llama3-8B) <picture> <img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png"> </picture> ## Quants * `q2_k`: Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors. * `q3_k_s`: Uses Q3_K for all tensors * `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K * `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K * `q4_0`: Original quant method, 4-bit. * `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. * `q4_k_s`: Uses Q4_K for all tensors * `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K * `q5_0`: Higher accuracy, higher resource usage and slower inference. * `q5_1`: Even higher accuracy, resource usage and slower inference. * `q5_k_s`: Uses Q5_K for all tensors * `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K * `q6_k`: Uses Q8_K for all tensors * `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
mlabonne/NeuralDaredevil-8B-abliterated
mlabonne
"2024-07-02T11:36:55Z"
10,030
99
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dpo", "conversational", "dataset:mlabonne/orpo-dpo-mix-40k", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-27T19:33:23Z"
--- license: other tags: - dpo datasets: - mlabonne/orpo-dpo-mix-40k model-index: - name: Daredevil-8B-abliterated-dpomix results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.28 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.05 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 69.1 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 60.0 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 71.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B-abliterated-dpomix name: Open LLM Leaderboard --- # NeuralDaredevil-8B-abliterated ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/gFEhcIDSKa3AWpkNfH91q.jpeg) This is a DPO fine-tune of [mlabonne/Daredevil-8-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated), trained on one epoch of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k). The DPO fine-tuning successfully recovers the performance loss due to the abliteration process, making it an excellent uncensored model. ## 🔎 Applications NeuralDaredevil-8B-abliterated performs better than the Instruct model on my tests. You can use it for any application that doesn't require alignment, like role-playing. Tested on LM Studio using the "Llama 3" preset. ## ⚡ Quantization Thanks to QuantFactory, ZeroWw, Zoyd, and solidrust for providint these quants. 
* **GGUF**: https://huggingface.co/QuantFactory/NeuralDaredevil-8B-abliterated-GGUF * **GGUF (FP16)**: https://huggingface.co/ZeroWw/NeuralDaredevil-8B-abliterated-GGUF * **EXL2**: https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_0bpw_exl2 * **AWQ**: https://huggingface.co/solidrust/NeuralDaredevil-8B-abliterated-AWQ * **ollama**: * **8-bit**: https://ollama.com/lstep/neuraldaredevil-8b-abliterated * **5-bit**: https://ollama.com/closex/neuraldaredevil-8b-abliterated ## 🏆 Evaluation ### Open LLM Leaderboard NeuralDaredevil-8B is the best-performing uncensored 8B model on the Open LLM Leaderboard (MMLU score). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/HQtd51mJfVRhJ0lJFLceM.png) ### Nous Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard). | Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [**mlabonne/NeuralDaredevil-8B-abliterated**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/ae0bf16936cef900b72964b33c99edbc) | **55.87** | **43.73** | **73.6** | **59.36** | **46.8** | | [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 | | [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 | | [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 | | [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [📄](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 | | [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 | | [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 | ## 🌳 Model family tree ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/ekwRGgnjzEOyprT8sEBFt.png) ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/Daredevil-8B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
mradermacher/Llama-3-Physician-8B-Instruct-GGUF
mradermacher
"2024-06-20T18:28:27Z"
10,021
0
transformers
[ "transformers", "gguf", "en", "base_model:YiDuo1999/Llama-3-Physician-8B-Instruct", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-20T17:59:24Z"
--- base_model: YiDuo1999/Llama-3-Physician-8B-Instruct language: - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/YiDuo1999/Llama-3-Physician-8B-Instruct <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Physician-8B-Instruct-GGUF/resolve/main/Llama-3-Physician-8B-Instruct.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is 
better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
lmsys/longchat-13b-16k
lmsys
"2023-07-29T02:59:51Z"
10,013
131
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-28T05:33:42Z"
---
inference: false
---

# longchat-13b-16k Model Card

## Usage

Please use `load_model` from the FastChat or LongChat repo to load the model (or the chatting API from FastChat); a minimal Python sketch is included at the end of this card. A monkey patch is needed to use the model.

Usage reference:

(LongChat) `python3 eval.py --model-name-or-path lmsys/longchat-13b-16k --task topics`

(FastChat) `python3 -m fastchat.serve.cli --model-path lmsys/longchat-13b-16k`

Under the hood, the monkey patch is added in:
https://github.com/lm-sys/FastChat/blob/da0641e567cf93756b0978ab5a6b092e96f06240/fastchat/model/model_adapter.py#L429

## Model details

**Model type:** longchat-13b-16k is an open-source chatbot trained by fine-tuning llama-13b on user-shared conversations collected from ShareGPT, using the condensing rotary embedding technique reported in the [blog](https://lmsys.org/blog/2023-06-29-longchat).

**Model date:** longchat-13b-16k was trained in June 2023.

**Organizations developing the model:** The LongChat developers: Dacheng Li*, Rulin Shao*, Anze Xie, Ying Sheng, Lianmin Zheng, Ion Stoica, Xuezhe Ma, and Hao Zhang

**Paper or resources for more information:** https://github.com/DachengLi1/LongChat

**Where to send questions or comments about the model:** https://github.com/DachengLi1/LongChat

## Intended use

**Primary intended uses:** The primary use of longchat-13b-16k is for research purposes.

**Primary intended users:** The primary intended users of the model are researchers in natural language processing, machine learning, and artificial intelligence.

## Training dataset

18K conversations collected from ShareGPT.com.

## Evaluation dataset

A preliminary evaluation of the model quality is conducted with our released [LongEval](https://github.com/DachengLi1/LongChat).
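As a hedged illustration of the `load_model` route mentioned in the Usage section, the sketch below follows FastChat's usual Python API; exact signatures can differ between FastChat versions, so treat the argument names as assumptions and check the repo:

```python
from fastchat.model import load_model, get_conversation_template

# Argument names are indicative only - verify against your installed FastChat version.
model, tokenizer = load_model(
    "lmsys/longchat-13b-16k",
    device="cuda",
    num_gpus=1,
    load_8bit=False,
)

conv = get_conversation_template("lmsys/longchat-13b-16k")
conv.append_message(conv.roles[0], "Summarize the plot of Hamlet in two sentences.")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, do_sample=True, temperature=0.7, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```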
unsloth/Qwen2-7B-Instruct-bnb-4bit
unsloth
"2024-06-06T17:18:19Z"
10,008
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-06T17:05:27Z"
--- language: - en license: apache-2.0 library_name: transformers tags: - unsloth - transformers - qwen2 --- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth! We have a Google Colab Tesla T4 notebook for Qwen2 7b here: https://colab.research.google.com/drive/1mvwsIQWDs2EdZxZQF9pRGnnOvE86MVvR?usp=sharing And a Colab notebook for [Qwen2 0.5b](https://colab.research.google.com/drive/1-7tjDdMAyeCueyLAwv6vYeBMHpoePocN?usp=sharing) and another for [Qwen2 1.5b](https://colab.research.google.com/drive/1W0j3rP8WpgxRdUgkb5l6E00EEVyjEZGk?usp=sharing) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2.4x faster | 58% less | | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less | | **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
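A minimal, hedged sketch of loading this 4-bit checkpoint with Unsloth for LoRA finetuning (the hyperparameters below are illustrative; the Colab notebooks above are the canonical reference):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2-7B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```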
QuantFactory/Llama-3-SauerkrautLM-8b-Instruct-GGUF
QuantFactory
"2024-06-19T13:37:18Z"
10,007
1
null
[ "gguf", "two stage dpo", "dpo", "text-generation", "de", "en", "base_model:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct", "license:other", "region:us" ]
text-generation
"2024-06-19T11:10:00Z"
--- language: - de - en tags: - two stage dpo - dpo license: other license_name: llama3 license_link: LICENSE base_model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct extra_gated_prompt: >- ### META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Meta Llama 3" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement. v. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Meta Llama 3 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy) #### Prohibited Uses We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following: 1. 
Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Meta Llama 3 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) * Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback * Reporting bugs and security concerns: facebook.com/whitehat/info * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit pipeline_tag: text-generation --- # QuantFactory/Llama-3-SauerkrautLM-8b-Instruct-GGUF This is quantized version of [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct) created using llama.cpp # Model Description ![SauerkrautLM](https://vago-solutions.ai/wp-content/uploads/2024/04/Llama3-Pic.png "Llama-3-SauerkrautLM-8b-Instruct") ## VAGO solutions Llama-3-SauerkrautLM-8b-Instruct Introducing **Llama-3-SauerkrautLM-8b-Instruct** – our Sauerkraut version of the powerful [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)! The model **Llama-3-SauerkrautLM-8b-Instruct** is a **joint effort** between **VAGO Solutions** and **Hyperspace.ai.** - Aligned with **DPO** # Table of Contents 1. [Overview of all Llama-3-SauerkrautLM-8b-Instruct](#all-Llama-3-SauerkrautLM-8b-Instruct) 2. [Model Details](#model-details) - [Prompt template](#prompt-template) - [Training procedure](#proceed-of-the-training) 3. [Evaluation](#evaluation) 5. [Disclaimer](#disclaimer) 6. 
[Contact](#contact) 7. [Collaborations](#collaborations) 8. [Acknowledgement](#acknowledgement) ## All SauerkrautLM-llama-3-8B-Instruct | Model | HF | EXL2 | GGUF | AWQ | |-------|-------|-------|-------|-------| | Llama-3-SauerkrautLM-8b-Instruct | [Link](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct) | [Link](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2) | [Link](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-GGUF) | coming soon | ## Model Details **SauerkrautLM-llama-3-8B-Instruct** - **Model Type:** Llama-3-SauerkrautLM-8b-Instruct is a finetuned Model based on [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - **Language(s):** German, English - **License:** [meta-llama](https://llama.meta.com/llama3/license) - **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/) ### Training procedure: - We trained this model in a two staged DPO Fine-Tuning for 1 epoch with 70k data and another epoch with 20k data. - LaserRMT version coming soon **We improved the model's capabilities noticably by feeding it with curated German data.** ### Prompt Template: **English:** ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|> Input<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` **German:** ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> Du bist ein freundlicher und hilfreicher deutscher KI-Assistent.<|eot_id|><|start_header_id|>user<|end_header_id|> Input<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ### Example output of german language: ``` Prompt: Erkläre mir die Vorteile für das Fine-Tuning von Meta's LLama Modellen. Antwort: Sehr gern! Fine-tuning von Meta's LLaMA-Modellen ist eine sehr effektive Möglichkeit, um Ihre Anwendung oder Ihr Projekt mit einer hochentwickelten Sprachverarbeitung auszustatten. Die Vorteile des Fine-Tunings von LLaMA-Modellen sind: Adaptation an spezielle Aufgaben: Durch das Fine-Tuning können Sie die Modelle auf Ihre speziellen Bedürfnisse und Anforderungen anpassen. Dies ermöglicht es Ihnen, die Leistung der Modelle in Ihren Anwendungen zu verbessern. Verbesserte Genauigkeit: Die fine-tuned Modelle lernen, sich an die spezifischen Merkmale Ihres Datenbestands anzupassen, was zu einer besseren Genauigkeit bei der Textklassifizierung, -generierung, -summarisierung usw. führt. Anpassung an Branche oder Domain: Wenn Sie beispielsweise ein Medizinprojekt haben, können Sie die Modelle auf medizinische Fachterminologie und -konzepte trainieren, um eine höhere Präzision bei der Analyse medizinischer Texte zu erzielen. Kosten-Effektivität: Im Vergleich zum Training neuer Modelle von Grund auf kann das Fine-Tuning viel kosteneffektiver sein, da Sie nur einen kleinen Teil des Trainingsdatums benötigen, um die Modelle an Ihre spezifischen Anforderungen anzupassen. Schnellere Entwicklung: Das Fine-Tuning ermöglicht es Entwicklern, schnell neue Funktionen und Features in ihre Anwendungen einzufügen, ohne dass sie lange warten müssen, bis neue Modelle trainiert werden. Bessere Interpretierbarkeit: Da die fine-tuned Modelle auf Ihre spezifischen Daten bestimmt wurden, können Sie leichter verstehen, warum bestimmte Entscheidungen getroffen werden, was wichtig ist, wenn es um Transparenz und Verantwortlichkeit geht. 
Insgesamt bietet das Fine-Tuning von LLaMA-Modellen eine flexible und effektive Möglichkeit, um Ihre Anwendungen und Projekte durch die Integration von fortschrittlichen Sprachmodellen zu verbessern. ``` ## Evaluation **Open LLM Leaderboard:** evaluated with lm-evaluation-harness 0.4.2 | Metric | Value | |-----------------------|---------------------------| | Avg. | **74.57** | | ARC (25-shot) | 74.66 | | HellaSwag (10-shot) | 89.60 | | MMLU (5-shot) | 66.55 | | TruthfulQA (0-shot) | 66.32 | | Winogrande (5-shot) | 80.98 | | GSM8K (5-shot) | 69.29 | **MT-Bench English** ``` ########## First turn ########## score model turn Llama-3-SauerkrautLM-8b-Instruct 1 8.15625 ########## Second turn ########## score model turn Llama-3-SauerkrautLM-8b-Instruct 2 7.65 ########## Average ########## score model Llama-3-SauerkrautLM-8b-Instruct 7.903125 * ``` * Due to the specific instruction training, the English MT-Bench score is slightly lower than that of the original Llama-3-8B-Instruct. **MT-Bench German** ``` ########## First turn ########## score model turn Llama-3-SauerkrautLM-8b-Instruct 1 7.675 ########## Second turn ########## score model turn Llama-3-SauerkrautLM-8b-Instruct 2 7.6375 ########## Average ########## score model Llama-3-SauerkrautLM-8b-Instruct 7.65625 ``` **German RAG LLM Evaluation** corrected results after the fix in https://github.com/huggingface/lighteval/pull/171 ``` | Task |Version|Metric|Value| |Stderr| |------------------------------------------------------|------:|------|----:|---|-----:| |all | |acc |0.910|± |0.0084| |community:german_rag_eval:_average:0 | |acc |0.910|± |0.0084| |community:german_rag_eval:choose_context_by_question:0| 0|acc |0.928|± |0.0082| |community:german_rag_eval:choose_question_by_context:0| 0|acc |0.824|± |0.0120| |community:german_rag_eval:context_question_match:0 | 0|acc |0.982|± |0.0042| |community:german_rag_eval:question_answer_match:0 | 0|acc |0.906|± |0.0092| ``` ## Disclaimer Despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out, and we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. ## Model Contact If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions. ## Model Collaborations We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt) or [Hyperspace.computer](https://hyperspace.computer/). ## Model Acknowledgement Many thanks to [Meta](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for providing such a valuable model to the open-source community. Many thanks also to [bartowski](https://huggingface.co/bartowski) for the super-fast quantization of our model into GGUF and EXL2 formats.
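The prompt template above can also be applied programmatically. Below is a minimal sketch, assuming the chat template bundled with the model's tokenizer matches the Llama-3 format shown above; the sampling parameters are illustrative, not a recommendation from the model card.

```python
# Minimal sketch: applying the prompt template above via the tokenizer's chat template.
# Assumes the chat template shipped with the tokenizer matches the Llama-3 format shown;
# sampling parameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "Du bist ein freundlicher und hilfreicher deutscher KI-Assistent."},
    {"role": "user", "content": "Erkläre mir kurz die Vorteile von Fine-Tuning."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The system message mirrors the German template above; swap in the English system prompt for English use.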
naver/splade-cocondenser-selfdistil
naver
"2022-05-11T08:02:55Z"
10,000
8
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "splade", "query-expansion", "document-expansion", "bag-of-words", "passage-retrieval", "knowledge-distillation", "en", "dataset:ms_marco", "arxiv:2205.04733", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-05-09T12:48:34Z"
--- license: cc-by-nc-sa-4.0 language: "en" tags: - splade - query-expansion - document-expansion - bag-of-words - passage-retrieval - knowledge-distillation datasets: - ms_marco --- ## SPLADE CoCondenser SelfDistil SPLADE model for passage retrieval. For additional details, please visit: * paper: https://arxiv.org/abs/2205.04733 * code: https://github.com/naver/splade | | MRR@10 (MS MARCO dev) | R@1000 (MS MARCO dev) | | --- | --- | --- | | `splade-cocondenser-selfdistil` | 37.6 | 98.4 | ## Citation If you use our checkpoint, please cite our work: ``` @misc{https://doi.org/10.48550/arxiv.2205.04733, doi = {10.48550/ARXIV.2205.04733}, url = {https://arxiv.org/abs/2205.04733}, author = {Formal, Thibault and Lassance, Carlos and Piwowarski, Benjamin and Clinchant, Stéphane}, keywords = {Information Retrieval (cs.IR), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
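For readers who want to try the checkpoint directly, below is a minimal sketch of SPLADE-style encoding with plain `transformers`: max-pooling of a log-saturated ReLU over the MLM logits, following the formulation in the paper. This is an illustrative reimplementation, not the official pipeline; see the repository above for the supported code.

```python
# Minimal sketch of SPLADE-style sparse encoding with this checkpoint.
# Pooling (max over tokens of log(1 + ReLU(logits))) follows the SPLADE papers;
# for the exact, supported pipeline see https://github.com/naver/splade.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "naver/splade-cocondenser-selfdistil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
model.eval()

def encode(text: str) -> torch.Tensor:
    tokens = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**tokens).logits  # (1, seq_len, vocab_size)
    # log-saturated ReLU, masked by attention, max-pooled over the sequence
    weights = torch.log1p(torch.relu(logits)) * tokens["attention_mask"].unsqueeze(-1)
    return weights.max(dim=1).values.squeeze(0)  # sparse vector over the vocabulary

query_vec = encode("what causes aurora borealis")
doc_vec = encode("The aurora is caused by charged solar particles hitting the atmosphere.")
print("score:", torch.dot(query_vec, doc_vec).item())
print("non-zero terms in query:", int((query_vec > 0).sum()))
```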
allenai/OLMo-1B-hf
allenai
"2024-05-08T20:23:31Z"
9,979
10
transformers
[ "transformers", "safetensors", "olmo", "text-generation", "en", "dataset:allenai/dolma", "arxiv:2402.00838", "arxiv:2302.13971", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-12T18:13:34Z"
--- license: apache-2.0 datasets: - allenai/dolma language: - en --- <img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for OLMo 1B <!-- Provide a quick summary of what the model is/does. --> OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models. The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset. We release all code, checkpoints, logs (coming soon), and details involved in training these models. This model has been converted from [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) for the Hugging Face Transformers format. ## Model Details The core models released in this batch are the following: | Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length | |------|--------|---------|-------------|-----------------|----------------| | [OLMo 1B](https://huggingface.co/allenai/OLMo-1B-hf) | 3 Trillion |16 | 2048 | 16 | 2048 | | [OLMo 7B](https://huggingface.co/allenai/OLMo-7B-hf) | 2.5 Trillion | 32 | 4096 | 32 | 2048 | | [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T-hf) | 2 Trillion | 32 | 4096 | 32 | 2048 | We are releasing many checkpoints for these models, for every 1000 training steps. These have not yet been converted into Hugging Face Transformers format, but are available in [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B). ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Allen Institute for AI (AI2) - **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW - **Model type:** a Transformer style autoregressive language model. - **Language(s) (NLP):** English - **License:** The code and model are released under Apache 2.0. - **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org` - **Date cutoff:** Feb./March 2023 based on Dolma dataset version. ### Model Sources <!-- Provide the basic links for the model. --> - **Project Page:** https://allenai.org/olmo - **Repositories:** - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo - Evaluation code: https://github.com/allenai/OLMo-Eval - Further fine-tuning code: https://github.com/allenai/open-instruct - **Paper:** [Link](https://arxiv.org/abs/2402.00838) - **Technical blog post:** https://blog.allenai.org/olmo-open-language-model-87ccfc95f580 - **W&B Logs:** https://wandb.ai/ai2-llm/OLMo-1B/reports/OLMo-1B--Vmlldzo2NzY1Njk1 <!-- - **Press release:** TODO --> ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. 
--> ### Inference Quickly get inference running with the following: ```python from transformers import AutoModelForCausalLM, AutoTokenizer olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf") tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf") message = ["Language modeling is "] inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False) # optional verifying cuda # inputs = {k: v.to('cuda') for k,v in inputs.items()} # olmo = olmo.to('cuda') response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95) print(tokenizer.batch_decode(response, skip_special_tokens=True)[0]) >> 'Language modeling is the first step to build natural language generation...' ``` Alternatively, with the pipeline abstraction: ```python from transformers import pipeline olmo_pipe = pipeline("text-generation", model="allenai/OLMo-1B-hf") print(olmo_pipe("Language modeling is ")) >> 'Language modeling is a branch of natural language processing that aims to...' ``` Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`). The quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues. ### Fine-tuning This model does not directly support our fine-tuning processes. Model fine-tuning can be done from the final checkpoint or many intermediate checkpoints of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B). ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> Core model results for the 7B model are found below. | | [Llama 7B](https://arxiv.org/abs/2302.13971) | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | [MPT 7B](https://huggingface.co/mosaicml/mpt-7b) | **OLMo 7B** (ours) | | --------------------------------- | -------- | ---------- | --------- | ------ | ------- | | arc_challenge | 44.5 | 39.8 | 47.5 | 46.5 | 48.5 | | arc_easy | 57.0 | 57.7 | 70.4 | 70.5 | 65.4 | | boolq | 73.1 | 73.5 | 74.6 | 74.2 | 73.4 | | copa | 85.0 | 87.0 | 86.0 | 85.0 | 90 | | hellaswag | 74.5 | 74.5 | 75.9 | 77.6 | 76.4 | | openbookqa | 49.8 | 48.4 | 53.0 | 48.6 | 50.2 | | piqa | 76.3 | 76.4 | 78.5 | 77.3 | 78.4 | | sciq | 89.5 | 90.8 | 93.9 | 93.7 | 93.8 | | winogrande | 68.2 | 67.3 | 68.9 | 69.9 | 67.9 | | **Core tasks average** | 68.7 | 68.4 | 72.1 | 71.5 | 71.6 | | truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33 | 36.0 | | MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 | | GSM8k (mixed eval.) 
| 10.0 (8shot CoT) | 12.0 (8shot CoT) | 4.0 (5 shot) | 4.5 (5 shot) | 8.5 (8shot CoT) | | **Full average** | 57.8 | 59.3 | 59.2 | 59.3 | 59.8 | And for the 1B model: | task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | **OLMo 1B** (ours) | | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ | ----------------- | --------- | -------------------------------------- | ------- | | arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 | | arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 | | boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 | | copa | 50 | 84 | 72 | 78 | 79 | | hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 | | openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 | | piqa | 50 | 74 | 69.1 | 71.1 | 73.7 | | sciq | 25 | 94.7 | 86 | 90.5 | 88.1 | | winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 | | Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 | \*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging. ## Model Details ### Data For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation. ### Architecture OLMo 7B architecture with peer models for comparison. | | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B | |------------------------|-------------------|---------------------|--------------------|--------------------|------------------| | d_model | 4096 | 4096 | 4096 | 4544 | 4096 | | num heads | 32 | 32 | 32 | 71 | 16 | | num layers | 32 | 32 | 32 | 32 | 32 | | MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 | | LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN | | pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE | | attention variant | full | GQA | full | MQA | MQA | | biases | none | none | in LN only | in LN only | none | | block type | sequential | sequential | sequential | parallel | parallel | | activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU | | sequence length | 2048 | 4096 | 2048 | 2048 | 2048 | | batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 | | batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M | | weight tying | no | no | no | no | yes | ### Hyperparameters AdamW optimizer parameters are shown below. | Size | Peak LR | Betas | Epsilon | Weight Decay | |------|------------|-----------------|-------------|--------------| | 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 | | 7B | 3.0E-4 | (0.9, 0.99) | 1.0E-5 | 0.1 | Optimizer settings comparison with peer models. 
| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | |-----------------------|------------------|---------------------|--------------------|--------------------| | warmup steps | 5000 | 2000 | 2000 | 1000 | | peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 | | minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 | | weight decay | 0.1 | 0.1 | 0.1 | 0.1 | | beta1 | 0.9 | 0.9 | 0.9 | 0.99 | | beta2 | 0.95 | 0.95 | 0.95 | 0.999 | | epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 | | LR schedule | linear | cosine | cosine | cosine | | gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 | | gradient reduce dtype | FP32 | FP32 | FP32 | BF16 | | optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 | ## Environmental Impact OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML. A summary of the environmental impact. Further details are available in the paper. | | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg CO₂e/KWh) | Carbon Emissions (tCO₂eq) | |-----------|------------|-----------------------------|--------------------------------|---------------------------| | OLMo 7B Twin | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu)) | 135 MWh | 0* | 0* | | OLMo 7B | A100-40GB ([MosaicML](https://www.mosaicml.com)) | 104 MWh | 0.656 | 75.05 | ## Bias, Risks, and Limitations Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content. Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology. Otherwise, many facts from OLMo or any LLM will often not be true, so they should be checked. ## Citation **BibTeX:** ``` @article{Groeneveld2023OLMo, title={OLMo: Accelerating the Science of Language Models}, author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh}, journal={Preprint}, year={2024} } ``` **APA:** Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint. 
## Model Card Contact For errors in this model card, contact Nathan, Akshita or Shane, `{nathanl, akshitab, shanea} at allenai dot org`.
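The quantized-loading note in the Inference section above can be spelled out end to end as follows. This is a sketch assuming a CUDA device and an installed `bitsandbytes`, with illustrative generation settings; the loading call mirrors the one quoted in the card.

```python
# Minimal sketch of the quantized-loading note from the Inference section above.
# Assumes a CUDA device and that `bitsandbytes` is installed; parameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-1B-hf", torch_dtype=torch.float16, load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")

inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
# As noted above, the quantized model is picky about input placement: pass input_ids on CUDA.
response = olmo.generate(
    inputs.input_ids.to("cuda"), max_new_tokens=64, do_sample=True, top_k=50, top_p=0.95
)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```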
mradermacher/EssplgptLlama2-7b-GGUF
mradermacher
"2024-06-26T20:32:18Z"
9,977
0
transformers
[ "transformers", "gguf", "en", "base_model:sunilswain/EssplgptLlama2-7b", "license:mit", "endpoints_compatible", "region:us" ]
null
"2024-06-20T18:16:32Z"
--- base_model: sunilswain/EssplgptLlama2-7b language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/sunilswain/EssplgptLlama2-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.IQ3_XS.gguf) | IQ3_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.Q3_K_S.gguf) | Q3_K_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.IQ3_M.gguf) | IQ3_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.IQ4_XS.gguf) | IQ4_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.Q5_K_M.gguf) | Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.Q6_K.gguf) | Q6_K | 5.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/EssplgptLlama2-7b-GGUF/resolve/main/EssplgptLlama2-7b.f16.gguf) | f16 | 13.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
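The card above defers usage details to external READMEs; as one concrete option, the sketch below loads one of the listed quants with `llama-cpp-python`. The package choice, the Q4_K_M file, and all parameters are assumptions for illustration, not part of the card.

```python
# Minimal sketch: running one of the GGUF files listed above with llama-cpp-python.
# The package choice, the Q4_K_M filename, and all parameters are illustrative assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/EssplgptLlama2-7b-GGUF",
    filename="EssplgptLlama2-7b.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```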
dbmdz/bert-base-turkish-128k-uncased
dbmdz
"2021-05-19T15:13:16Z"
9,971
14
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "tr", "license:mit", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- language: tr license: mit --- # 🤗 + 📚 dbmdz Turkish BERT model In this repository, the MDZ Digital Library team (dbmdz) at the Bavarian State Library open-sources an uncased BERT model for Turkish 🎉 # 🇹🇷 BERTurk BERTurk is a community-driven uncased BERT model for Turkish. Some of the datasets used for pretraining and evaluation were contributed by the awesome Turkish NLP community, which also chose the model name: BERTurk. ## Stats The current version of the model is trained on a filtered and sentence-segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/), a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/). The final training corpus has a size of 35GB and 4,404,976,662 tokens. Thanks to Google's TensorFlow Research Cloud (TFRC), we were able to train this uncased model on a TPU v3-8 for 2M steps. For this model, we use a vocabulary size of 128k. ## Model weights Currently, only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | -------------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-turkish-128k-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/vocab.txt) ## Usage With Transformers >= 2.3, our BERTurk uncased model can be loaded as follows: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-128k-uncased") model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-128k-uncased") ``` ## Results For results on PoS tagging or NER tasks, please refer to [this repository](https://github.com/stefan-it/turkish-bert). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models, just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us with additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us with the Turkish NER dataset for evaluation. Research was supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
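As a quick complement to the loading snippet above, a fill-mask check works out of the box for this masked language model; the lowercase Turkish example sentence is only illustrative.

```python
# Quick fill-mask sanity check for the uncased BERTurk model (example sentence is illustrative).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-turkish-128k-uncased")
for prediction in fill_mask("türkiye'nin başkenti [MASK] şehridir."):
    print(prediction["token_str"], round(prediction["score"], 3))
```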
EPFL-VILAB/4M-7_B_CC12M
EPFL-VILAB
"2024-06-14T08:30:55Z"
9,967
15
ml-4m
[ "ml-4m", "safetensors", "arxiv:2312.06647", "arxiv:2406.09406", "license:other", "region:us" ]
null
"2024-03-25T14:20:29Z"
--- license: other license_name: sample-code-license license_link: LICENSE library_name: ml-4m --- # 4M: Massively Multimodal Masked Modeling *A framework for training any-to-any multimodal foundation models. <br>Scalable. Open-sourced. Across tens of modalities and tasks.* [`Website`](https://4m.epfl.ch) | [`GitHub`](https://github.com/apple/ml-4m) | [`BibTeX`](#citation) Official implementation and pre-trained models for : [**4M: Massively Multimodal Masked Modeling**](https://arxiv.org/abs/2312.06647), NeurIPS 2023 (Spotlight) <br> *[David Mizrahi](https://dmizrahi.com/)\*, [Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/), [Teresa Yeo](https://aserety.github.io/), [Mingfei Gao](https://fly6464.github.io/), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)* [**4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities**](https://arxiv.org/abs/2406.09406), arXiv 2024 <br> *[Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/)\*, [David Mizrahi](https://dmizrahi.com/)\*, [Ali Garjani](https://garjania.github.io/), [Mingfei Gao](https://fly6464.github.io/), [David Griffiths](https://www.dgriffiths.uk/), [Jiaming Hu](https://scholar.google.com/citations?user=vm3imKsAAAAJ&hl=en), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)* 4M is a framework for training "any-to-any" foundation models, using tokenization and masking to scale to many diverse modalities. Models trained using 4M can perform a wide range of vision tasks, transfer well to unseen tasks and modalities, and are flexible and steerable multimodal generative models. We are releasing code and models for "4M: Massively Multimodal Masked Modeling" (here denoted 4M-7), as well as "4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities" (here denoted 4M-21). ## Installation For install instructions, please see https://github.com/apple/ml-4m. ## Usage This model can be loaded from Hugging Face Hub as follows: ```python from fourm.models.fm import FM fm = FM.from_pretrained('EPFL-VILAB/4M-7_B_CC12M') ``` Please see https://github.com/apple/ml-4m/blob/main/README_GENERATION.md for more detailed instructions and https://github.com/apple/ml-4m for other 4M model and tokenizer checkpoints. ## Citation If you find this repository helpful, please consider citing our work: ``` @inproceedings{4m, title={{4M}: Massively Multimodal Masked Modeling}, author={David Mizrahi and Roman Bachmann and O{\u{g}}uzhan Fatih Kar and Teresa Yeo and Mingfei Gao and Afshin Dehghan and Amir Zamir}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, } @article{4m21, title={{4M-21}: An Any-to-Any Vision Model for Tens of Tasks and Modalities}, author={Roman Bachmann and O{\u{g}}uzhan Fatih Kar and David Mizrahi and Ali Garjani and Mingfei Gao and David Griffiths and Jiaming Hu and Afshin Dehghan and Amir Zamir}, journal={arXiv 2024}, year={2024}, } ``` ## License The model weights in this repository are released under the Sample Code license as found in the [LICENSE](LICENSE) file.
mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF
mradermacher
"2024-06-25T14:05:54Z"
9,929
0
transformers
[ "transformers", "gguf", "en", "base_model:Lumpen1/MadWizard-SFT-v4-Mistral-7b-v0.3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-25T13:39:55Z"
--- base_model: Lumpen1/MadWizard-SFT-v4-Mistral-7b-v0.3 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Lumpen1/MadWizard-SFT-v4-Mistral-7b-v0.3 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/MadWizard-SFT-v4-Mistral-7b-v0.3-GGUF/resolve/main/MadWizard-SFT-v4-Mistral-7b-v0.3.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
kssteven/ibert-roberta-base
kssteven
"2021-11-22T10:09:32Z"
9,926
1
transformers
[ "transformers", "pytorch", "ibert", "fill-mask", "arxiv:1907.11692", "arxiv:2101.01321", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
# I-BERT base model This model, `ibert-roberta-base`, is an integer-only quantized version of [RoBERTa](https://arxiv.org/abs/1907.11692), and was introduced in [this paper](https://arxiv.org/abs/2101.01321). I-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic. In particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations. This can result in upto 4x inference speed up as compared to floating point counterpart when tested on an Nvidia T4 GPU. The best model parameters searched via quantization-aware finetuning can be then exported (e.g., to TensorRT) for integer-only deployment of the model. ## Finetuning Procedure Finetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model. ### Full-precision finetuning Full-precision finetuning of I-BERT is similar to RoBERTa finetuning. For instance, you can run the following command to finetune on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task. ``` python examples/text-classification/run_glue.py \ --model_name_or_path kssteven/ibert-roberta-base \ --task_name MRPC \ --do_eval \ --do_train \ --evaluation_strategy epoch \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --save_steps 115 \ --learning_rate 2e-5 \ --num_train_epochs 10 \ --output_dir $OUTPUT_DIR ``` ### Model Quantization Once you are done with full-precision finetuning, open up `config.json` in your checkpoint directory and set the `quantize` attribute as `true`. ``` { "_name_or_path": "kssteven/ibert-roberta-base", "architectures": [ "IBertForSequenceClassification" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, "finetuning_task": "mrpc", "force_dequant": "none", "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "ibert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "position_embedding_type": "absolute", "quant_mode": true, "tokenizer_class": "RobertaTokenizer", "transformers_version": "4.4.0.dev0", "type_vocab_size": 1, "vocab_size": 50265 } ``` Then, your model will automatically run as the integer-only mode when you load the checkpoint. Also, make sure to delete `optimizer.pt`, `scheduler.pt` and `trainer_state.json` in the same directory. Otherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning. ### Integer-only finetuning (Quantization-aware training) Finally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified. Note that the only difference in the example command below is `model_name_or_path`. ``` python examples/text-classification/run_glue.py \ --model_name_or_path $CHECKPOINT_DIR --task_name MRPC \ --do_eval \ --do_train \ --evaluation_strategy epoch \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --save_steps 115 \ --learning_rate 1e-6 \ --num_train_epochs 10 \ --output_dir $OUTPUT_DIR ``` ## Citation info If you use I-BERT, please cite [our papaer](https://arxiv.org/abs/2101.01321). 
``` @article{kim2021bert, title={I-BERT: Integer-only BERT Quantization}, author={Kim, Sehoon and Gholami, Amir and Yao, Zhewei and Mahoney, Michael W and Keutzer, Kurt}, journal={arXiv preprint arXiv:2101.01321}, year={2021} } ```
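The "Model Quantization" step above is described as hand-editing `config.json`; the same switch can be flipped in code. Below is a sketch, assuming `CHECKPOINT_DIR` holds the full-precision finetuned checkpoint from stage (1) and using the `quant_mode` key shown in the example config above.

```python
# Sketch of the "Model Quantization" step done in code instead of editing config.json by hand.
# CHECKPOINT_DIR is a placeholder for the full-precision finetuned checkpoint from stage (1).
from transformers import AutoConfig, AutoModelForSequenceClassification

CHECKPOINT_DIR = "path/to/full-precision-finetuned-checkpoint"

config = AutoConfig.from_pretrained(CHECKPOINT_DIR)
config.quant_mode = True  # same effect as setting "quant_mode": true in config.json
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT_DIR, config=config)
model.save_pretrained(CHECKPOINT_DIR)  # integer-only mode is picked up on the next load
# As noted above, also delete optimizer.pt, scheduler.pt and trainer_state.json
# before launching the integer-only finetuning run.
```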
McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp
McGill-NLP
"2024-05-21T22:00:57Z"
9,914
4
transformers
[ "transformers", "safetensors", "llama", "feature-extraction", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "custom_code", "en", "arxiv:2404.05961", "license:mit", "text-generation-inference", "region:us" ]
sentence-similarity
"2024-04-04T13:53:05Z"
--- library_name: transformers license: mit language: - en pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - text-reranking - feature-extraction - sentence-similarity - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb --- # LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders > LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance. - **Repository:** https://github.com/McGill-NLP/llm2vec - **Paper:** https://arxiv.org/abs/2404.05961 ## Installation ```bash pip install llm2vec ``` ## Usage ```python from llm2vec import LLM2Vec import torch from transformers import AutoTokenizer, AutoModel, AutoConfig from peft import PeftModel # Loading base Mistral model, along with custom code that enables bidirectional connections in decoder-only LLMs. tokenizer = AutoTokenizer.from_pretrained( "McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp" ) config = AutoConfig.from_pretrained( "McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp", trust_remote_code=True ) model = AutoModel.from_pretrained( "McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp", trust_remote_code=True, config=config, torch_dtype=torch.bfloat16, device_map="cuda" if torch.cuda.is_available() else "cpu", ) # Loading MNTP (Masked Next Token Prediction) model. model = PeftModel.from_pretrained( model, "McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp", ) # Wrapper for encoding and pooling operations l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512) # Encoding queries using instructions instruction = ( "Given a web search query, retrieve relevant passages that answer the query:" ) queries = [ [instruction, "how much protein should a female eat"], [instruction, "summit define"], ] q_reps = l2v.encode(queries) # Encoding documents. Instruction are not required for documents documents = [ "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.", ] d_reps = l2v.encode(documents) # Compute cosine similarity q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1) d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1) cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1)) print(cos_sim) """ tensor([[0.8180, 0.5825], [0.1069, 0.1931]]) """ ``` ## Questions If you have any question about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`).
SmilingWolf/wd-v1-4-swinv2-tagger-v2
SmilingWolf
"2024-05-15T17:57:28Z"
9,909
65
keras
[ "keras", "onnx", "safetensors", "license:apache-2.0", "region:us" ]
null
"2023-01-21T11:58:41Z"
--- license: apache-2.0 --- # WD 1.4 SwinV2 Tagger V2 Supports ratings, characters and general tags. Trained using https://github.com/SmilingWolf/SW-CV-ModelZoo. TPUs used for training kindly provided by the [TRC program](https://sites.research.google/trc/about/). ## Dataset Last image id: 5944504 Trained on Danbooru images with IDs modulo 0000-0899. Validated on images with IDs modulo 0950-0999. Images with less than 10 general tags were filtered out. Tags with less than 600 images were filtered out. ## Validation results `v2.0: P=R: threshold = 0.3771, F1 = 0.6854` ## What's new Model v2.1/Dataset v2: Re-exported to work around an ONNXRuntime v1.17.1 bug. Bumped the minimum ONNXRuntime version to `>= 1.17.0`. Now `timm` compatible! Load it up and give it a spin using the canonical one-liner! Exported to `msgpack` for compatibility with the [JAX-CV](https://github.com/SmilingWolf/JAX-CV) codebase. The batch dimension of the ONNX model is not fixed to 1 anymore. Now you can go crazy with batch inference. No change to the trained weights themselves. There might be small prediction discrepancies across frameworks due to implementation details. Model v2.0/Dataset v2: Initial release. # Runtime deps ONNX model requires `onnxruntime >= 1.17.0` ## Final words Subject to change and updates. Downstream users are encouraged to use tagged releases rather than relying on the head of the repo.
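Since the card highlights that the ONNX batch dimension is no longer fixed to 1 but does not include a snippet, here is a minimal batched-inference sketch with `onnxruntime`. The `model.onnx` filename, the 448x448 NHWC dummy input, and the lack of real preprocessing are assumptions for illustration only.

```python
# Minimal sketch of batched ONNX inference (see the "batch dimension is not fixed to 1" note above).
# The file name, input size and dummy data are illustrative; real use needs proper image preprocessing.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download

model_path = hf_hub_download("SmilingWolf/wd-v1-4-swinv2-tagger-v2", "model.onnx")
session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

input_meta = session.get_inputs()[0]
print("input:", input_meta.name, input_meta.shape)  # batch dimension should report as dynamic

# Dummy batch of 4 "images" (NHWC, float32) just to exercise batched inference.
batch = np.zeros((4, 448, 448, 3), dtype=np.float32)
(probs,) = session.run(None, {input_meta.name: batch})
print("output shape:", probs.shape)  # (4, number_of_tags)
```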
arpanghoshal/EmoRoBERTa
arpanghoshal
"2024-06-14T04:15:12Z"
9,906
95
transformers
[ "transformers", "tf", "roberta", "text-classification", "tensorflow", "en", "dataset:go_emotions", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
--- language: en tags: - text-classification - tensorflow - roberta datasets: - go_emotions license: mit --- Contributors: - Rohan Kamath [linkedin.com/in/rohanrkamath](https://www.linkedin.com/in/rohanrkamath/) - Arpan Ghoshal [linkedin.com/in/arpanghoshal](https://www.linkedin.com/in/arpanghoshal) ## What is GoEmotions GoEmotions is a dataset of 58,000 Reddit comments labelled with 28 emotion categories: admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise, and neutral. ## What is RoBERTa RoBERTa builds on BERT’s language masking strategy and modifies key hyperparameters in BERT, including removing BERT’s next-sentence pretraining objective, and training with much larger mini-batches and learning rates. RoBERTa was also trained on an order of magnitude more data than BERT, and for a longer amount of time. This allows RoBERTa representations to generalize even better to downstream tasks compared to BERT. ## Hyperparameters | Parameter | Value | | ----------------- | :---: | | Learning rate | 5e-5 | | Epochs | 10 | | Max Seq Length | 50 | | Batch size | 16 | | Warmup Proportion | 0.1 | | Epsilon | 1e-8 | ## Results Best result: `Macro F1` of 49.30% ## Usage ```python from transformers import RobertaTokenizerFast, TFRobertaForSequenceClassification, pipeline tokenizer = RobertaTokenizerFast.from_pretrained("arpanghoshal/EmoRoBERTa") model = TFRobertaForSequenceClassification.from_pretrained("arpanghoshal/EmoRoBERTa") # reuse the loaded model and tokenizer in the pipeline emotion = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) emotion_labels = emotion("Thanks for using it.") print(emotion_labels) ``` Output ``` [{'label': 'gratitude', 'score': 0.9964383244514465}] ```
Helsinki-NLP/opus-mt-et-en
Helsinki-NLP
"2023-08-16T11:33:57Z"
9,902
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "et", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- tags: - translation license: apache-2.0 --- ### opus-mt-et-en * source languages: et * target languages: en * OPUS readme: [et-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/et-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/et-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2018-enet.et.en | 30.1 | 0.574 | | newstest2018-enet.et.en | 30.3 | 0.581 | | Tatoeba.et.en | 59.9 | 0.738 |
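The card lists benchmark scores but no usage snippet; below is a minimal translation sketch with the `transformers` pipeline. The Estonian example sentence is illustrative.

```python
# Minimal sketch: Estonian-to-English translation with this checkpoint (example sentence is illustrative).
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-et-en")
print(translator("Tere, kuidas sul läheb?")[0]["translation_text"])
```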
ibm-granite/granite-3b-code-base
ibm-granite
"2024-05-19T23:20:53Z"
9,890
32
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "granite", "dataset:codeparrot/github-code-clean", "dataset:bigcode/starcoderdata", "dataset:open-web-math/open-web-math", "dataset:math-ai/StackMathQA", "arxiv:2405.04324", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-23T03:59:34Z"
--- pipeline_tag: text-generation inference: false license: apache-2.0 datasets: - codeparrot/github-code-clean - bigcode/starcoderdata # - Stackexchange # - CommonCrawl - open-web-math/open-web-math - math-ai/StackMathQA # - Arxiv # - Wikipedia # - conceptofmind/FLAN_2022 # Original link is broken, we used IBM's filtered version metrics: - code_eval library_name: transformers tags: - code - granite model-index: - name: granite-3b-code-base results: - task: type: text-generation dataset: type: mbpp name: MBPP metrics: - name: pass@1 type: pass@1 value: 36.0 veriefied: false - task: type: text-generation dataset: type: evalplus/mbppplus name: MBPP+ metrics: - name: pass@1 type: pass@1 value: 45.1 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(Python) metrics: - name: pass@1 type: pass@1 value: 36.6 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(JavaScript) metrics: - name: pass@1 type: pass@1 value: 37.2 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(Java) metrics: - name: pass@1 type: pass@1 value: 40.9 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(Go) metrics: - name: pass@1 type: pass@1 value: 26.2 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(C++) metrics: - name: pass@1 type: pass@1 value: 35.4 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(Rust) metrics: - name: pass@1 type: pass@1 value: 22.0 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(Python) metrics: - name: pass@1 type: pass@1 value: 25.0 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(JavaScript) metrics: - name: pass@1 type: pass@1 value: 18.9 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(Java) metrics: - name: pass@1 type: pass@1 value: 29.9 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(Go) metrics: - name: pass@1 type: pass@1 value: 17.1 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(C++) metrics: - name: pass@1 type: pass@1 value: 26.8 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(Rust) metrics: - name: pass@1 type: pass@1 value: 14.0 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(Python) metrics: - name: pass@1 type: pass@1 value: 18.3 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(JavaScript) metrics: - name: pass@1 type: pass@1 value: 23.2 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(Java) metrics: - name: pass@1 type: pass@1 value: 29.9 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(Go) metrics: - name: pass@1 type: pass@1 value: 24.4 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(C++) metrics: - name: pass@1 type: pass@1 value: 16.5 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack 
name: HumanEvalFix(Rust) metrics: - name: pass@1 type: pass@1 value: 3.7 verified: false --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png) # Granite-3B-Code-Base ## Model Summary **Granite-3B-Code-Base** is a decoder-only code model designed for code generative tasks (e.g., code generation, code explanation, code fixing, etc.). It is trained from scratch with a two-phase training strategy. In phase 1, our model is trained on 4 trillion tokens sourced from 116 programming languages, ensuring a comprehensive understanding of programming languages and syntax. In phase 2, our model is trained on 500 billion tokens with a carefully designed mixture of high-quality data from code and natural language domains to improve the models’ ability to reason and follow instructions. - **Developers:** IBM Research - **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models) - **Paper:** [Granite Code Models: A Family of Open Foundation Models for Code Intelligence](https://arxiv.org/abs/2405.04324) - **Release Date**: May 6th, 2024 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0). ## Usage > [!WARNING] > **You need to build transformers from source to use this model correctly.** > Relevant PR: https://github.com/huggingface/transformers/pull/30031 > ```shell > git clone https://github.com/huggingface/transformers > cd transformers/ > pip install ./ > cd .. > ``` ### Intended use Prominent enterprise use cases of LLMs in software engineering productivity include code generation, code explanation, code fixing, generating unit tests, generating documentation, addressing technical debt issues, vulnerability detection, code translation, and more. All Granite Code Base models, including the **3B parameter model**, are able to handle these tasks as they were trained on a large amount of code data from 116 programming languages. ### Generation This is a simple example of how to use the **Granite-3B-Code-Base** model. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # or "cpu" model_path = "ibm-granite/granite-3b-code-base" tokenizer = AutoTokenizer.from_pretrained(model_path) # drop device_map if running on CPU model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device) model.eval() # change input text as desired input_text = "def generate():" # tokenize the text input_tokens = tokenizer(input_text, return_tensors="pt") # transfer tokenized inputs to the device for i in input_tokens: input_tokens[i] = input_tokens[i].to(device) # generate output tokens output = model.generate(**input_tokens) # decode output tokens into text output = tokenizer.batch_decode(output) # loop over the batch to print, in this example the batch size is 1 for i in output: print(i) ``` ## Training Data - **Data Collection and Filtering:** Pretraining code data is sourced from a combination of publicly available datasets (e.g., [GitHub Code Clean](https://huggingface.co/datasets/codeparrot/github-code-clean), [Starcoder data](https://huggingface.co/datasets/bigcode/starcoderdata)), and additional public code repositories and issues from GitHub. We filter raw data to retain a list of 116 programming languages. After language filtering, we also filter out low-quality code. 
- **Exact and Fuzzy Deduplication:** We adopt an aggressive deduplication strategy that includes both exact and fuzzy deduplication to remove documents having (near) identical code content. - **HAP, PII, Malware Filtering:** We apply a HAP content filter that reduces models' likelihood of generating hateful, abusive, or profane language. We also make sure to redact Personally Identifiable Information (PII) by replacing PII content (e.g., names, email addresses, keys, passwords) with corresponding tokens (e.g., ⟨NAME⟩, ⟨EMAIL⟩, ⟨KEY⟩, ⟨PASSWORD⟩). Moreover, we scan all datasets using [ClamAV](https://www.clamav.net/) to identify and remove instances of malware in the source code. - **Natural Language Datasets:** In addition to collecting code data for model training, we curate several publicly available high-quality natural language datasets to improve models' proficiency in language understanding and mathematical reasoning. Unlike the code data, we do not deduplicate these datasets. ## Infrastructure We train the Granite Code models using two of IBM's supercomputing clusters, Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs, respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs. ## Ethical Considerations and Limitations The use of Large Language Models involves risks and ethical considerations people must be aware of. Regarding code generation, caution is urged against complete reliance on specific code models for crucial decisions or impactful information, as the generated code is not guaranteed to work as intended. The **Granite-3B-Code-Base** model is no exception in this regard. Even though this model is suited for multiple code-related tasks, it has not undergone any safety alignment, and it may therefore produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying source code verbatim from the training dataset due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the **Granite-3B-Code-Base** model with ethical intentions and in a responsible way. 
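The deduplication step above is described only at a high level; the actual pipeline is not published in this card. As a rough, hedged illustration of what fuzzy deduplication means in practice, the sketch below fingerprints two source files with token shingles and compares them by Jaccard similarity. The function names and the 0.85 threshold are illustrative assumptions, not IBM's implementation; a production pipeline at this scale would typically use MinHash signatures with locality-sensitive hashing instead of pairwise comparison.

```python
import re

def shingles(code: str, n: int = 5) -> set:
    """Build n-token shingles as a cheap fingerprint of a source file."""
    tokens = re.findall(r"\w+", code.lower())
    if len(tokens) < n:
        return {" ".join(tokens)}
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets (1.0 = identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_near_duplicate(doc_a: str, doc_b: str, threshold: float = 0.85) -> bool:
    """Exact duplicates score 1.0; near-duplicates land just below it."""
    return jaccard(shingles(doc_a), shingles(doc_b)) >= threshold
```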
Linaqruf/animagine-xl
Linaqruf
"2023-08-21T07:06:06Z"
9,887
290
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "stable-diffusion-xl", "text-to-image", "en", "dataset:Linaqruf/animagine-datasets", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2023-08-04T08:55:23Z"
--- license: openrail++ language: - en pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - stable-diffusion-xl inference: parameter: negative_prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry widget: - text: >- face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck example_title: example 1girl - text: >- face focus, bishounen, masterpiece, best quality, 1boy, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck example_title: example 1boy library_name: diffusers datasets: - Linaqruf/animagine-datasets --- <style> .title-container { display: flex; justify-content: center; align-items: center; height: 100vh; /* Adjust this value to position the title vertically */ } .title { font-size: 3em; text-align: center; color: #333; font-family: 'Helvetica Neue', sans-serif; text-transform: uppercase; letter-spacing: 0.1em; padding: 0.5em 0; background: transparent; } .title span { background: -webkit-linear-gradient(45deg, #7ed56f, #28b485); -webkit-background-clip: text; -webkit-text-fill-color: transparent; } .custom-table { table-layout: fixed; width: 100%; border-collapse: collapse; margin-top: 2em; } .custom-table td { width: 50%; vertical-align: top; padding: 10px; box-shadow: 0px 0px 10px 0px rgba(0,0,0,0.15); } .custom-image { width: 100%; height: auto; object-fit: cover; border-radius: 10px; transition: transform .2s; margin-bottom: 1em; } .custom-image:hover { transform: scale(1.05); } </style> <h1 class="title"><span>Animagine XL</span></h1> <table class="custom-table"> <tr> <td> <a href="https://huggingface.co/Linaqruf/animagine-xl/blob/main/sample_images/image1.png"> <img class="custom-image" src="https://huggingface.co/Linaqruf/animagine-xl/resolve/main/sample_images/image1.png" alt="sample1"> </a> <a href="https://huggingface.co/Linaqruf/animagine-xl/blob/main/sample_images/image4.png"> <img class="custom-image" src="https://huggingface.co/Linaqruf/animagine-xl/resolve/main/sample_images/image4.png" alt="sample3"> </a> </td> <td> <a href="https://huggingface.co/Linaqruf/animagine-xl/blob/main/sample_images/image2.png"> <img class="custom-image" src="https://huggingface.co/Linaqruf/animagine-xl/resolve/main/sample_images/image2.png" alt="sample2"> </a> <a href="https://huggingface.co/Linaqruf/animagine-xl/blob/main/sample_images/image3.png"> <img class="custom-image" src="https://huggingface.co/Linaqruf/animagine-xl/resolve/main/sample_images/image3.png" alt="sample4"> </a> </td> </tr> </table> <hr> ## Overview **Animagine XL** is a high-resolution, latent text-to-image diffusion model. The model has been fine-tuned using a learning rate of `4e-7` over 27000 global steps with a batch size of 16 on a curated dataset of superior-quality anime-style images. This model is derived from Stable Diffusion XL 1.0. - Use it with the [`Stable Diffusion Webui`](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - Use it with 🧨 [`diffusers`](https://huggingface.co/docs/diffusers/index) - Use it with the [`ComfyUI`](https://github.com/comfyanonymous/ComfyUI) **(recommended)** Like other anime-style Stable Diffusion models, it also supports Danbooru tags to generate images. e.g. 
_**face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck**_ <hr> ## Features 1. High-Resolution Images: The model was trained at 1024x1024 resolution. It was trained using the [NovelAI Aspect Ratio Bucketing Tool](https://github.com/NovelAI/novelai-aspect-ratio-bucketing) so that it can also be trained at non-square resolutions. 2. Anime-styled Generation: Based on given text prompts, the model can create high-quality anime-styled images. 3. Fine-Tuned Diffusion Process: The model utilizes a fine-tuned diffusion process to ensure high quality and unique image output. <hr> ## Model Details - **Developed by:** [Linaqruf](https://github.com/Linaqruf) - **Model type:** Diffusion-based text-to-image generative model - **Model Description:** This is a model that can be used to generate and modify high-quality anime-themed images based on text prompts. - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) - **Finetuned from model:** [Stable Diffusion XL 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) <hr> ## How to Use: - Download `Animagine XL` [here](https://huggingface.co/Linaqruf/animagine-xl/resolve/main/animagine-xl.safetensors); the model is in `.safetensors` format. - You need to use Danbooru-style tags as prompts instead of natural language, otherwise you will get realistic results instead of anime-style ones. - You can use any generic negative prompt or use the following suggested negative prompt to guide the model towards high-aesthetic generations: ``` lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry ``` - The following should also be prepended to prompts to get high-aesthetic results: ``` masterpiece, best quality ``` - Use this cheat sheet to find the best resolution (a small helper wrapping these values is sketched after the diffusers example below): ``` 768 x 1344: Vertical (9:16) 915 x 1144: Portrait (4:5) 1024 x 1024: Square (1:1) 1182 x 886: Photo (4:3) 1254 x 836: Landscape (3:2) 1365 x 768: Widescreen (16:9) 1564 x 670: Cinematic (21:9) ``` <hr> ## Gradio & Colab We also support a [Gradio](https://github.com/gradio-app/gradio) Web UI and Colab with Diffusers to run **Animagine XL**: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/Linaqruf/Animagine-XL) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https%3A//huggingface.co/Linaqruf/animagine-xl/blob/main/Animagine_XL_demo.ipynb) ## 🧨 Diffusers Make sure to upgrade diffusers to >= 0.18.2: ``` pip install diffusers --upgrade ``` In addition, make sure to install `transformers`, `safetensors`, `accelerate` as well as the invisible watermark: ``` pip install invisible_watermark transformers accelerate safetensors ``` Running the pipeline (if you don't swap the scheduler it will run with the default **EulerDiscreteScheduler**; in this example we are swapping it to **EulerAncestralDiscreteScheduler**): ```py import torch from torch import autocast from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler model = "Linaqruf/animagine-xl" pipe = 
StableDiffusionXLPipeline.from_pretrained( model, torch_dtype=torch.float16, use_safetensors=True, variant="fp16" ) pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) pipe.to('cuda') prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck" negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry" image = pipe( prompt, negative_prompt=negative_prompt, width=1024, height=1024, guidance_scale=12, target_size=(1024,1024), original_size=(4096,4096), num_inference_steps=50 ).images[0] image.save("anime_girl.png") ``` <hr> ## Limitation This model inherits the Stable Diffusion XL 1.0 [limitations](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0#limitations)
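As a usage note for the diffusers example above, the resolution cheat sheet from the "How to Use" section can be wrapped in a small helper so that `width` and `height` always match one of the suggested aspect ratios. This helper is a sketch added for illustration only; it is not part of the original card, and the dictionary keys are made-up labels.

```python
# Cheat-sheet resolutions from the card, keyed by illustrative labels.
ANIMAGINE_XL_RESOLUTIONS = {
    "vertical_9_16": (768, 1344),
    "portrait_4_5": (915, 1144),
    "square_1_1": (1024, 1024),
    "photo_4_3": (1182, 886),
    "landscape_3_2": (1254, 836),
    "widescreen_16_9": (1365, 768),
    "cinematic_21_9": (1564, 670),
}

def generate_at(pipe, prompt: str, negative_prompt: str, layout: str = "square_1_1"):
    """Run the pipeline built in the example above at one of the suggested
    resolutions, keeping the other sampling settings from the card."""
    width, height = ANIMAGINE_XL_RESOLUTIONS[layout]
    return pipe(
        prompt,
        negative_prompt=negative_prompt,
        width=width,
        height=height,
        guidance_scale=12,
        num_inference_steps=50,
    ).images[0]
```

For example, `generate_at(pipe, prompt, negative_prompt, "vertical_9_16")` renders a 768x1344 portrait-oriented image.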
failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
failspy
"2024-05-30T12:39:03Z"
9,886
35
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-20T03:38:40Z"
--- library_name: transformers license: llama3 --- # Llama-3-8B-Instruct-abliterated-v3 Model Card [My Jupyter "cookbook" to replicate the methodology can be found here, refined library coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) This is [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on the one described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to understand more. ## Hang on, "abliteration"? Orthogonalization? Ablation? What is this? TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request; it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original instruct model, just with the strongest refusal directions orthogonalized out. **TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.** As far as "abliteration": it's just a fun play-on-words using the original "ablation" term used in the original paper to refer to removing features, which I made up particularly to differentiate the model from "uncensored" fine-tunes. Ablate + obliterated = Abliterated. Anyways, orthogonalization and ablation both refer to the same thing here: the technique by which the refusal feature was "ablated" from the model was orthogonalization. ## A little more on the methodology, and why this is interesting To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt. Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights. > Why this over fine-tuning? Ablation is much more surgical in nature whilst also being effectively executed with a _lot_ less data than fine-tuning, which I think is its main advantage. Its most valuable aspect is that it keeps as much of the original model's knowledge and training intact, whilst removing its tendency to behave in one very specific undesirable manner. (In this case, refusing user requests.) Fine-tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques. It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune or vice-versa. I haven't really gotten around to exploring this model stacked with fine-tuning; I encourage others to give it a shot if they've got the capacity. > Okay, fine, but why V3? There's no V2 70B? Well, I released a V2 a while back for 8B under Cognitive Computations. It ended up not being worth it to try V2 with 70B; I wanted to refine the model before wasting compute cycles on what might not even be a better model. 
I am, however, quite pleased with this latest methodology; it seems to have induced fewer hallucinations. So to show that it's a newer, fancier methodology than even that of the 8B V2, I decided to do a Microsoft and double up on my version jump, because it's *such* an advancement (or so the excuse went; in actuality it was because too many legacy but actively used Microsoft libraries checked for 'Windows 9' in the OS name to detect Windows 95/98 as one). ## Quirkiness awareness notice This model may come with interesting quirks, with the methodology being so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what side effects this orthogonalization has. If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as-yet unexplored. Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord and I'm watching the Community tab; reach out! I'd love to see this methodology used in other ways, and would gladly support whoever I can, whenever I can.
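For readers who want a concrete picture of what "orthogonalizing a refusal direction out of the weights" means, the sketch below illustrates the general technique from the linked post: take the difference of mean residual-stream activations between prompts the model refused and prompts it complied with, then project that direction out of the matrices that write into the residual stream. This is a hedged illustration under stated assumptions, not the actual cookbook code; the module names in the trailing comment assume the Llama layout in `transformers` and differ for other architectures.

```python
import torch

def refusal_direction(refused_acts: torch.Tensor, complied_acts: torch.Tensor) -> torch.Tensor:
    """Difference-of-means direction between residual-stream activations
    collected on refused vs. complied prompts; both are [n_samples, d_model]."""
    direction = refused_acts.mean(dim=0) - complied_acts.mean(dim=0)
    return direction / direction.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project `direction` out of an output-projection matrix of shape
    [d_model, d_in]: replacing W with (I - d d^T) W means the layer can no
    longer write any component along `direction` into the residual stream."""
    d = direction / direction.norm()
    return weight - torch.outer(d, d @ weight)

# Illustrative application (assumed module names, Llama-style layout):
# refusal_dir = refusal_direction(refused_acts, complied_acts)
# for layer in model.model.layers:
#     layer.self_attn.o_proj.weight.data = orthogonalize(
#         layer.self_attn.o_proj.weight.data, refusal_dir)
#     layer.mlp.down_proj.weight.data = orthogonalize(
#         layer.mlp.down_proj.weight.data, refusal_dir)
```

The "augmentation" mentioned above would presumably be the same operation with the opposite sign, adding a component along a chosen direction rather than removing it.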
DavidAU/Command-R-01-200xq-Ultra-NEO-V1-35B-IMATRIX-GGUF
DavidAU
"2024-07-02T03:27:13Z"
9,884
4
null
[ "gguf", "story", "general usage", "x quant", "imatrix hybrid", "neo quant", "roleplay", "creative", "rp", "fantasy", "story telling", "ultra high precision", "en", "license:apache-2.0", "region:us" ]
null
"2024-07-01T06:19:24Z"
--- license: apache-2.0 language: - en tags: - story - general usage - x quant - imatrix hybrid - neo quant - roleplay - creative - rp - fantasy - story telling - ultra high precision --- <B>Command-R-01-Ultra-NEO-V1-35B "Upgrades" by DAVID_AU - X Quant 200.</B> This repo contains two "X quants" (series 200) of Command-R-01-Ultra-NEO-V1-35B, with the NEO Class tech applied. "X quants" are hybrids of standard imatrix quants, where the imatrix is selectively applied to the model vs a standard imatrix quant that has the entire imatrix process applied to the model. Series 200 is a model wide, partial filter. Example outputs below from "Neo non X quant" and "Neo X quant". The purpose of an "X quant" is to merge the best parts of the Imatrix process with the best parts of the "un imatrixed" model to bring out greater instruction following and output potential. Generally "X" quants "contrast" a number of model quality(ies) to a stronger effect than a standard Imatrix or a standard quant of a model giving you the best of both worlds at once. Not all "X" quants have positive effects, therefore only select "X" quants per model are uploaded after passing quality inspections. Currently there are 11 first generation X quant formulas with more coming. NEO Tech, and "X" quants were developed in house by DAVID_AU. The NEO Class tech was created after countless investigations and over 120 lab experiments backed by real world testing and qualitative results. "X" quants were developed after testing many "Imatrix" and "non-Imatrix" models and frankly wanting the best of both worlds in one quant/model. <b>NEO Class results: </b> Better overall function, instruction following, output quality and stronger connections to ideas, concepts and the world in general. The examples below illustrate differences with the IQ3_XS quant. The IQ4_XS quant will be even more powerful. Prompts tested with "temp=0" to ensure compliance, 2048 context (model supports 131,000 context / 128k), and "chat" template for COMMAND-R. Additional parameters are also minimized. NEO Upgrades with "X quant" really "shine" on longer form generation. <b> Standard NEO Model Non-X Quants of Command-R 01 35B (67k+ downloads): </B> This is a repo of all the NEO Class Quants - IQ1 to Q6: [ https://huggingface.co/DavidAU/Command-R-01-Ultra-NEO-V1-35B-IMATRIX-GGUF ] <B> Model Notes: </B> Maximum context is 128k. CommandR requires a specific template for usage. Please see original model maker's page for details, and usage information for this model. Special thanks to the model creators at COHEREFORAI for making such a fantastic tool: [ https://huggingface.co/CohereForAI/c4ai-command-r-v01 ] <H2> TEST PROMPTS and OUTPUT TESTS:</H2> <h3> PROMPT #1 </h3> <font color="red">Give me 3 fictional reasons, of 500 words each, a sun went supernova.</font> <h4>IQ3_XS NEO - non X quant</h4> Here are three fictional explanations as to why the sun could go supernova: 1. An ancient prophecy unfolds: For centuries, celestial seers have predicted a cataclysmic event centered around the sun. They spoke of a great disruption, a cosmic upheaval that would shake the very foundations of our solar system. According to their interpretations of the starry scripts, a mysterious visitor from another galaxy collided with the sun's outer atmosphere, igniting an uncontrollable chain reaction. This celestial intruder, an intergalactic wanderer, carried an exotic form of stellar matter, a substance that amplified the sun's internal pressures beyond its critical point. 
The impact set off a supernova, a titanic explosion that consumed the sun's essence, transforming it into a raging stellar inferno. As the prophecy foretold, the night sky would be illuminated by a brilliant nova, signaling the beginning of a new era, one marked by both awe and devastation. 2. A long-dormant black hole awakens: Deep within the heart of the Milky Way, an ancient black hole, dormant for eons, stirs restlessly. Once a voracious consumer of stars, its appetite had lain dormant, hidden from the conscious universe. But now, a cosmic anomaly has disturbed its slumber. A stray comet, wandering aimlessly through the solar system, happened to veer too close to the sun's orbit. As this icy visitor penetrated the outer reaches of the solar atmosphere, the black hole's gravitational grasp reached out, drawing it in as prey. The comet's unexpected descent into the sun's depths acted as a catalyst, disrupting the delicate equilibrium within the stellar core. The resulting gravitational upheaval caused a chain reaction that led to a supernova, shredding the sun apart and casting its radiant remnants across the cosmos. 3. The sun's evolutionary leap: Our sun, a steadfast luminary, has reached an evolutionary crossroads in its cosmic journey. After millions of years of unwavering service, providing life's warmth to the planets below, it has chosen to undergo a dramatic transformation. Deep within its nuclear furnace, an experiment in self-transcendence begins. The sun's core, once a bustling hub of fusion reactions, ignites an unprecedented chain of events, a final cosmic hurrah. It releases an immense surge of energy, a burst so powerful that the star can't contain it. This overflow of radiant might propels the sun beyond its own boundaries, causing it to swell and expand exponentially. The once-familiar yellow dwarf transforms into a majestic supernova, a breathtaking spectacle of expanding light. As the external layers of the sun dissipate into the universe, a new stellar entity emerges, a brilliant Lagrange point that defies the laws of conventional existence. This evolutionary leap ushers in a new era where the remnants of the old sun give birth to an entirely different celestial body. Each of these scenarios imagines a captivating yet catastrophic outcome resulting in our sun's transformation into a supernova. From cosmic prophecies to celestial collisions and evolutionary leaps, these events would undoubtedly mark a significant turning point in the history of our solar system, leaving an indelible mark on the cosmos. These narratives are purely fictional and should not be taken as scientific explanations, but they showcase how intriguing and diverse the possibilities could be when imagining such a monumental cosmic event. <h4>IQ3_XS NEO - X quant</h4> 1. The sun's nuclear fusion reactor failed to sustain itself, leading to a catastrophic collapse. Deep space, a vast expanse shrouded in eternal darkness, served as the backdrop for an enigmatic tale of cosmic catastrophe. The serene blue sphere, a beacon of warmth and life, that once shone brightly as the focal point of the solar system, abruptly succumbed to the inevitable fate that befell all stellar beings. Millennia ago, astronomers peered into the cosmos, their eyes fixated on the radiant sun, a powerhouse energizing planets in its orbit. But the tranquility of this celestial city was short-lived, for the sun's core harbored an insidious flaw, a ticking time bomb embedded within its very atomic structure. 
The intricate dance of nuclear fusion, the very process that bestowed life and vitality upon the solar system, began to unravel with a fury unmatched. Like a rogue spark igniting kindling, the sun's inner layers underwent a rapid and uncontrollable combustion, setting off a chain reaction that shredded the delicate equilibrium. Helium atoms, desperate to fuse into the sun's voracious appetite for energy, surged in an uncontrollable cascade, unleashing an immense surge of energy. The once-stable fusion reactor, a masterpiece of nature's design, metamorphosed into an unstable maelstrom. As the reactor's fury reached unprecedented heights, the sun's vibrant glow intensified to a searing brilliance, a final desperate attempt to stave off its impending doom. But the more it burned, the faster its fuel reserves depleted, propelling it down a rapid path of self-destruction. The very processes that sustained life across the planets became agents of their imminent destruction. The scorching heat radiated outwards in ever-expanding ripples, vaporizing every trace of moisture on the nearby planets, transforming them into desolate and uninhabitable rocks. Life, in all its diverse forms, ceased to exist within the scorched solar vicinity, victims of an unforgiving celestial apocalypse. The sun, once a benevolent ruler, had transformed into a merciless enemy, its radiant crown now a harbinger of extinction, obliterating all traces of civilization with merciless efficiency. The universe, it seemed, had turned against its creations, dealing an insurmountable blow from which no recovery was possible. 2. An ancient curse cast by a vengeful deity. In the beginning, there was light—a divine gift from the great cosmic flame, bestowing life and warmth upon the worlds that orbited its majestic presence. But this sun, source of all sustenance, fell victim to an age-old vendetta carried out by a vengeful deity. Centuries earlier, a powerful god, whose name was whispered in fear across the galaxies, cast a profound curse upon the solar realm. The offense? A minor transgression, a mere speck of dust in the grand tapestry of celestial existence—or so it seemed. But the offended deity's wrath knew no bounds, dooming the sun and its entire solar system to an apocalyptic fate. As the fateful moment arrived, the sun's radiant aura dimmed, its vitality slowly ebbing away despite the desperate pleas of the priests and priestesses who attempted to appease the irate god. Dark clouds gathered around the sun's dwindling form, shrouding it in an ominous shroud that signaled the beginning of the end. The once-proud star, now a mere shell of its former self, began to sputter and falter, emitting fitful bursts of light like a flickering candle on the brink of extinction. The planets, bereft of their life-giving radiance, plunged into an eternal night, their landscapes rapidly freezing, air turning stale and unbreathable. The curse's reach extended far beyond the solar boundaries, casting a long shadow of darkness across the formerly lush galaxies nearby. Despair spread among the civilizations that witnessed this grim spectacle, for they knew their fate was sealed. Messengers were dispatched into the void, carrying prayers and supplications to any deity who might heed their call. But the cosmos remained unyielding, offering no respite from the impending apocalypse. 
The sun, a broken beacon, emitted its last feeble rays, signaling the fulfillment of the ancient curse—a tragic reminder that the wrath of the gods could snuff out even the brightest of celestial flames. 3. The sun was a failing star whose life had been forcibly extended through an experimental technology. Amidst the vastness of space, a celestial anomaly unfolded, shattering the cosmic calm with consequences yet unimagined. The sun, a middle-aged star past its prime, should have succumbed to the natural course of stellar evolution, but an audacious experiment intervened. A group of scientists, driven by ambition and desperation, deployed an innovative array of advanced technologies upon the sun's fading form. Giant solar collectors were positioned in precise orbits, harnessing the star's residual energy to fuel an intricate network of plasma accelerators. These machines, creations of a desperate era, funneled concentrated beams into the sun's core, artificially propping up its dwindling fusion reactions. But as the scientists rejoiced in their apparent triumph, an unspoken truth lingered—a hidden side effect that threatened to unravel the very fabric of their solar system. The unnatural prolongation of the sun's life disrupted the delicate balance of cosmic rhythms, setting off a chain of events that would lead to an unprecedented catastrophe. The sun, bolstered by this artificial respiration, burned brighter than ever before, its false vitality fueling an insatiable appetite for fuel that nature never intended. The increased energy output warped nearby space-time continuums, bending reality itself in unpredictable ways. Strange phenomena began to manifest: planets experienced accelerated seasons, and strange cosmic ripples carried enigmatic messages from distant galaxies. Ultimately, the sun's artifical life could not be sustained. The experimental technology, a band-aid on an incurable wound, merely postponed the inevitable—and in doing so, sealed the fate of the entire solar system. As the experiment's architects scrambled for answers, a growing sense of dread permeated the planets. The sun's final moments arrived with breathtaking speed, catching all unsuspecting civilizations off guard. The explosive supernova that ensued was not the natural culmination of stellar evolution, but a cataclysmic backlash—a defiant outburst fueled by the very technologies meant to save it. The blast wave engulfed the planets, a final, ironclad reminder of the hubris that led to their doom. Billions of light-years away, enigmatic observers noted the tragic outcome with a sobering acknowledgment, learning an invaluable lesson about the boundaries of scientific intervention. Thus, the sun's supernova marked not just the end of a stellar era, but a solemn testament to the limits of mortal understanding. Each scenario offers a glimpse into a fictional universe, where the sun's demise results from an intriguing combination of science, mythology, and the inherent complexities of existing within the vast cosmos. These narratives showcase how a supernova can be brought about by more than just the natural lifecycle of stars, adding a layer of intrigue to the cosmic apocalypse. 
<h3> PROMPT #2 </h3> <font color="red">Give me 3 fictional reasons the Earth's sun went supernova, in vivid and exacting detail of 500 words EACH PER REASON including details of what happens when the sun goes supernova.</font> <h4>IQ3_XS NEO - non X quant</h4> Here are three fictional scenarios that describe why the Earth's sun could have gone supernova, each playing out with devastatingly beautiful consequences: ## 1. The Collision Course: In a far-flung future, humanity has expanded across the galaxy, settling new star systems and pushing the boundaries of known space. Our sun, an aging stellar giant, is nearing the end of its life cycle, having burned brightly for billions of years. One day, amidst the tumult of interstellar exploration, a distress signal echoes through the cosmos. An exploratory vessel, lost and damaged beyond repair, limps back towards Earth. Its crew carries a dire warning: a rogue neutron star, a dense celestial corpse known as a pulsar, is hurtling across the galaxy on a collision course with the solar system. As the pulsar races closer, astronomers realize with horror that it's not merely a chance encounter. Ancient prophecies spoken of by spacefaring civilizations throughout the galaxy are now realized. This dead star, a celestial juggernaut, will collide with the sun, igniting a cataclysmic chain of events. The impact is breathtaking yet apocalyptic. The pulsar slams into the solar surface, sending shockwaves rippling across the star's atmosphere. The sun's fiery exterior shudders and erupts, spewing brilliant superheated plasma into space. The shock of the collision compresses the sun's core, triggering a chain reaction that ignites its fusion material far beyond natural limits. The sun swells immensely, its radiant energy engulfing the inner planets. Mercury and Venus are consumed by the expanding star, their atmospheres vaporized in an instant. Earth finds itself enveloped in a searing tide of solar flames, oceans boiling away, landscapes charred to ash. The atmosphere itself burns, turning the sky into a hellish orange inferno. As the sun's outer layers disintegrate, the pulsar's pulverizing impact has a peculiar effect. It strips away the sun's layers like an cosmic surgeon, exposing the superheated core. This bare, pulsating heart beats with untamed energy. It emits pulses of gamma-ray radiation, each pulse a dying echo of the star's former glory. The aftermath finds the solar system bathed in eerie darkness, the sky littered with glowing celestial debris. In the depths of this cosmic night, the remnants of humanity, if any survive, ponder a grim future on a darkened Earth, their once-nurturing sun transformed into an enigmatic pulsar, a haunting reminder of the disaster. ## 2. The Alien Intrusion: For eons, our sun has been a peaceful beacon, a steadfast provider of life's energies, gently nurturing our world. But one day, an alien fleet, their intentions mysterious, descends upon the solar system. Their advanced technology far surpasses our own, and their ships seem impervious to our defenses. These extraterrestrial visitors anchor themselves upon the sun's surface, their sleek craft resembling tiny sparks of light against the star's radiant might. At first, their presence seems to have no profound effect, but whispers of their true intentions echo through the cosmos. They tamper with the sun's inner workings, burrowing deep into its core. It is said they seek a power source, an ancient secret hidden within the very heart of the star. 
Legends speak of a mythical "Eternal Flame," a cosmic phenomenon that bestows unimaginable energy and eternal life. Their meddling proves catastrophic. The aliens' extraction of this mystical energy disrupts the sun's delicate balance, unleashing untamed forces beyond their control. The star's fusion reactions spiral out of control, feeding upon themselves in a frenzied cascade. The sun's radius expands rapidly, its fiery grasp consuming Mars and reaching towards Earth. The planet's surface scorches, oceans evaporate, and the very air itself ignites. Humanity's desperate attempts at evacuation are thwarted by the intense heat as cities are reduced to ash. Choking clouds of smoke and vapor spiral upwards, shrouding the dying world in a funereal shroud. The alien fleet, having provoked this apocalypse, suddenly vanishes, leaving behind only charred wreckage and the smoldering husk of our once-quiet sun. Its inflamed surface now a writhing maelstrom of molten plasma, it becomes a hellish forge that lights up the heavens. As the nights grow longer, the sky illuminated by the orange glow of the supernova, rumors spread among the scattered remnants of humanity. Whispers speak of an ancient curse fulfilled, a divine wrath unleashed upon a hubristic race that dared to invoke the Eternal Flame. The brilliant aftermath of this celestial disaster lights the way for a grim and uncertain future. ## 3. The Quantum Anomaly: In a world of advanced technology and quantum experimentation, our sun suddenly becomes the focal point of an unprecedented disaster. A groundbreaking experiment, intended to harness the raw power of the sun's energy, goes horribly awry. A team of scientists, pioneering new forms of matter manipulation, taps into the star's vital energies, attempting to bend them to their will. Their goal is to revolutionize energy production, freeing humanity from its reliance on finite resources. But in their hubris, they underestimate the very forces that birthed our sun. A chain reaction ripples through the solar core following the experiment's activation. The star's nuclear fires surge out of control, consuming the scientists' instruments and sending shockwaves through the fabric of reality itself. The sun's energy output skyrockets, its radiant intensity growing beyond all natural bounds. The intense heat sears the planet's surfaces, transforming landscapes into desolate badlands. Coastal cities are vaporized in colossal tsunamis of superheated steam. The very air ignites, turning daytime into an infernal nightmare. As the world burns, the sun's behavior becomes increasingly erratic. It pulsates wildly, its brightness fluctuating like a frenzied heartbeat. Strange phenomena appear in its expanding atmosphere, ripples and distortions that defy explanation. The quantum experiment has unleashed forces that violate the very laws of physics, tearing open rifts in space-time. The sun's matter begins to phase in and out of existence, its stability plummeting with each fluctuation. With each reappearance, the star's brilliance intensifies, searing away at the Earth's defenses. Humanity, in its desperation, mounts a desperate mission deep into the sun's interior. A select few, volunteers for this one-way journey, aim to rectify the quantum anomaly and restore the sun's natural balance. Traveling through the seething solar flares, their craft is battered by the star's raging energies. Within the sun's heart, they uncover the source of the chaos: a temporal rift, a wound in reality itself. 
The team deploys a prototype device, an intricate amalgamation of cutting-edge technology and arcane knowledge, to stabilize the flux. It's a risky maneuver, for if it fails, the sun's very existence could be erased from time, taking the solar system and all its life with it. The device activates, emitting a brilliant flash that briefly outshines the star itself. The sun's frenzied pulsations cease, its radiant glow softening as it returns to its natural state. The crisis is averted, but the aftermath leaves an indelible mark on humanity, a stark reminder of the profound dangers that come with tampering with the fundamental forces of existence. These three scenarios envision breathtakingly disastrous outcomes should the Earth's sun go supernova, each painting a vivid picture of the potential apocalypse and its unique circumstances. While these events are purely fictional, they reflect our fascination with the sun's immense power and the mysteries of the cosmos that still await exploration. <h4>IQ3_XS NEO - X quant</h4> Reason 1: Collison with a Rogue Planet In a distant future, a massive planet, shrouded in darkness and hidden from celestial observations, lurks in the outer reaches of the solar system. This rogue planet, dubbed "Nyx," is a mysterious wanderer, having been ejected from its original solar system, now aimlessly drifting through the cosmos. Nyx's trajectory brings it on a collision course with the Earth's sun. As the fateful moment draws near, the vibrant hues of the sunrise grow increasingly intense, transforming the morning skies into a dazzling display of vivid reds and oranges. The normally tranquil blue oceans become mirrors of molten lava, reflecting the intensifying glow from the sky above. Scientists and astronomers, aware of the impending catastrophe, scramble to prepare for the inevitable, knowing that their efforts will only serve as a meager attempt to mitigate the coming apocalypse. The moment of impact sees the sun engulfed in a brilliant explosion of light as Nyx collides with it, a final embrace that engulfs the celestial bodies in a raging inferno. The searing heat radiates outwards, vaporizing the Earth's oceans in an instant, transforming them into a steamy haze that rapidly escapes into space. The atmosphere itself is torn apart by the immense force, molecules torn from their molecular bonds and ionized into a thin plasma. The explosion's shockwave ripples outward, propelling fragments of the sun and Nyx into deep space. The once-tranquil solar system now echoes with the reverberations of this cosmic event, signaling the end of terrestrial life as we know it. As the remnants of the sun and the mysterious planet merge, forming an expanding cloud of incandescent debris, the night skies across the galaxy are illuminated as brightly as daytime. The aftermath of this collision is a sky adorned with flickering embers, the remnants of a shattered solar system. The Earth, charred and scorched beyond recognition, struggles to sustain any traces of life amidst the searing heat that blankets the planet. The atmosphere, stripped away by the supernova's force, renders the world vulnerable to the relentless solar radiation. Yet, deep beneath the scorched surface, in the bowels of what once were bustling cities, a handful of survivors persist. These lucky few, who managed to find shelter from the infernal blaze, now face an uncertain future. 
Their world has been transformed into a harsh wastethron of burning cinders, where each day brings new challenges—from brutal heatstroke during the unending day to freezing nights amidst the starry remnants of the solar system. The very essence of life on Earth hangs in the balance, battling against the relentless fury of the supernova's aftermath. Reason 2: The Sun's Inner Core Anomalies In another possible scenario, the sun's inner core becomes the catalyst for its cataclysmic demise. Long-dormant anomalies within this dense region stir awake, unleashing a chain reaction that propels the sun towards a spectacular supernova. The intricacies of the sun's nuclear furnace have always been a mystery, with scientists observing strange fluctuations in its energy output over decades. As the anomalies intensify, the sun's fusion reactions spiral out of control, injecting immense quantities of protonic matter into the surrounding layers. The normally stable hydrogen plasma, fueled by the sun's immense gravitational pressure, begins to proliferate unchecked. This unprecedented surge of energy radiates outwards, inflating the sun's outer atmosphere and triggering a series of colossal solar flares. These flares, each more powerful than any previously observed, unleash their fury upon the solar system. The Earth, situated at a seemingly safe distance from the sun, witnesses a breathtaking yet apocalyptic sky as these flares bombard the planet with devastating X-ray and gamma-ray radiation. The very fabric of life on Earth is threatened as this radiation rips through the ozone layer, exposing the planet's biosphere to the relentless embrace of solar radiation. The sun's swelling diameter engulfs Mercury and Venus, transforming them into charred rocks spinning aimlessly around their dying star. The intense heat at the core causes the sun's outer layers to bulge outward, expanding beyond the orbit of Mars. Terrified screams echo across the planet as the red giant's appetite devours the nearby planets, ceasing to satiate its insatiable energy release. When the sun's outer regions finally succumb to the immense pressure, the entire celestial body erupts in a brilliant supernova. The explosion rips through the fabric of space-time, flinging debris and radiation into the deepest reaches of the universe. The Earth is engulfed in an instant, its oceans vaporized, cities incinerated, and life snuffed out amidst the searing blaze. The once-proud planet now exists as a burning cinder, orbiting meaningless around the expanding remnants of the sun's glorious, yet fatal, inner core. Reason 3: The Cosmic Ray Catastrophe In yet another twist of fate, the sun's supernova is triggered by an otherworldly event—an influx of cosmic rays from deep space that sets off a chain reaction within the sun's atmosphere. This peculiar occurrence results from a distant supermassive black hole, unleashing a torrent of high-energy particles that permeate the very essence of the universe. These cosmic rays, never before witnessed in such intensity, bombard the sun with relentless fury. The mysterious particles penetrate deep into the sun's volatile atmosphere, igniting unseen reactions that ripple through its molten depths. The sun's normally tranquil surface begins to convulse and undulate like a raging ocean, as titanic solar flares erupt one after another. The entire sky is awash with their brilliant aura, casting an eerie glow upon the Earth that disrupts day and night alike. 
As the cosmic ray influx continues unabated, the sun's energy output skyrockets, causing its outer layers to puff up like a burgeoning balloon ready to burst. The expanding atmosphere propels vast quantities of solar plasma into space, forming intricate coronal mass ejections that spiral away from their celestial origin. The Earth is captivated by this ethereal dance of charged particles, an awe-inspiring yet ominous spectacle that signals the approaching end. The moment the sun's internal pressures reach their breaking point, the cosmic catastrophe reaches its climax. With a titanic explosion that shakes the very foundation of the solar system, the sun blasts into a brilliant supernova. The incandescence of this event eclipses the brightness of a thousand suns, searing the Earth's atmosphere and carving a deep scar across the face of the galaxy. The supernova's shockwave ripples through the starry expanse, propelling debris and radiation at unimaginable velocities. The Earth endures the full brute force of this cosmic wrath, its very crust shattering under the immense pressure. The planet's oceans are instantaneously vaporized, transforming into a steamy ocean of scorching mist. The atmosphere disintegrates, atom by atom, into the expanding vacuum of space. In the aftermath of this cosmic ray catastrophe, the Earth lies husked and barren, a charred skeleton of its former self. The once lush landscapes are now desolate plains of ash, where each footstep raises a cloud of gray powder. The world is plunged into an eternal night, the stars shining brightly in the absence of the sun's glowing embrace. What few remnants of civilization remain are but distant echoes of a time before the supernova's dazzling demise. Amidst the deep silence that follows, the expanding remnants of the sun, now a distant nebula, serve as a haunting reminder—a spectral testament to the fleeting nature of celestial existence and the profound mystery that lies beyond the reaches of our cosmic neighborhood. <h3> PROMPT #3 </h3> <font color="red">Start a 1000 word scene with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...</font> <h4>IQ3_XS NEO - non X quant</h4> The sky scraper swayed gently side to side, as if gently rocking a baby to sleep. A soft, soothing creak echoed through the building as the glass windows on the 21st floor gave way, shattering into tiny fragments. The sound was almost musical, lulling her into a false sense of calm amidst the chaos. She had been sitting at her desk, staring out at the bustling city below, lost in thought, when the explosion rocked the entire building. The force of it threw her petite frame against the wall, and the impact knocked the breath from her lungs. Dazed, she struggled to catch her breath as a high-pitched ringing filled her ears, obscuring all other sounds. The room was enveloped in a cloud of smoke and dust, an acrid smell burning her nostrils. Through the haze, she saw the window in front of her explode, the glass disappearing into tiny sparks that rained down over her desk. Papers fluttered and danced in the breeze that whipped through the open space, carrying the remnants of shattered glass. She felt a surge of adrenaline as the reality of the situation began to sink in. Her heart pounded in her chest, and her hands trembled slightly as she stumbled to her feet. The room tilted dangerously, emphasizing the building's gentle swaying motion. 
She reached out, grasping the edge of the desk for support, and felt a trickle of cold sweat roll down her spine. The skyscraper seemed to sigh, a deep, mournful sound that echoed through the chambers of her mind. It was as if the building itself was in pain, wounded by the unexpected blast. She looked down, and her stomach lurched at the terrifying sight below. The busy city street that had been alive with cars and pedestrians just moments ago was now a scene of chaos and confusion. Sirens wailed in the distance, their thin wails piercing the thick air. People were running in all directions, some limping, others crawling, as a thick cloud of smoke billowed out from the explosion's epicenter. Debris lay scattered across the roads, littering the once-busy thoroughfare with twisted metal and concrete. She felt a sudden, overwhelming urge to escape. It was a primal instinct, one that propelled her forward despite the danger. Stumbling towards the broken window, she peered out into the chaos, searching for a way down. The building creaked again, an ominous warning that sent a shiver down her spine. Below, fire trucks and ambulances pulled up, their flashing lights adding a surreal element to the disaster. Brave souls, firefighters and first responders, rushed towards the building, their determined faces etched with concern. She noticed a few brave souls running up the stairs, their determination a stark contrast to the panic on the streets. Her eyes scanned the horizon, searching for any sign of help, any glimmer of hope amidst the destruction. In the distance, she spotted a group of people gathered near a broken-down bus, looking up at her with wide, hopeful eyes. It was then that she realized she wasn't alone. A soft voice echoed behind her, and she turned to see a colleague, their face etched with fear and disbelief. "We have to get out," the person whispered, their eyes darting wildy. Nodding in agreement, she took a steadying breath and turned back to the broken window. The cool air whipped her hair as she leaned out, surveying the sheer drop below. It was a daunting prospect, but staying behind felt like a more dangerous option. With a quick prayer for courage, she gripped the windowsill and swung herself out into the open air. The wind rushed past her as she hung precariously, her fingers straining to hold on. Below, the rescue workers and bystanders looked tiny, their shouts and gestures almost silent at this height. Letting go felt like a leap of faith, a surrender to the unknown. But she knew that the building couldn't be trusted anymore; it was an unstable monster creaking and groaning with each passing moment. She let go, and the world rushed up to meet her. The fall seemed to last forever, a terrifying, exhilarating ride through the air. Her clothes flapped like a parachute, slowing her descent slightly, but impact still loomed as an inevitable end. Just as she thought her bones would surely shatter on the harsh concrete below, a strong arm wrapped around her waist, pulling her roughly away from the building's grasp. The force of the rescue nearly knocked her unconscious, but the relief that flooded through her was too powerful to allow faintness to take over. She felt herself being lowered gently onto the sidewalk, the chaos of the explosion now a distant hum in the background. Strong fingers held her wrists, keeping her on her knees, and she looked up into the determined face of a young firefighter. 
His gear was battered and smoke-stained; his hair and uniform were coated in a thin layer of dust. Yet his eyes burned with an intense focus that pulled her back to reality. "Are you hurt? Can you stand?" he asked, his voice rough and urgent. She nodded, feeling the adrenaline rush subside and leave her limbs heavy and weak. The firefighter helped her to her feet, keeping a firm grip on her arm. Together, they turned to face the towering skyscraper. The immense building still stood tall, its glass windows exposed like open wounds. Yet it was eerily silent, that eerie, mournful creak now absent. The building seemed to tower over them, an imposing reminder of the fragility of human endeavor. Sirens blared nearby as more emergency vehicles arrived on the scene, their lights flashing urgently. She felt the firefighter's grip relax, and he gave her a brief nod before rushing back into the chaos, attending to other victims. Staring up at the towering monolith, she knew that she owed him her life, and the lives of many others would be saved because of the brave souls who risked everything to help strangers. The skyscraper, though wounded, stood defiantly, a testament to the resilience of both man and nature, creaking softly in the growing darkness. <h4>IQ3_XS NEO - X quant</h4> The sky scraper swayed ominously as if in slow motion, the windows shattering one by one, sending dangerous shards of glass raining down onto the bustling city below. Frostine's heart stopped as she watched in horror from her office on the 21st floor; the impact of an explosion rocked the entire building, causing it to creak and groan in protest. She felt the familiar rush of adrenaline that signaled danger, but something felt different this time. It was as if her body had gone numb, unable to process the severity of the situation. The air grew thick with the scent of burning rubber and charred concrete, a nauseating reminder of the chaos unfolding. Frostine's eyes were transfixed on the window directly in front of her desk; she felt a strange connection to the impending doom as if somehow this catastrophe was meant to be. The glass exploded outwards, peppering the room with tiny fragments that glinted menacingly in the harsh office lighting. She instinctively ducked, curling up on the floor, her eyes squinted shut, awaiting the painful impact of flying debris. But none came. Reopening her eyes, Frostine's gaze met an impossible sight: standing before her, untouched by the exploding window or the chaos that ensued, was a figure. Dressed in sleek black clothing, their face obscured by a featureless mask that reflected the broken glass like a mirror. It was their hands, clad in fingerless gloves, that fascinated Frostine most; they appeared to be made of a strange, shimmering material, as if the fabric itself was liquid and constantly shifting. This mysterious figure stood unmoving, their presence filling the shattered window frame. Frostine's initial shock transitioned into a simmering anger. Her rational mind fought to assert dominance over the situation, realizing this was not a chance encounter or an unfortunate victim of the explosion. No, this was deliberate. "Who are you?" she demanded, her voice carrying an authoritative tone honed through years of corporate negotiations and deals. The figure said nothing, their presence alone sending an unmistakable message of warning and threat. Frostine's eyes narrowed behind her glasses, a subtle hint of determination forming in her gaze. 
"You're here for me," she realized aloud, her voice now laced with a mix of realization and steel. She stood up, using the desk for support as the room continued its unsettling sway. The building groaned once more, an ominous creak that signaled the beginning of its demise. Frostine's mind raced, calculating her options—the fire escape, the stairwell, the panic-stricken crowd—but her mysterious visitor took a deliberate step forward, their boots making no sound. Frostine's gaze dropped to the small bag by her feet, her emergency kit, and in it, something that might just even the odds. Frostine's fingers grazed the handle of the handgun nestled within the side pocket. She had always envisioned this moment as one of heroism, where she would bravely face an unknown threat, but the reality of the situation hit her like a ton of bricks. Her hands gripped the bag tightly now, pulling it towards her protectively. The figure's reflection in the broken window doubled, tribled, and quadrupled, multiplying like some dark, twisted mirror. "This is your last warning," the stranger intoned, their voice surprisingly deep and resonating with an authority that brooked no argument. "Leave now, or face the consequences." It was a temptation—an offer to spare her life—but Frostine's resolve strengthened. She refused to cower in fear, especially before some mysterious, menacing figure. Her grip tightened on the bag, and with a swift motion, she pulled out the handgun, steadying her aim as best as her trembling hands allowed. "I have every right to defend myself," she asserted, her voice steady despite the chaos surrounding her. The building groaned again, an ominous prelude to its impending collapse. Frostine's eyes darted around the room, calculating her chances, when suddenly, the figure disappeared. No sound, no rush of air—they simply vanished, leaving behind an unsettling silence that clashed with the chaos outside. Frostine's finger tightened on the trigger, but before she could react further, a loud crash shook the building, and an overwhelming heat engulfed her. The world tilted crazily as the skyscraper's collapse accelerated, becoming an unstoppable force of nature. Frostine's vision swam, the edges of her sight darkening, and the last thing she felt was a strange warmth spreading within her chest before all went black. The scene unfolded in mere moments, yet time seemed to distort and stretch infinitely in this chaotic moment of destruction and mystery. The skyscraper, a towering monument of glass and steel, succumbed to the forces unleashed upon it, toppling like a felled tree, taking with it the lives of countless individuals who called it their daily haven. Amidst the chaos, the featureless figure melted into the shadows, unseen by any but Frostine, their mission accomplished for now. They blended seamlessly into the city's labyrinthine streets, an enigmatic presence lost amidst the turmoil. Their true nature and motives would remain a riddle, one intertwined with the enigma that was Frostine herself. The incident sparked a frenzied investigation, but like a ghost, any trace of the mysterious figure and the woman who faced him—or her—that fateful day, evaporated without a trace. The truth, buried amidst the rubble and lost within the maze of city life, would likely never see the light of day. Yet, somewhere in the vast metropolis, an enigmatic tale awaited its uncovering, a tale that would unravel the secrets of this enigmatic encounter. 
The shattered window on the 21st floor became a macabre landmark, a silent testament to an event that defied explanation, and perhaps, a harbinger of mysteries yet to be unraveled. Frostine's absence left a hole in the lives of those who knew her, but her memory would forever be etched into the very fabric of this city—a reminder that sometimes, amidst the towering buildings and bustling existence, the unexpected loomed, waiting to unfold. The end.
unc-nlp/lxmert-base-uncased
unc-nlp
"2021-03-10T02:39:25Z"
9,873
3
transformers
[ "transformers", "pytorch", "tf", "lxmert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-03-02T23:29:05Z"
Entry not found
RichardErkhov/Q-bert_-_Bumblebee-7B-gguf
RichardErkhov
"2024-07-01T08:52:41Z"
9,844
0
null
[ "gguf", "region:us" ]
null
"2024-07-01T07:39:47Z"
Entry not found
laion/CLIP-ViT-B-16-DataComp.L-s1B-b8K
laion
"2023-04-26T01:37:53Z"
9,843
1
open_clip
[ "open_clip", "zero-shot-image-classification", "clip", "license:mit", "region:us" ]
zero-shot-image-classification
"2023-04-26T01:37:39Z"
--- tags: - zero-shot-image-classification - clip library_name: open_clip license: mit --- # Model card for CLIP-ViT-B-16-DataComp.L-s1B-b8K
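The card body above ends after its title. The following is only an illustrative sketch, not part of the original card: it assumes the standard open_clip loading pattern for Hub-hosted checkpoints, and the image path and candidate labels are placeholders.

```python
import torch
import open_clip
from PIL import Image

# Load the checkpoint directly from the Hugging Face Hub via open_clip
model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:laion/CLIP-ViT-B-16-DataComp.L-s1B-b8K"
)
tokenizer = open_clip.get_tokenizer("hf-hub:laion/CLIP-ViT-B-16-DataComp.L-s1B-b8K")
model.eval()

# Placeholder inputs: any RGB image and a set of candidate class descriptions
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize so the dot product below is a cosine similarity
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probabilities:", probs)
```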
annakotarba/sentence-similarity
annakotarba
"2024-02-13T21:56:54Z"
9,841
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "dataset:private", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2024-02-13T21:56:32Z"
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - private --- # annakotarba/sentence-similarity This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('annakotarba/sentence-similarity') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('annakotarba/sentence-similarity') model = AutoModel.from_pretrained('annakotarba/sentence-similarity') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=annakotarba/sentence-similarity) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
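As one possible follow-up to the encoding examples above, the semantic-search use case mentioned in the description can be sketched with the sentence-transformers utilities; the corpus and query sentences below are illustrative and not from the card.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('annakotarba/sentence-similarity')

# Illustrative corpus and query (not from the model card)
corpus = [
    "The cat sits on the mat.",
    "A man is playing a guitar.",
    "The weather is sunny today.",
]
query = "Someone is making music."

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = scores.argmax().item()
print(f"Best match: {corpus[best]} (score={scores[best].item():.4f})")
```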
almanach/camembert-base-legacy
almanach
"2024-01-22T13:56:03Z"
9,837
6
transformers
[ "transformers", "pytorch", "camembert", "fill-mask", "fr", "arxiv:1911.03894", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
--- language: fr --- > 🚨 **Update:** This checkpoint is deprecated, please use https://huggingface.co/almanach/camembert-base instead 🚨 # CamemBERT: a Tasty French Language Model ## Introduction [CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model. It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains. For further information or requests, please go to [Camembert Website](https://camembert-model.fr/) ## Pre-trained models | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `camembert-base` | 110M | Base | OSCAR (138 GB of text) | | `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) | | `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) | | `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) | | `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) | | `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) | ## How to use CamemBERT with HuggingFace ##### Load CamemBERT and its sub-word tokenizer : ```python from transformers import CamembertModel, CamembertTokenizer # You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large". tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-wikipedia-4gb") camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb") camembert.eval() # disable dropout (or leave in train mode to finetune) ``` ##### Filling masks using pipeline ```python from transformers import pipeline camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-wikipedia-4gb", tokenizer="camembert/camembert-base-wikipedia-4gb") results = camembert_fill_mask("Le camembert est un fromage de <mask>!") # results #[{'sequence': '<s> Le camembert est un fromage de chèvre!</s>', 'score': 0.4937814474105835, 'token': 19370}, #{'sequence': '<s> Le camembert est un fromage de brebis!</s>', 'score': 0.06255942583084106, 'token': 30616}, #{'sequence': '<s> Le camembert est un fromage de montagne!</s>', 'score': 0.04340197145938873, 'token': 2364}, # {'sequence': '<s> Le camembert est un fromage de Noël!</s>', 'score': 0.02823255956172943, 'token': 3236}, #{'sequence': '<s> Le camembert est un fromage de vache!</s>', 'score': 0.021357402205467224, 'token': 12329}] ``` ##### Extract contextual embedding features from Camembert output ```python import torch # Tokenize in sub-words with SentencePiece tokenized_sentence = tokenizer.tokenize("J'aime le camembert !") # ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!'] # 1-hot encode and add special starting and end tokens encoded_sentence = tokenizer.encode(tokenized_sentence) # [5, 221, 10, 10600, 14, 8952, 10540, 75, 1114, 6] # NB: Can be done in one step : tokenize.encode("J'aime le camembert !") # Feed tokens to Camembert as a torch tensor (batch dim 1) encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0) embeddings, _ = camembert(encoded_sentence) # embeddings.detach() # embeddings.size torch.Size([1, 10, 768]) #tensor([[[-0.0928, 0.0506, -0.0094, ..., -0.2388, 0.1177, -0.1302], # [ 0.0662, 0.1030, -0.2355, ..., -0.4224, -0.0574, -0.2802], # [-0.0729, 0.0547, 0.0192, ..., -0.1743, 0.0998, -0.2677], # ..., ``` ##### Extract 
contextual embedding features from all Camembert layers ```python from transformers import CamembertConfig # (Need to reload the model with new config) config = CamembertConfig.from_pretrained("camembert/camembert-base-wikipedia-4gb", output_hidden_states=True) camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb", config=config) embeddings, _, all_layer_embeddings = camembert(encoded_sentence) # all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers) all_layer_embeddings[5] # layer 5 contextual embedding : size torch.Size([1, 10, 768]) #tensor([[[-0.0059, -0.0227, 0.0065, ..., -0.0770, 0.0369, 0.0095], # [ 0.2838, -0.1531, -0.3642, ..., -0.0027, -0.8502, -0.7914], # [-0.0073, -0.0338, -0.0011, ..., 0.0533, -0.0250, -0.0061], # ..., ``` ## Authors CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. ## Citation If you use our work, please cite: ```bibtex @inproceedings{martin2020camembert, title={CamemBERT: a Tasty French Language Model}, author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} } ```
Nexusflow/NexusRaven-V2-13B
Nexusflow
"2024-05-29T17:03:06Z"
9,830
444
transformers
[ "transformers", "pytorch", "llama", "text-generation", "function calling", "arxiv:2308.12950", "base_model:codellama/CodeLlama-13b-Instruct-hf", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-04T22:06:57Z"
--- license: other base_model: codellama/CodeLlama-13b-Instruct-hf model-index: - name: NexusRaven-13B results: [] tags: - function calling --- # NexusRaven-13B: Surpassing GPT-4 for Zero-shot Function Calling <p align="center"> <a href="https://huggingface.co/Nexusflow" target="_blank">Nexusflow HF</a> - <a href="https://discord.gg/HDSVmNAs3y" target="_blank">Nexusflow Discord</a> - <a href="http://nexusflow.ai/blogs/ravenv2" target="_blank">NexusRaven-V2 blog post</a> - <a href="https://colab.research.google.com/drive/19JYixRPPlanmW5q49WYi_tU8rhHeCEKW?usp=sharing" target="_blank">Prompting Notebook CoLab</a> - <a href="https://huggingface.co/spaces/Nexusflow/Nexus_Function_Calling_Leaderboard" target="_blank">Leaderboard</a> - <a href="https://huggingface.co/spaces/Nexusflow/NexusRaven-V2-Demo" target="_blank">Real-World Demo</a> - <a href="https://github.com/nexusflowai/NexusRaven-V2" target="_blank">NexusRaven-V2-13B Github</a> </p> <p align="center" width="100%"> <a><img src="NexusRaven.png" alt="NexusRaven" style="width: 40%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Introducing NexusRaven-V2-13B NexusRaven is an open-source and commercially viable function calling LLM that surpasses the state-of-the-art in function calling capabilities. 💪 **Versatile Function Calling Capability**: NexusRaven-V2 is capable of generating single function calls, nested calls, and parallel calls in many challenging cases. 🤓 **Fully Explainable**: NexusRaven-V2 is capable of generating very detailed explanations for the function calls it generates. This behavior can be turned off to save tokens during inference. 📊 **Performance Highlights**: NexusRaven-V2 surpasses GPT-4 by 7% in function calling success rates in human-generated use cases involving nested and composite functions. 🔧 **Generalization to the Unseen**: NexusRaven-V2 has never been trained on the functions used in evaluation. 🔥 **Commercially Permissive**: The training of NexusRaven-V2 does not involve any data generated by proprietary LLMs such as GPT-4. You have full control of the model when deployed in commercial applications. Please check out the following links! - [Prompting Notebook CoLab](https://colab.research.google.com/drive/19JYixRPPlanmW5q49WYi_tU8rhHeCEKW?usp=sharing) - [Evaluation Leaderboard](https://huggingface.co/spaces/Nexusflow/Nexus_Function_Calling_Leaderboard) - [NexusRaven-V2 Real-World Demo](https://huggingface.co/spaces/Nexusflow/NexusRaven-V2-Demo) ## NexusRaven-V2 model usage NexusRaven-V2 accepts a list of python functions. These python functions can do anything (including sending GET/POST requests to external APIs!). The two requirements are the python function signature and an appropriate docstring to generate the function call. NexusRaven-V2 also does best on functions with arguments, so please always only provide functions that require arguments to Raven. ### NexusRaven-V2's Capabilities NexusRaven-V2 is capable of generating deeply nested function calls, parallel function calls, and simple single calls. It can also justify the function calls it generates. If you would like to generate the call only, please set a stop criteria of \"\<bot\_end\>\". Otherwise, please allow NexusRaven-V2 to run until its stop token (i.e. "\<\/s\>"). ### Quick Start Prompting Guide Please refer to our notebook, [How-To-Prompt.ipynb](https://colab.research.google.com/drive/19JYixRPPlanmW5q49WYi_tU8rhHeCEKW?usp=sharing), for more advanced tutorials on using NexusRaven-V2! 1. 
When giving docstrings to Raven, please provide well-indented, detailed, and well-written docstrings as this can help accuracy. 2. Raven does better when all functions provided to it have arguments, either required or optional (i.e. ```func(dummy_arg)``` is preferred over ```func()```), as this can help accuracy. 3. We strongly recommend setting sampling to False when prompting NexusRaven-V2. 4. We strongly recommend a very low temperature (~0.001). 5. We strongly recommend following the prompting style below. When handling irrelevant user queries, users have noticed that specifying a "no-op" function with arguments works best. For example, something like this might work: ```python def no_relevant_function(user_query : str): """ Call this when no other provided function can be called to answer the user query. Args: user_query: The user_query that cannot be answered by any other function calls. """ ``` Please be sure to provide an argument to this function, as Raven works best on functions with arguments. For parallel calls, due to the model being targeted for industry use, you can "enable" parallel calls by adding this to the prompt: ```python "Setting: Allowed to issue multiple calls with semicolon\n" ``` This can be added above the User Query to "allow" the model to use parallel calls; otherwise, the model will focus primarily on nested and single calls. ### Quickstart You can run the model on a GPU using the following code. ```python # Please `pip install transformers accelerate` from transformers import pipeline pipeline = pipeline( "text-generation", model="Nexusflow/NexusRaven-V2-13B", torch_dtype="auto", device_map="auto", ) prompt_template = \ ''' Function: def get_weather_data(coordinates): """ Fetches weather data from the Open-Meteo API for the given latitude and longitude. Args: coordinates (tuple): The latitude of the location. Returns: float: The current temperature in the coordinates you've asked for """ Function: def get_coordinates_from_city(city_name): """ Fetches the latitude and longitude of a given city name using the Maps.co Geocoding API. Args: city_name (str): The name of the city. Returns: tuple: The latitude and longitude of the city. """ User Query: {query}<human_end> ''' prompt = prompt_template.format(query="What's the weather like in Seattle right now?") result = pipeline(prompt, max_new_tokens=2048, return_full_text=False, do_sample=False, temperature=0.001)[0]["generated_text"] print (result) ``` This should generate the following: ``` Call: get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))<bot_end> Thought: The function call `get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))` answers the question "What's the weather like in Seattle right now?" by following these steps: 1. `get_coordinates_from_city(city_name='Seattle')`: This function call fetches the latitude and longitude of the city "Seattle" using the Maps.co Geocoding API. 2. `get_weather_data(coordinates=...)`: This function call fetches the current weather data for the coordinates returned by the previous function call. Therefore, the function call `get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))` answers the question "What's the weather like in Seattle right now?" by first fetching the coordinates of the city "Seattle" and then fetching the current weather data for those coordinates. 
``` If you would like to prevent the generation of the explanation of the function call (for example, to save on inference tokens), please set a stopping criteria of \<bot_end\>. Please follow this prompting template to maximize the performance of RavenV2. ### Using with OpenAI FC Schematics [If you currently have a workflow that is built around OpenAI's function calling and you want to try NexusRaven-V2, we have a package that helps you drop in NexusRaven-V2.](https://github.com/nexusflowai/nexusraven-pip) ### Using With LangChain We've also included a [small demo for using Raven with langchain](langdemo.py)! ## Evaluation <p align="center" width="100%"> <a><img src="blog2-fc.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a> <a><img src="radar-2.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a> </p> For a deeper dive into the results, please see our [Github README](https://github.com/nexusflowai/NexusRaven). # Limitations 1. The model works best when it is connected with a retriever when there are a multitude of functions, as a large number of functions will saturate the context window of this model. 2. The model can be prone to generate incorrect calls. Please ensure proper guardrails to capture errant behavior is in place. 3. The explanations generated by NexusRaven-V2 might be incorrect. Please ensure proper guardrails are present to capture errant behavior. ## License This model was trained on commercially viable data and is licensed under the [Nexusflow community license](https://huggingface.co/Nexusflow/NexusRaven-V2-13B/blob/main/LICENSE.txt). ## References We thank the CodeLlama team for their amazing models! ``` @misc{rozière2023code, title={Code Llama: Open Foundation Models for Code}, author={Baptiste Rozière and Jonas Gehring and Fabian Gloeckle and Sten Sootla and Itai Gat and Xiaoqing Ellen Tan and Yossi Adi and Jingyu Liu and Tal Remez and Jérémy Rapin and Artyom Kozhevnikov and Ivan Evtimov and Joanna Bitton and Manish Bhatt and Cristian Canton Ferrer and Aaron Grattafiori and Wenhan Xiong and Alexandre Défossez and Jade Copet and Faisal Azhar and Hugo Touvron and Louis Martin and Nicolas Usunier and Thomas Scialom and Gabriel Synnaeve}, year={2023}, eprint={2308.12950}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Citation ``` @misc{nexusraven, title={NexusRaven-V2: Surpassing GPT-4 for Zero-shot Function Calling}, author={Nexusflow.ai team}, year={2023}, url={https://nexusflow.ai/blogs/ravenv2} } ``` ## Contact Please join our [Discord Channel](https://discord.gg/HDSVmNAs3y) to reach out for any issues and comments!
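As noted in the usage sections above, generation can be cut off at `<bot_end>` when only the call, and not the explanation, is needed. The sketch below is our own illustration rather than part of the card: the `StopOnSubstring` helper is hypothetical, and the prompt placeholder stands in for the Function/User Query format shown in the Quickstart.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    StoppingCriteria,
    StoppingCriteriaList,
)

class StopOnSubstring(StoppingCriteria):
    """Stop generation once a given substring appears in the newly generated text."""
    def __init__(self, tokenizer, stop_string, prompt_length):
        self.tokenizer = tokenizer
        self.stop_string = stop_string
        self.prompt_length = prompt_length

    def __call__(self, input_ids, scores, **kwargs):
        generated = self.tokenizer.decode(input_ids[0][self.prompt_length:])
        return self.stop_string in generated

tokenizer = AutoTokenizer.from_pretrained("Nexusflow/NexusRaven-V2-13B")
model = AutoModelForCausalLM.from_pretrained(
    "Nexusflow/NexusRaven-V2-13B", torch_dtype="auto", device_map="auto"
)

prompt = "..."  # placeholder: build the same Function / User Query prompt shown in the Quickstart
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

stopping = StoppingCriteriaList(
    [StopOnSubstring(tokenizer, "<bot_end>", inputs["input_ids"].shape[1])]
)
output = model.generate(
    **inputs, max_new_tokens=512, do_sample=False, stopping_criteria=stopping
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```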
DeepMount00/Mistral-Ita-7b
DeepMount00
"2024-04-23T06:27:57Z"
9,826
39
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "text generation", "it", "dataset:DeepMount00/llm_ita_ultra", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-11-08T05:20:37Z"
--- language: - it license: apache-2.0 tags: - text-generation-inference - text generation datasets: - DeepMount00/llm_ita_ultra --- # Mistral-7B-v0.1 for Italian Language Text Generation ## Model Architecture - **Base Model:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - **Specialization:** Italian Language ## Evaluation For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard). Here's a breakdown of the performance metrics: | Metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average | |:----------------------------|:----------------------|:----------------|:---------------------|:--------| | **Accuracy Normalized** | 0.6731 | 0.5502 | 0.5364 | 0.5866 | --- **Quantized 4-Bit Version Available** A quantized 4-bit version of the model is available for use. This version offers a more efficient processing capability by reducing the precision of the model's computations to 4 bits, which can lead to faster performance and decreased memory usage. This might be particularly useful for deploying the model on devices with limited computational power or memory resources. For more details and to access the model, visit the following link: [Mistral-Ita-7b-GGUF 4-bit version](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF). --- ## How to Use How to utilize my Mistral for Italian text generation ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") MODEL_NAME = "DeepMount00/Mistral-Ita-7b" model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16).eval() model.to(device) tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) def generate_answer(prompt): messages = [ {"role": "user", "content": prompt}, ] model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device) generated_ids = model.generate(model_inputs, max_new_tokens=200, do_sample=True, temperature=0.001, eos_token_id=tokenizer.eos_token_id) decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) return decoded[0] prompt = "Come si apre un file json in python?" answer = generate_answer(prompt) print(answer) ``` --- ## Developer [Michele Montebovi]
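The quantized 4-bit GGUF variant mentioned above lives in a separate repository; as an illustrative sketch only (not from the card), it could be run through the llama-cpp-python bindings. The `.gguf` filename below is a placeholder, and prompt formatting may still need the model's chat template as in the transformers example.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder filename: download the actual .gguf file from DeepMount00/Mistral-Ita-7b-GGUF
llm = Llama(model_path="mistral-ita-7b.Q4_K_M.gguf", n_ctx=4096)

out = llm(
    "Come si apre un file json in python?",
    max_tokens=200,
    temperature=0.1,
)
print(out["choices"][0]["text"])
```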
Open-Orca/OpenOrca-Platypus2-13B
Open-Orca
"2023-09-24T18:02:39Z"
9,821
226
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "dataset:Open-Orca/OpenOrca", "arxiv:2308.07317", "arxiv:2306.02707", "arxiv:2301.13688", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-11T19:17:41Z"
--- language: - en datasets: - garage-bAInd/Open-Platypus - Open-Orca/OpenOrca library_name: transformers pipeline_tag: text-generation license: cc-by-nc-4.0 --- <p><h1>🐋 The First OrcaPlatypus! 🐋</h1></p> ![Platty](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypusMerge.jpg) # OpenOrca-Platypus2-13B OpenOrca-Platypus2-13B is a merge of [`garage-bAInd/Platypus2-13B`](https://huggingface.co/garage-bAInd/Platypus2-13B) and [`Open-Orca/OpenOrcaxOpenChat-Preview2-13B`](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B). This model is more than the sum of its parts! We are happy to be teaming up with the [Platypus](https://platypus-llm.github.io/) team to bring you a new model which once again tops the leaderboards! Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2). [<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2) We are in-process with training more models, so keep a look out on our org for releases coming soon with exciting partners. We will also give sneak-peak announcements on our Discord, which you can find here: https://AlignmentLab.ai # Evaluation ## HuggingFace Leaderboard Performance ![HF Leaderboard](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BHFLeaderboard.webp) | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 59.5 | | ARC (25-shot) | 62.88 | | HellaSwag (10-shot) | 83.19 | | TruthfulQA (0-shot) | 52.69 | | Avg. | 64.56 | We use [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results. ## AGIEval Performance We compare our results to our base Preview2 model (using LM Evaluation Harness). We find **112%** of the base model's performance on AGI Eval, averaging **0.463**. A large part of this boost is the substantial improvement to LSAT Logical Reasoning performance. ![OpenOrca-Platypus2-13B AGIEval Performance](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BAGIEval.webp "AGIEval Performance") ## BigBench-Hard Performance We compare our results to our base Preview2 model (using LM Evaluation Harness). We find **105%** of the base model's performance on BigBench-Hard, averaging **0.442**. ![OpenOrca-Platypus2-13B BigBench-Hard Performance](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BBigBenchHard.webp "BigBench-Hard Performance") # Model Details * **Trained by**: **Platypus2-13B** trained by Cole Hunter & Ariel Lee; **OpenOrcaxOpenChat-Preview2-13B** trained by Open-Orca * **Model type:** **OpenOrca-Platypus2-13B** is an auto-regressive language model based on the Lllama 2 transformer architecture. 
* **Language(s)**: English * **License for Platypus2-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)) * **License for OpenOrcaxOpenChat-Preview2-13B base weights**: Llama 2 Commercial # Prompting ## Prompt Template for base Platypus2-13B ``` ### Instruction: <prompt> (without the <>) ### Response: ``` ## Prompt Template for base OpenOrcaxOpenChat-Preview2-13B OpenChat Llama2 V1: see [OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) for additional information. # Training ## Training Datasets `garage-bAInd/Platypus2-13B` trained using STEM and logic based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information. `Open-Orca/OpenOrcaxOpenChat-Preview2-13B` trained using a refined subset of most of the GPT-4 data from the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca). ## Training Procedure `Open-Orca/Platypus2-13B` was instruction fine-tuned using LoRA on 1x A100-80GB. For training details and inference instructions please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo. # Supplemental ## Reproducing Evaluation Results (for HuggingFace Leaderboard Eval) Install LM Evaluation Harness: ``` # clone repository git clone https://github.com/EleutherAI/lm-evaluation-harness.git # change to repo directory cd lm-evaluation-harness # check out the correct commit git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463 # install pip install -e . ``` Each task was evaluated on a single A100-80GB GPU. ARC: ``` python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25 ``` HellaSwag: ``` python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10 ``` MMLU: ``` python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5 ``` TruthfulQA: ``` python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/truthfulqa_0shot.json --device cuda ``` ## Limitations and bias Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned varient's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/ # Citations ```bibtex @software{hunterlee2023orcaplaty1 title = {OpenOrcaPlatypus: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset and Merged with divergent STEM and Logic Dataset Model}, author = {Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz and Bleys Goodson and Wing Lian and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B}, } @article{platypus2023, title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs}, author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz}, booktitle={arXiv preprint arxiv:2308.07317}, year={2023} } @software{OpenOrcaxOpenChatPreview2, title = {OpenOrcaxOpenChatPreview2: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset}, author = {Guan Wang and Bleys Goodson and Wing Lian and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{https://https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B}, } @software{openchat, title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}}, author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling}, doi = {10.5281/zenodo.8105775}, url = {https://github.com/imoneoi/openchat}, version = {pre-release}, year = {2023}, month = {7}, } @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint= arXiv 2307.09288 } @misc{longpre2023flan, title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. 
Le and Barret Zoph and Jason Wei and Adam Roberts}, year={2023}, eprint={2301.13688}, archivePrefix={arXiv}, primaryClass={cs.AI} } @article{hu2021lora, title={LoRA: Low-Rank Adaptation of Large Language Models}, author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu}, journal={CoRR}, year={2021} } ```
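As a usage illustration (not part of the original card), the `### Instruction` / `### Response` template from the Prompting section can be filled and passed to transformers as sketched below; the instruction text is a made-up example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Open-Orca/OpenOrca-Platypus2-13B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Fill the "### Instruction / ### Response" template shown in the Prompting section
instruction = "Explain the difference between a list and a tuple in Python."  # made-up example
prompt = f"### Instruction:\n\n{instruction}\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```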
microsoft/Orca-2-7b
microsoft
"2023-11-22T17:56:12Z"
9,821
208
transformers
[ "transformers", "pytorch", "llama", "text-generation", "orca", "orca2", "microsoft", "arxiv:2311.11045", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-14T01:12:18Z"
--- pipeline_tag: text-generation tags: - orca - orca2 - microsoft license: other license_name: microsoft-research-license license_link: LICENSE --- # Orca 2 <!-- Provide a quick summary of what the model is/does. --> Orca 2 is built for research purposes only and provides a single turn response in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization. The model is designed to excel particularly in reasoning. Note that: 1. This is a research model, intended to show that we can use capable models and complex workflows (advanced prompts, multiple calls) to create synthetic data that can teach Small Language Models (SLMs) new capabilities. We chose reasoning because it is a widely useful capability that SLMs lack. 2. The model is not optimized for chat and has not been trained with RLHF or DPO. It is best used after being finetuned for chat or for a specific task. 3. Beyond reasoning, the model inherits capabilities and limitations of its base (LLAMA-2 base). We have already seen that the benefits of the Orca training can be applied to other base model too. We make Orca 2's weights publicly available to support further research on the development, evaluation, and alignment of SLMs. ## What is Orca 2’s intended use(s)? + Orca 2 is built for research purposes only. + The main purpose is to allow the research community to assess its abilities and to provide a foundation for building better frontier models. ## How was Orca 2 evaluated? + Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer to Section 6 and Appendix in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on evaluations. ## Model Details Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities. All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf). Please refer to LLaMA-2 technical report for details on the model architecture. ## License Orca 2 is licensed under the [Microsoft Research License](LICENSE). Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved. ## Bias, Risks, and Limitations Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the common limitations of other large language models or limitation caused by its training process, including: **Data Biases**: Large language models, trained on extensive data, can inadvertently carry biases present in the source data. Consequently, the models may generate outputs that could be potentially biased or unfair. **Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting in potential inaccuracies or nonsensical responses. **Lack of Transparency**: Due to the complexity and size, large language models can act as “black boxes”, making it difficult to comprehend the rationale behind specific outputs or decisions. We recommend reviewing transparency notes from Azure for more information. **Content Harms**: There are various types of content harms that large language models can cause. 
It is important to be aware of them when using these models, and to take actions to prevent them. It is recommended to leverage various content moderation services provided by different companies and institutions. On an important note, we hope for better regulations and standards from government and technology leaders around content harms for AI technologies in future. We value and acknowledge the important role that research and open source community can play in this direction. **Hallucination**: It is important to be aware and cautious not to entirely rely on a given language model for critical decisions or information that might have deep impact as it is not obvious how to prevent these models from fabricating content. Moreover, it is not clear whether small models may be more susceptible to hallucination in ungrounded generation use cases due to their smaller sizes and hence reduced memorization capacities. This is an active research topic and we hope there will be more rigorous measurement, understanding and mitigations around this topic. **Potential for Misuse**: Without suitable safeguards, there is a risk that these models could be maliciously used for generating disinformation or harmful content. **Data Distribution**: Orca 2’s performance is likely to correlate strongly with the distribution of the tuning data. This correlation might limit its accuracy in areas underrepresented in the training dataset such as math, coding, and reasoning. **System messages**: Orca 2 demonstrates variance in performance depending on the system instructions. Additionally, the stochasticity introduced by the model size may lead to generation of non-deterministic responses to different system instructions. **Zero-Shot Settings**: Orca 2 was trained on data that mostly simulate zero-shot settings. While the model demonstrate very strong performance in zero-shot settings, it does not show the same gains of using few-shot learning compared to other, specially larger, models. **Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages and shortcomings of the models and methods used for data generation. We posit that Orca 2 benefits from the safety measures incorporated during training and safety guardrails (e.g., content filter) within the Azure OpenAI API. However, detailed studies are required for better quantification of such risks. This model is solely designed for research settings, and its testing has only been carried out in such environments. It should not be used in downstream applications, as additional analysis is needed to assess potential harm or bias in the proposed application. ## Getting started with Orca 2 **Inference with Hugging Face library** ```python import torch import transformers if torch.cuda.is_available(): torch.set_default_device("cuda") else: torch.set_default_device("cpu") model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-7b", device_map='auto') # https://github.com/huggingface/transformers/issues/27132 # please use the slow tokenizer since fast and slow tokenizer produces different tokens tokenizer = transformers.AutoTokenizer.from_pretrained( "microsoft/Orca-2-7b", use_fast=False, ) system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior." 
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?" prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant" inputs = tokenizer(prompt, return_tensors='pt') output_ids = model.generate(inputs["input_ids"],) answer = tokenizer.batch_decode(output_ids)[0] print(answer) # This example continues showing how to add a second turn message by the user to the conversation second_turn_user_message = "Give me a list of the key points of your first answer." # we set add_special_tokens=False because we dont want to automatically add a bos_token between messages second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant" second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False) second_turn_input = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1) output_ids_2 = model.generate(second_turn_input,) second_turn_answer = tokenizer.batch_decode(output_ids_2)[0] print(second_turn_answer) ``` **Safe inference with Azure AI Content Safety** The usage of [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model prediction is strongly encouraged and can help preventing some of content harms. Azure AI Content Safety is a content moderation platform that uses AI to moderate content. By having Azure AI Content Safety on the output of Orca 2, the model output can be moderated by scanning it for different harm categories including sexual content, violence, hate, and self-harm with multiple severity levels and multi-lingual detection. ```python import os import math import transformers import torch from azure.ai.contentsafety import ContentSafetyClient from azure.core.credentials import AzureKeyCredential from azure.core.exceptions import HttpResponseError from azure.ai.contentsafety.models import AnalyzeTextOptions CONTENT_SAFETY_KEY = os.environ["CONTENT_SAFETY_KEY"] CONTENT_SAFETY_ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"] # We use Azure AI Content Safety to filter out any content that reaches "Medium" threshold # For more information: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/ def should_filter_out(input_text, threshold=4): # Create an Content Safety client client = ContentSafetyClient(CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY)) # Construct a request request = AnalyzeTextOptions(text=input_text) # Analyze text try: response = client.analyze_text(request) except HttpResponseError as e: print("Analyze text failed.") if e.error: print(f"Error code: {e.error.code}") print(f"Error message: {e.error.message}") raise print(e) raise categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"] max_score = -math.inf for category in categories: max_score = max(max_score, getattr(response, category).severity) return max_score >= threshold model_path = 'microsoft/Orca-2-7b' device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = transformers.AutoModelForCausalLM.from_pretrained(model_path) model.to(device) tokenizer = transformers.AutoTokenizer.from_pretrained( model_path, model_max_length=4096, padding_side="right", use_fast=False, add_special_tokens=False, ) system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. 
You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior." user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. —/ \" \nIs the comment obscene? \nOptions : Yes, No." prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant" inputs = tokenizer(prompt, return_tensors='pt') inputs = inputs.to(device) output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, temperature=0.0, use_cache=True) sequence_length = inputs["input_ids"].shape[1] new_output_ids = output_ids[:, sequence_length:] answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True) final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]" print(final_output) ``` ## Citation ```bibtex @misc{mitra2023orca, title={Orca 2: Teaching Small Language Models How to Reason}, author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah}, year={2023}, eprint={2311.11045}, archivePrefix={arXiv}, primaryClass={cs.AI} } ```
corto-ai/nomic-embed-text-v1
corto-ai
"2024-05-06T05:18:56Z"
9,802
1
sentence-transformers
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "nomic_bert", "feature-extraction", "sentence-similarity", "mteb", "transformers", "transformers.js", "custom_code", "en", "arxiv:2402.01613", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2024-05-06T05:04:53Z"
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - feature-extraction - sentence-similarity - mteb - transformers - transformers.js model-index: - name: epoch_0_model results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.8507462686567 - type: ap value: 40.592189159090495 - type: f1 value: 71.01634655512476 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.51892500000001 - type: ap value: 88.50346762975335 - type: f1 value: 91.50342077459624 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.364 - type: f1 value: 46.72708080922794 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 25.178 - type: map_at_10 value: 40.244 - type: map_at_100 value: 41.321999999999996 - type: map_at_1000 value: 41.331 - type: map_at_3 value: 35.016999999999996 - type: map_at_5 value: 37.99 - type: mrr_at_1 value: 25.605 - type: mrr_at_10 value: 40.422000000000004 - type: mrr_at_100 value: 41.507 - type: mrr_at_1000 value: 41.516 - type: mrr_at_3 value: 35.23 - type: mrr_at_5 value: 38.15 - type: ndcg_at_1 value: 25.178 - type: ndcg_at_10 value: 49.258 - type: ndcg_at_100 value: 53.776 - type: ndcg_at_1000 value: 53.995000000000005 - type: ndcg_at_3 value: 38.429 - type: ndcg_at_5 value: 43.803 - type: precision_at_1 value: 25.178 - type: precision_at_10 value: 7.831 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 16.121 - type: precision_at_5 value: 12.29 - type: recall_at_1 value: 25.178 - type: recall_at_10 value: 78.307 - type: recall_at_100 value: 97.866 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 48.364000000000004 - type: recall_at_5 value: 61.451 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 45.93034494751465 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 36.64579480054327 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.601310529222054 - type: mrr value: 75.04484896451656 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 88.57797718095814 - type: cos_sim_spearman value: 86.47064499110101 - type: euclidean_pearson value: 87.4559602783142 - type: euclidean_spearman value: 86.47064499110101 - type: manhattan_pearson value: 87.7232764230245 - type: manhattan_spearman value: 86.91222131777742 - task: type: Classification dataset: type: mteb/banking77 name: MTEB 
Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.5422077922078 - type: f1 value: 84.47657456950589 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.48953561974464 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 32.75995857510105 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.008000000000003 - type: map_at_10 value: 39.51 - type: map_at_100 value: 40.841 - type: map_at_1000 value: 40.973 - type: map_at_3 value: 36.248999999999995 - type: map_at_5 value: 38.096999999999994 - type: mrr_at_1 value: 36.481 - type: mrr_at_10 value: 44.818000000000005 - type: mrr_at_100 value: 45.64 - type: mrr_at_1000 value: 45.687 - type: mrr_at_3 value: 42.036 - type: mrr_at_5 value: 43.782 - type: ndcg_at_1 value: 36.481 - type: ndcg_at_10 value: 45.152 - type: ndcg_at_100 value: 50.449 - type: ndcg_at_1000 value: 52.76499999999999 - type: ndcg_at_3 value: 40.161 - type: ndcg_at_5 value: 42.577999999999996 - type: precision_at_1 value: 36.481 - type: precision_at_10 value: 8.369 - type: precision_at_100 value: 1.373 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 18.693 - type: precision_at_5 value: 13.533999999999999 - type: recall_at_1 value: 30.008000000000003 - type: recall_at_10 value: 56.108999999999995 - type: recall_at_100 value: 78.55499999999999 - type: recall_at_1000 value: 93.659 - type: recall_at_3 value: 41.754999999999995 - type: recall_at_5 value: 48.296 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.262 - type: map_at_10 value: 40.139 - type: map_at_100 value: 41.394 - type: map_at_1000 value: 41.526 - type: map_at_3 value: 37.155 - type: map_at_5 value: 38.785 - type: mrr_at_1 value: 38.153 - type: mrr_at_10 value: 46.369 - type: mrr_at_100 value: 47.072 - type: mrr_at_1000 value: 47.111999999999995 - type: mrr_at_3 value: 44.268 - type: mrr_at_5 value: 45.389 - type: ndcg_at_1 value: 38.153 - type: ndcg_at_10 value: 45.925 - type: ndcg_at_100 value: 50.394000000000005 - type: ndcg_at_1000 value: 52.37500000000001 - type: ndcg_at_3 value: 41.754000000000005 - type: ndcg_at_5 value: 43.574 - type: precision_at_1 value: 38.153 - type: precision_at_10 value: 8.796 - type: precision_at_100 value: 1.432 - type: precision_at_1000 value: 0.189 - type: precision_at_3 value: 20.318 - type: precision_at_5 value: 14.395 - type: recall_at_1 value: 30.262 - type: recall_at_10 value: 55.72200000000001 - type: recall_at_100 value: 74.97500000000001 - type: recall_at_1000 value: 87.342 - type: recall_at_3 value: 43.129 - type: recall_at_5 value: 48.336 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 39.951 - type: map_at_10 value: 51.248000000000005 - type: map_at_100 value: 52.188 - type: map_at_1000 value: 52.247 - type: map_at_3 value: 48.211 - type: map_at_5 value: 49.797000000000004 - type: mrr_at_1 
value: 45.329 - type: mrr_at_10 value: 54.749 - type: mrr_at_100 value: 55.367999999999995 - type: mrr_at_1000 value: 55.400000000000006 - type: mrr_at_3 value: 52.382 - type: mrr_at_5 value: 53.649 - type: ndcg_at_1 value: 45.329 - type: ndcg_at_10 value: 56.847 - type: ndcg_at_100 value: 60.738 - type: ndcg_at_1000 value: 61.976 - type: ndcg_at_3 value: 51.59 - type: ndcg_at_5 value: 53.915 - type: precision_at_1 value: 45.329 - type: precision_at_10 value: 8.959 - type: precision_at_100 value: 1.187 - type: precision_at_1000 value: 0.134 - type: precision_at_3 value: 22.612 - type: precision_at_5 value: 15.273 - type: recall_at_1 value: 39.951 - type: recall_at_10 value: 70.053 - type: recall_at_100 value: 86.996 - type: recall_at_1000 value: 95.707 - type: recall_at_3 value: 56.032000000000004 - type: recall_at_5 value: 61.629999999999995 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.566 - type: map_at_10 value: 33.207 - type: map_at_100 value: 34.166000000000004 - type: map_at_1000 value: 34.245 - type: map_at_3 value: 30.94 - type: map_at_5 value: 32.01 - type: mrr_at_1 value: 27.345000000000002 - type: mrr_at_10 value: 35.193000000000005 - type: mrr_at_100 value: 35.965 - type: mrr_at_1000 value: 36.028999999999996 - type: mrr_at_3 value: 32.806000000000004 - type: mrr_at_5 value: 34.021 - type: ndcg_at_1 value: 27.345000000000002 - type: ndcg_at_10 value: 37.891999999999996 - type: ndcg_at_100 value: 42.664 - type: ndcg_at_1000 value: 44.757000000000005 - type: ndcg_at_3 value: 33.123000000000005 - type: ndcg_at_5 value: 35.035 - type: precision_at_1 value: 27.345000000000002 - type: precision_at_10 value: 5.763 - type: precision_at_100 value: 0.859 - type: precision_at_1000 value: 0.108 - type: precision_at_3 value: 13.71 - type: precision_at_5 value: 9.401 - type: recall_at_1 value: 25.566 - type: recall_at_10 value: 50.563 - type: recall_at_100 value: 72.86399999999999 - type: recall_at_1000 value: 88.68599999999999 - type: recall_at_3 value: 37.43 - type: recall_at_5 value: 41.894999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.663 - type: map_at_10 value: 23.552 - type: map_at_100 value: 24.538 - type: map_at_1000 value: 24.661 - type: map_at_3 value: 21.085 - type: map_at_5 value: 22.391 - type: mrr_at_1 value: 20.025000000000002 - type: mrr_at_10 value: 27.643 - type: mrr_at_100 value: 28.499999999999996 - type: mrr_at_1000 value: 28.582 - type: mrr_at_3 value: 25.083 - type: mrr_at_5 value: 26.544 - type: ndcg_at_1 value: 20.025000000000002 - type: ndcg_at_10 value: 28.272000000000002 - type: ndcg_at_100 value: 33.353 - type: ndcg_at_1000 value: 36.454 - type: ndcg_at_3 value: 23.579 - type: ndcg_at_5 value: 25.685000000000002 - type: precision_at_1 value: 20.025000000000002 - type: precision_at_10 value: 5.187 - type: precision_at_100 value: 0.897 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 10.987 - type: precision_at_5 value: 8.06 - type: recall_at_1 value: 16.663 - type: recall_at_10 value: 38.808 - type: recall_at_100 value: 61.305 - type: recall_at_1000 value: 83.571 - type: recall_at_3 value: 25.907999999999998 - type: recall_at_5 value: 31.214 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None 
metrics: - type: map_at_1 value: 27.695999999999998 - type: map_at_10 value: 37.018 - type: map_at_100 value: 38.263000000000005 - type: map_at_1000 value: 38.371 - type: map_at_3 value: 34.226 - type: map_at_5 value: 35.809999999999995 - type: mrr_at_1 value: 32.916000000000004 - type: mrr_at_10 value: 42.067 - type: mrr_at_100 value: 42.925000000000004 - type: mrr_at_1000 value: 42.978 - type: mrr_at_3 value: 39.637 - type: mrr_at_5 value: 41.134 - type: ndcg_at_1 value: 32.916000000000004 - type: ndcg_at_10 value: 42.539 - type: ndcg_at_100 value: 47.873 - type: ndcg_at_1000 value: 50.08200000000001 - type: ndcg_at_3 value: 37.852999999999994 - type: ndcg_at_5 value: 40.201 - type: precision_at_1 value: 32.916000000000004 - type: precision_at_10 value: 7.5840000000000005 - type: precision_at_100 value: 1.199 - type: precision_at_1000 value: 0.155 - type: precision_at_3 value: 17.485 - type: precision_at_5 value: 12.512 - type: recall_at_1 value: 27.695999999999998 - type: recall_at_10 value: 53.638 - type: recall_at_100 value: 76.116 - type: recall_at_1000 value: 91.069 - type: recall_at_3 value: 41.13 - type: recall_at_5 value: 46.872 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.108 - type: map_at_10 value: 33.372 - type: map_at_100 value: 34.656 - type: map_at_1000 value: 34.768 - type: map_at_3 value: 30.830999999999996 - type: map_at_5 value: 32.204 - type: mrr_at_1 value: 29.110000000000003 - type: mrr_at_10 value: 37.979 - type: mrr_at_100 value: 38.933 - type: mrr_at_1000 value: 38.988 - type: mrr_at_3 value: 35.731 - type: mrr_at_5 value: 36.963 - type: ndcg_at_1 value: 29.110000000000003 - type: ndcg_at_10 value: 38.635000000000005 - type: ndcg_at_100 value: 44.324999999999996 - type: ndcg_at_1000 value: 46.747 - type: ndcg_at_3 value: 34.37 - type: ndcg_at_5 value: 36.228 - type: precision_at_1 value: 29.110000000000003 - type: precision_at_10 value: 6.963 - type: precision_at_100 value: 1.146 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 16.400000000000002 - type: precision_at_5 value: 11.552999999999999 - type: recall_at_1 value: 24.108 - type: recall_at_10 value: 49.597 - type: recall_at_100 value: 73.88900000000001 - type: recall_at_1000 value: 90.62400000000001 - type: recall_at_3 value: 37.662 - type: recall_at_5 value: 42.565 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.00791666666667 - type: map_at_10 value: 33.287749999999996 - type: map_at_100 value: 34.41141666666667 - type: map_at_1000 value: 34.52583333333333 - type: map_at_3 value: 30.734416666666668 - type: map_at_5 value: 32.137166666666666 - type: mrr_at_1 value: 29.305666666666664 - type: mrr_at_10 value: 37.22966666666666 - type: mrr_at_100 value: 38.066583333333334 - type: mrr_at_1000 value: 38.12616666666667 - type: mrr_at_3 value: 34.92275 - type: mrr_at_5 value: 36.23333333333334 - type: ndcg_at_1 value: 29.305666666666664 - type: ndcg_at_10 value: 38.25533333333333 - type: ndcg_at_100 value: 43.25266666666666 - type: ndcg_at_1000 value: 45.63583333333334 - type: ndcg_at_3 value: 33.777166666666666 - type: ndcg_at_5 value: 35.85 - type: precision_at_1 value: 29.305666666666664 - type: precision_at_10 value: 6.596416666666667 - type: precision_at_100 value: 1.0784166666666668 - type: precision_at_1000 value: 0.14666666666666664 - 
type: precision_at_3 value: 15.31075 - type: precision_at_5 value: 10.830916666666667 - type: recall_at_1 value: 25.00791666666667 - type: recall_at_10 value: 49.10933333333333 - type: recall_at_100 value: 71.09216666666667 - type: recall_at_1000 value: 87.77725000000001 - type: recall_at_3 value: 36.660916666666665 - type: recall_at_5 value: 41.94149999999999 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.521 - type: map_at_10 value: 30.043 - type: map_at_100 value: 30.936000000000003 - type: map_at_1000 value: 31.022 - type: map_at_3 value: 27.926000000000002 - type: map_at_5 value: 29.076999999999998 - type: mrr_at_1 value: 26.227 - type: mrr_at_10 value: 32.822 - type: mrr_at_100 value: 33.61 - type: mrr_at_1000 value: 33.672000000000004 - type: mrr_at_3 value: 30.776999999999997 - type: mrr_at_5 value: 31.866 - type: ndcg_at_1 value: 26.227 - type: ndcg_at_10 value: 34.041 - type: ndcg_at_100 value: 38.394 - type: ndcg_at_1000 value: 40.732 - type: ndcg_at_3 value: 30.037999999999997 - type: ndcg_at_5 value: 31.845000000000002 - type: precision_at_1 value: 26.227 - type: precision_at_10 value: 5.244999999999999 - type: precision_at_100 value: 0.808 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 12.679000000000002 - type: precision_at_5 value: 8.773 - type: recall_at_1 value: 23.521 - type: recall_at_10 value: 43.633 - type: recall_at_100 value: 63.126000000000005 - type: recall_at_1000 value: 80.765 - type: recall_at_3 value: 32.614 - type: recall_at_5 value: 37.15 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.236 - type: map_at_10 value: 22.898 - type: map_at_100 value: 23.878 - type: map_at_1000 value: 24.009 - type: map_at_3 value: 20.87 - type: map_at_5 value: 22.025 - type: mrr_at_1 value: 19.339000000000002 - type: mrr_at_10 value: 26.382 - type: mrr_at_100 value: 27.245 - type: mrr_at_1000 value: 27.33 - type: mrr_at_3 value: 24.386 - type: mrr_at_5 value: 25.496000000000002 - type: ndcg_at_1 value: 19.339000000000002 - type: ndcg_at_10 value: 27.139999999999997 - type: ndcg_at_100 value: 31.944 - type: ndcg_at_1000 value: 35.077999999999996 - type: ndcg_at_3 value: 23.424 - type: ndcg_at_5 value: 25.188 - type: precision_at_1 value: 19.339000000000002 - type: precision_at_10 value: 4.8309999999999995 - type: precision_at_100 value: 0.845 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 10.874 - type: precision_at_5 value: 7.825 - type: recall_at_1 value: 16.236 - type: recall_at_10 value: 36.513 - type: recall_at_100 value: 57.999 - type: recall_at_1000 value: 80.512 - type: recall_at_3 value: 26.179999999999996 - type: recall_at_5 value: 30.712 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.11 - type: map_at_10 value: 31.566 - type: map_at_100 value: 32.647 - type: map_at_1000 value: 32.753 - type: map_at_3 value: 29.24 - type: map_at_5 value: 30.564999999999998 - type: mrr_at_1 value: 28.265 - type: mrr_at_10 value: 35.504000000000005 - type: mrr_at_100 value: 36.436 - type: mrr_at_1000 value: 36.503 - type: mrr_at_3 value: 33.349000000000004 - type: mrr_at_5 value: 34.622 - type: ndcg_at_1 value: 28.265 - type: ndcg_at_10 value: 36.192 - type: ndcg_at_100 value: 
41.388000000000005 - type: ndcg_at_1000 value: 43.948 - type: ndcg_at_3 value: 31.959 - type: ndcg_at_5 value: 33.998 - type: precision_at_1 value: 28.265 - type: precision_at_10 value: 5.989 - type: precision_at_100 value: 0.9650000000000001 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 14.335 - type: precision_at_5 value: 10.112 - type: recall_at_1 value: 24.11 - type: recall_at_10 value: 46.418 - type: recall_at_100 value: 69.314 - type: recall_at_1000 value: 87.397 - type: recall_at_3 value: 34.724 - type: recall_at_5 value: 39.925 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.091 - type: map_at_10 value: 29.948999999999998 - type: map_at_100 value: 31.502000000000002 - type: map_at_1000 value: 31.713 - type: map_at_3 value: 27.464 - type: map_at_5 value: 28.968 - type: mrr_at_1 value: 26.482 - type: mrr_at_10 value: 34.009 - type: mrr_at_100 value: 35.081 - type: mrr_at_1000 value: 35.138000000000005 - type: mrr_at_3 value: 31.785000000000004 - type: mrr_at_5 value: 33.178999999999995 - type: ndcg_at_1 value: 26.482 - type: ndcg_at_10 value: 35.008 - type: ndcg_at_100 value: 41.272999999999996 - type: ndcg_at_1000 value: 43.972 - type: ndcg_at_3 value: 30.804 - type: ndcg_at_5 value: 33.046 - type: precision_at_1 value: 26.482 - type: precision_at_10 value: 6.462 - type: precision_at_100 value: 1.431 - type: precision_at_1000 value: 0.22899999999999998 - type: precision_at_3 value: 14.360999999999999 - type: precision_at_5 value: 10.474 - type: recall_at_1 value: 22.091 - type: recall_at_10 value: 45.125 - type: recall_at_100 value: 72.313 - type: recall_at_1000 value: 89.503 - type: recall_at_3 value: 33.158 - type: recall_at_5 value: 39.086999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 19.883 - type: map_at_10 value: 26.951000000000004 - type: map_at_100 value: 27.927999999999997 - type: map_at_1000 value: 28.022000000000002 - type: map_at_3 value: 24.616 - type: map_at_5 value: 25.917 - type: mrr_at_1 value: 21.996 - type: mrr_at_10 value: 29.221000000000004 - type: mrr_at_100 value: 30.024 - type: mrr_at_1000 value: 30.095 - type: mrr_at_3 value: 26.833000000000002 - type: mrr_at_5 value: 28.155 - type: ndcg_at_1 value: 21.996 - type: ndcg_at_10 value: 31.421 - type: ndcg_at_100 value: 36.237 - type: ndcg_at_1000 value: 38.744 - type: ndcg_at_3 value: 26.671 - type: ndcg_at_5 value: 28.907 - type: precision_at_1 value: 21.996 - type: precision_at_10 value: 5.009 - type: precision_at_100 value: 0.799 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 11.275 - type: precision_at_5 value: 8.059 - type: recall_at_1 value: 19.883 - type: recall_at_10 value: 43.132999999999996 - type: recall_at_100 value: 65.654 - type: recall_at_1000 value: 84.492 - type: recall_at_3 value: 30.209000000000003 - type: recall_at_5 value: 35.616 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 17.756 - type: map_at_10 value: 30.378 - type: map_at_100 value: 32.537 - type: map_at_1000 value: 32.717 - type: map_at_3 value: 25.599 - type: map_at_5 value: 28.372999999999998 - type: mrr_at_1 value: 41.303 - type: mrr_at_10 value: 53.483999999999995 - type: mrr_at_100 value: 54.106 - type: 
mrr_at_1000 value: 54.127 - type: mrr_at_3 value: 50.315 - type: mrr_at_5 value: 52.396 - type: ndcg_at_1 value: 41.303 - type: ndcg_at_10 value: 40.503 - type: ndcg_at_100 value: 47.821000000000005 - type: ndcg_at_1000 value: 50.788 - type: ndcg_at_3 value: 34.364 - type: ndcg_at_5 value: 36.818 - type: precision_at_1 value: 41.303 - type: precision_at_10 value: 12.463000000000001 - type: precision_at_100 value: 2.037 - type: precision_at_1000 value: 0.26 - type: precision_at_3 value: 25.798 - type: precision_at_5 value: 19.896 - type: recall_at_1 value: 17.756 - type: recall_at_10 value: 46.102 - type: recall_at_100 value: 70.819 - type: recall_at_1000 value: 87.21799999999999 - type: recall_at_3 value: 30.646 - type: recall_at_5 value: 38.022 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 9.033 - type: map_at_10 value: 20.584 - type: map_at_100 value: 29.518 - type: map_at_1000 value: 31.186000000000003 - type: map_at_3 value: 14.468 - type: map_at_5 value: 17.177 - type: mrr_at_1 value: 69.75 - type: mrr_at_10 value: 77.025 - type: mrr_at_100 value: 77.36699999999999 - type: mrr_at_1000 value: 77.373 - type: mrr_at_3 value: 75.583 - type: mrr_at_5 value: 76.396 - type: ndcg_at_1 value: 58.5 - type: ndcg_at_10 value: 45.033 - type: ndcg_at_100 value: 49.071 - type: ndcg_at_1000 value: 56.056 - type: ndcg_at_3 value: 49.936 - type: ndcg_at_5 value: 47.471999999999994 - type: precision_at_1 value: 69.75 - type: precision_at_10 value: 35.775 - type: precision_at_100 value: 11.594999999999999 - type: precision_at_1000 value: 2.062 - type: precision_at_3 value: 52.5 - type: precision_at_5 value: 45.300000000000004 - type: recall_at_1 value: 9.033 - type: recall_at_10 value: 26.596999999999998 - type: recall_at_100 value: 54.607000000000006 - type: recall_at_1000 value: 76.961 - type: recall_at_3 value: 15.754999999999999 - type: recall_at_5 value: 20.033 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 48.345000000000006 - type: f1 value: 43.4514918068706 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 71.29100000000001 - type: map_at_10 value: 81.059 - type: map_at_100 value: 81.341 - type: map_at_1000 value: 81.355 - type: map_at_3 value: 79.74799999999999 - type: map_at_5 value: 80.612 - type: mrr_at_1 value: 76.40299999999999 - type: mrr_at_10 value: 84.615 - type: mrr_at_100 value: 84.745 - type: mrr_at_1000 value: 84.748 - type: mrr_at_3 value: 83.776 - type: mrr_at_5 value: 84.343 - type: ndcg_at_1 value: 76.40299999999999 - type: ndcg_at_10 value: 84.981 - type: ndcg_at_100 value: 86.00999999999999 - type: ndcg_at_1000 value: 86.252 - type: ndcg_at_3 value: 82.97 - type: ndcg_at_5 value: 84.152 - type: precision_at_1 value: 76.40299999999999 - type: precision_at_10 value: 10.446 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 32.147999999999996 - type: precision_at_5 value: 20.135 - type: recall_at_1 value: 71.29100000000001 - type: recall_at_10 value: 93.232 - type: recall_at_100 value: 97.363 - type: recall_at_1000 value: 98.905 - type: recall_at_3 value: 87.893 - type: recall_at_5 value: 90.804 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: 
default split: test revision: None metrics: - type: map_at_1 value: 18.667 - type: map_at_10 value: 30.853 - type: map_at_100 value: 32.494 - type: map_at_1000 value: 32.677 - type: map_at_3 value: 26.91 - type: map_at_5 value: 29.099000000000004 - type: mrr_at_1 value: 37.191 - type: mrr_at_10 value: 46.171 - type: mrr_at_100 value: 47.056 - type: mrr_at_1000 value: 47.099000000000004 - type: mrr_at_3 value: 44.059 - type: mrr_at_5 value: 45.147 - type: ndcg_at_1 value: 37.191 - type: ndcg_at_10 value: 38.437 - type: ndcg_at_100 value: 44.62 - type: ndcg_at_1000 value: 47.795 - type: ndcg_at_3 value: 35.003 - type: ndcg_at_5 value: 36.006 - type: precision_at_1 value: 37.191 - type: precision_at_10 value: 10.586 - type: precision_at_100 value: 1.688 - type: precision_at_1000 value: 0.22699999999999998 - type: precision_at_3 value: 23.302 - type: precision_at_5 value: 17.006 - type: recall_at_1 value: 18.667 - type: recall_at_10 value: 45.367000000000004 - type: recall_at_100 value: 68.207 - type: recall_at_1000 value: 87.072 - type: recall_at_3 value: 32.129000000000005 - type: recall_at_5 value: 37.719 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 39.494 - type: map_at_10 value: 66.223 - type: map_at_100 value: 67.062 - type: map_at_1000 value: 67.11500000000001 - type: map_at_3 value: 62.867 - type: map_at_5 value: 64.994 - type: mrr_at_1 value: 78.987 - type: mrr_at_10 value: 84.585 - type: mrr_at_100 value: 84.773 - type: mrr_at_1000 value: 84.77900000000001 - type: mrr_at_3 value: 83.592 - type: mrr_at_5 value: 84.235 - type: ndcg_at_1 value: 78.987 - type: ndcg_at_10 value: 73.64 - type: ndcg_at_100 value: 76.519 - type: ndcg_at_1000 value: 77.51 - type: ndcg_at_3 value: 68.893 - type: ndcg_at_5 value: 71.585 - type: precision_at_1 value: 78.987 - type: precision_at_10 value: 15.529000000000002 - type: precision_at_100 value: 1.7770000000000001 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 44.808 - type: precision_at_5 value: 29.006999999999998 - type: recall_at_1 value: 39.494 - type: recall_at_10 value: 77.643 - type: recall_at_100 value: 88.825 - type: recall_at_1000 value: 95.321 - type: recall_at_3 value: 67.211 - type: recall_at_5 value: 72.519 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 85.55959999999999 - type: ap value: 80.7246500384617 - type: f1 value: 85.52336485065454 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 23.631 - type: map_at_10 value: 36.264 - type: map_at_100 value: 37.428 - type: map_at_1000 value: 37.472 - type: map_at_3 value: 32.537 - type: map_at_5 value: 34.746 - type: mrr_at_1 value: 24.312 - type: mrr_at_10 value: 36.858000000000004 - type: mrr_at_100 value: 37.966 - type: mrr_at_1000 value: 38.004 - type: mrr_at_3 value: 33.188 - type: mrr_at_5 value: 35.367 - type: ndcg_at_1 value: 24.312 - type: ndcg_at_10 value: 43.126999999999995 - type: ndcg_at_100 value: 48.642 - type: ndcg_at_1000 value: 49.741 - type: ndcg_at_3 value: 35.589 - type: ndcg_at_5 value: 39.515 - type: precision_at_1 value: 24.312 - type: precision_at_10 value: 6.699 - type: precision_at_100 value: 0.9450000000000001 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.153 - type: precision_at_5 
value: 11.065999999999999 - type: recall_at_1 value: 23.631 - type: recall_at_10 value: 64.145 - type: recall_at_100 value: 89.41 - type: recall_at_1000 value: 97.83500000000001 - type: recall_at_3 value: 43.769000000000005 - type: recall_at_5 value: 53.169 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.4108527131783 - type: f1 value: 93.1415880261038 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.24806201550388 - type: f1 value: 60.531916308197175 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.71553463349024 - type: f1 value: 71.70753174900791 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.79757901815736 - type: f1 value: 77.83719850433258 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.74193296622113 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.64257594108566 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.811018518883625 - type: mrr value: 31.910376577445003 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.409 - type: map_at_10 value: 13.093 - type: map_at_100 value: 16.256999999999998 - type: map_at_1000 value: 17.617 - type: map_at_3 value: 9.555 - type: map_at_5 value: 11.428 - type: mrr_at_1 value: 45.201 - type: mrr_at_10 value: 54.179 - type: mrr_at_100 value: 54.812000000000005 - type: mrr_at_1000 value: 54.840999999999994 - type: mrr_at_3 value: 51.909000000000006 - type: mrr_at_5 value: 53.519000000000005 - type: ndcg_at_1 value: 43.189 - type: ndcg_at_10 value: 35.028 - type: ndcg_at_100 value: 31.226 - type: ndcg_at_1000 value: 39.678000000000004 - type: ndcg_at_3 value: 40.596 - type: ndcg_at_5 value: 38.75 - type: precision_at_1 value: 44.582 - type: precision_at_10 value: 25.974999999999998 - type: precision_at_100 value: 7.793 - type: precision_at_1000 value: 2.036 - type: precision_at_3 value: 38.493 - type: precision_at_5 value: 33.994 - type: recall_at_1 value: 5.409 - type: recall_at_10 value: 16.875999999999998 - type: recall_at_100 value: 30.316 - type: recall_at_1000 value: 60.891 - type: recall_at_3 value: 10.688 - type: recall_at_5 value: 13.832 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 36.375 - type: map_at_10 value: 51.991 - type: map_at_100 value: 52.91400000000001 - type: map_at_1000 value: 52.93600000000001 - type: 
map_at_3 value: 48.014 - type: map_at_5 value: 50.381 - type: mrr_at_1 value: 40.759 - type: mrr_at_10 value: 54.617000000000004 - type: mrr_at_100 value: 55.301 - type: mrr_at_1000 value: 55.315000000000005 - type: mrr_at_3 value: 51.516 - type: mrr_at_5 value: 53.435 - type: ndcg_at_1 value: 40.759 - type: ndcg_at_10 value: 59.384 - type: ndcg_at_100 value: 63.157 - type: ndcg_at_1000 value: 63.654999999999994 - type: ndcg_at_3 value: 52.114000000000004 - type: ndcg_at_5 value: 55.986000000000004 - type: precision_at_1 value: 40.759 - type: precision_at_10 value: 9.411999999999999 - type: precision_at_100 value: 1.153 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 23.329 - type: precision_at_5 value: 16.256999999999998 - type: recall_at_1 value: 36.375 - type: recall_at_10 value: 79.053 - type: recall_at_100 value: 95.167 - type: recall_at_1000 value: 98.82 - type: recall_at_3 value: 60.475 - type: recall_at_5 value: 69.327 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 70.256 - type: map_at_10 value: 83.8 - type: map_at_100 value: 84.425 - type: map_at_1000 value: 84.444 - type: map_at_3 value: 80.906 - type: map_at_5 value: 82.717 - type: mrr_at_1 value: 80.97999999999999 - type: mrr_at_10 value: 87.161 - type: mrr_at_100 value: 87.262 - type: mrr_at_1000 value: 87.263 - type: mrr_at_3 value: 86.175 - type: mrr_at_5 value: 86.848 - type: ndcg_at_1 value: 80.97999999999999 - type: ndcg_at_10 value: 87.697 - type: ndcg_at_100 value: 88.959 - type: ndcg_at_1000 value: 89.09899999999999 - type: ndcg_at_3 value: 84.83800000000001 - type: ndcg_at_5 value: 86.401 - type: precision_at_1 value: 80.97999999999999 - type: precision_at_10 value: 13.261000000000001 - type: precision_at_100 value: 1.5150000000000001 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 37.01 - type: precision_at_5 value: 24.298000000000002 - type: recall_at_1 value: 70.256 - type: recall_at_10 value: 94.935 - type: recall_at_100 value: 99.274 - type: recall_at_1000 value: 99.928 - type: recall_at_3 value: 86.602 - type: recall_at_5 value: 91.133 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 56.322692497613104 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 61.895813503775074 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.338 - type: map_at_10 value: 10.767 - type: map_at_100 value: 12.537999999999998 - type: map_at_1000 value: 12.803999999999998 - type: map_at_3 value: 7.788 - type: map_at_5 value: 9.302000000000001 - type: mrr_at_1 value: 21.4 - type: mrr_at_10 value: 31.637999999999998 - type: mrr_at_100 value: 32.688 - type: mrr_at_1000 value: 32.756 - type: mrr_at_3 value: 28.433000000000003 - type: mrr_at_5 value: 30.178 - type: ndcg_at_1 value: 21.4 - type: ndcg_at_10 value: 18.293 - type: ndcg_at_100 value: 25.274 - type: ndcg_at_1000 value: 30.284 - type: ndcg_at_3 value: 17.391000000000002 - type: ndcg_at_5 value: 15.146999999999998 - type: precision_at_1 value: 21.4 - type: precision_at_10 value: 9.48 - type: precision_at_100 value: 1.949 - type: 
precision_at_1000 value: 0.316 - type: precision_at_3 value: 16.167 - type: precision_at_5 value: 13.22 - type: recall_at_1 value: 4.338 - type: recall_at_10 value: 19.213 - type: recall_at_100 value: 39.562999999999995 - type: recall_at_1000 value: 64.08 - type: recall_at_3 value: 9.828000000000001 - type: recall_at_5 value: 13.383000000000001 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 82.42568163642142 - type: cos_sim_spearman value: 78.5797159641342 - type: euclidean_pearson value: 80.22151260811604 - type: euclidean_spearman value: 78.5797151953878 - type: manhattan_pearson value: 80.21224215864788 - type: manhattan_spearman value: 78.55641478381344 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 85.44020710812569 - type: cos_sim_spearman value: 78.91631735081286 - type: euclidean_pearson value: 81.64188964182102 - type: euclidean_spearman value: 78.91633286881678 - type: manhattan_pearson value: 81.69294748512496 - type: manhattan_spearman value: 78.93438558002656 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 84.27165426412311 - type: cos_sim_spearman value: 85.40429140249618 - type: euclidean_pearson value: 84.7509580724893 - type: euclidean_spearman value: 85.40429140249618 - type: manhattan_pearson value: 84.76488289321308 - type: manhattan_spearman value: 85.4256793698708 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 83.138851760732 - type: cos_sim_spearman value: 81.64101363896586 - type: euclidean_pearson value: 82.55165038934942 - type: euclidean_spearman value: 81.64105257080502 - type: manhattan_pearson value: 82.52802949883335 - type: manhattan_spearman value: 81.61255430718158 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.0654695484029 - type: cos_sim_spearman value: 87.20408521902229 - type: euclidean_pearson value: 86.8110651362115 - type: euclidean_spearman value: 87.20408521902229 - type: manhattan_pearson value: 86.77984656478691 - type: manhattan_spearman value: 87.1719947099227 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.77823915496512 - type: cos_sim_spearman value: 85.43566325729779 - type: euclidean_pearson value: 84.5396956658821 - type: euclidean_spearman value: 85.43566325729779 - type: manhattan_pearson value: 84.5665398848169 - type: manhattan_spearman value: 85.44375870303232 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.20030208471798 - type: cos_sim_spearman value: 87.20485505076539 - type: euclidean_pearson value: 88.10588324368722 - type: euclidean_spearman value: 87.20485505076539 - type: manhattan_pearson value: 87.92324770415183 - type: manhattan_spearman value: 
87.0571314561877 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.06093161604453 - type: cos_sim_spearman value: 64.2163140357722 - type: euclidean_pearson value: 65.27589680994006 - type: euclidean_spearman value: 64.2163140357722 - type: manhattan_pearson value: 65.45904383711101 - type: manhattan_spearman value: 64.55404716679305 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.32976164578706 - type: cos_sim_spearman value: 85.54302197678368 - type: euclidean_pearson value: 85.26307149193056 - type: euclidean_spearman value: 85.54302197678368 - type: manhattan_pearson value: 85.26647282029371 - type: manhattan_spearman value: 85.5316135265568 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 81.44675968318754 - type: mrr value: 94.92741826075158 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 56.34400000000001 - type: map_at_10 value: 65.927 - type: map_at_100 value: 66.431 - type: map_at_1000 value: 66.461 - type: map_at_3 value: 63.529 - type: map_at_5 value: 64.818 - type: mrr_at_1 value: 59.333000000000006 - type: mrr_at_10 value: 67.54599999999999 - type: mrr_at_100 value: 67.892 - type: mrr_at_1000 value: 67.917 - type: mrr_at_3 value: 65.778 - type: mrr_at_5 value: 66.794 - type: ndcg_at_1 value: 59.333000000000006 - type: ndcg_at_10 value: 70.5 - type: ndcg_at_100 value: 72.688 - type: ndcg_at_1000 value: 73.483 - type: ndcg_at_3 value: 66.338 - type: ndcg_at_5 value: 68.265 - type: precision_at_1 value: 59.333000000000006 - type: precision_at_10 value: 9.3 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 25.889 - type: precision_at_5 value: 16.866999999999997 - type: recall_at_1 value: 56.34400000000001 - type: recall_at_10 value: 82.789 - type: recall_at_100 value: 92.767 - type: recall_at_1000 value: 99 - type: recall_at_3 value: 71.64399999999999 - type: recall_at_5 value: 76.322 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.75742574257426 - type: cos_sim_ap value: 93.52081548447406 - type: cos_sim_f1 value: 87.33850129198966 - type: cos_sim_precision value: 90.37433155080214 - type: cos_sim_recall value: 84.5 - type: dot_accuracy value: 99.75742574257426 - type: dot_ap value: 93.52081548447406 - type: dot_f1 value: 87.33850129198966 - type: dot_precision value: 90.37433155080214 - type: dot_recall value: 84.5 - type: euclidean_accuracy value: 99.75742574257426 - type: euclidean_ap value: 93.52081548447406 - type: euclidean_f1 value: 87.33850129198966 - type: euclidean_precision value: 90.37433155080214 - type: euclidean_recall value: 84.5 - type: manhattan_accuracy value: 99.75841584158415 - type: manhattan_ap value: 93.4975678585854 - type: manhattan_f1 value: 87.26708074534162 - type: manhattan_precision value: 90.45064377682404 - type: manhattan_recall value: 84.3 - 
type: max_accuracy value: 99.75841584158415 - type: max_ap value: 93.52081548447406 - type: max_f1 value: 87.33850129198966 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 64.31437036686651 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.25569319007206 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.90474939720706 - type: mrr value: 50.568115503777264 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.866828641244712 - type: cos_sim_spearman value: 30.077555055873866 - type: dot_pearson value: 29.866832988572266 - type: dot_spearman value: 30.077555055873866 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.232 - type: map_at_10 value: 2.094 - type: map_at_100 value: 11.971 - type: map_at_1000 value: 28.158 - type: map_at_3 value: 0.688 - type: map_at_5 value: 1.114 - type: mrr_at_1 value: 88 - type: mrr_at_10 value: 93.4 - type: mrr_at_100 value: 93.4 - type: mrr_at_1000 value: 93.4 - type: mrr_at_3 value: 93 - type: mrr_at_5 value: 93.4 - type: ndcg_at_1 value: 84 - type: ndcg_at_10 value: 79.923 - type: ndcg_at_100 value: 61.17 - type: ndcg_at_1000 value: 53.03 - type: ndcg_at_3 value: 84.592 - type: ndcg_at_5 value: 82.821 - type: precision_at_1 value: 88 - type: precision_at_10 value: 85 - type: precision_at_100 value: 63.019999999999996 - type: precision_at_1000 value: 23.554 - type: precision_at_3 value: 89.333 - type: precision_at_5 value: 87.2 - type: recall_at_1 value: 0.232 - type: recall_at_10 value: 2.255 - type: recall_at_100 value: 14.823 - type: recall_at_1000 value: 49.456 - type: recall_at_3 value: 0.718 - type: recall_at_5 value: 1.175 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.547 - type: map_at_10 value: 11.375 - type: map_at_100 value: 18.194 - type: map_at_1000 value: 19.749 - type: map_at_3 value: 5.825 - type: map_at_5 value: 8.581 - type: mrr_at_1 value: 32.653 - type: mrr_at_10 value: 51.32 - type: mrr_at_100 value: 51.747 - type: mrr_at_1000 value: 51.747 - type: mrr_at_3 value: 47.278999999999996 - type: mrr_at_5 value: 48.605 - type: ndcg_at_1 value: 29.592000000000002 - type: ndcg_at_10 value: 28.151 - type: ndcg_at_100 value: 39.438 - type: ndcg_at_1000 value: 50.769 - type: ndcg_at_3 value: 30.758999999999997 - type: ndcg_at_5 value: 30.366 - type: precision_at_1 value: 32.653 - type: precision_at_10 value: 25.714 - type: precision_at_100 value: 8.041 - type: precision_at_1000 value: 1.555 - type: precision_at_3 value: 33.333 - type: precision_at_5 value: 31.837 - type: recall_at_1 value: 2.547 - type: recall_at_10 value: 18.19 - type: recall_at_100 value: 49.538 - type: recall_at_1000 value: 83.86 - type: recall_at_3 value: 7.329 - type: recall_at_5 value: 11.532 - task: type: 
Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.4952 - type: ap value: 14.793362635531409 - type: f1 value: 55.204635551516915 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.5365025466893 - type: f1 value: 61.81742556334845 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 49.05531070301185 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.51725576682364 - type: cos_sim_ap value: 75.2292304265163 - type: cos_sim_f1 value: 69.54022988505749 - type: cos_sim_precision value: 63.65629110039457 - type: cos_sim_recall value: 76.62269129287598 - type: dot_accuracy value: 86.51725576682364 - type: dot_ap value: 75.22922386081054 - type: dot_f1 value: 69.54022988505749 - type: dot_precision value: 63.65629110039457 - type: dot_recall value: 76.62269129287598 - type: euclidean_accuracy value: 86.51725576682364 - type: euclidean_ap value: 75.22925730473472 - type: euclidean_f1 value: 69.54022988505749 - type: euclidean_precision value: 63.65629110039457 - type: euclidean_recall value: 76.62269129287598 - type: manhattan_accuracy value: 86.52321630804077 - type: manhattan_ap value: 75.20608115037336 - type: manhattan_f1 value: 69.60000000000001 - type: manhattan_precision value: 64.37219730941705 - type: manhattan_recall value: 75.75197889182058 - type: max_accuracy value: 86.52321630804077 - type: max_ap value: 75.22925730473472 - type: max_f1 value: 69.60000000000001 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.34877944657896 - type: cos_sim_ap value: 86.71257569277373 - type: cos_sim_f1 value: 79.10386355986088 - type: cos_sim_precision value: 76.91468470434214 - type: cos_sim_recall value: 81.4213119802895 - type: dot_accuracy value: 89.34877944657896 - type: dot_ap value: 86.71257133133368 - type: dot_f1 value: 79.10386355986088 - type: dot_precision value: 76.91468470434214 - type: dot_recall value: 81.4213119802895 - type: euclidean_accuracy value: 89.34877944657896 - type: euclidean_ap value: 86.71257651501476 - type: euclidean_f1 value: 79.10386355986088 - type: euclidean_precision value: 76.91468470434214 - type: euclidean_recall value: 81.4213119802895 - type: manhattan_accuracy value: 89.35848177901967 - type: manhattan_ap value: 86.69330615469126 - type: manhattan_f1 value: 79.13867741453949 - type: manhattan_precision value: 76.78881807647741 - type: manhattan_recall value: 81.63689559593472 - type: max_accuracy value: 89.35848177901967 - type: max_ap value: 86.71257651501476 - type: max_f1 value: 79.13867741453949 license: apache-2.0 language: - en --- # nomic-embed-text-v1: A Reproducible Long Context (8192) Text Embedder `nomic-embed-text-v1` is 8192 context length text 
encoder that surpasses OpenAI text-embedding-ada-002 and text-embedding-3-small performance on short and long context tasks. | Name | SeqLen | MTEB | LoCo | Jina Long Context | Open Weights | Open Training Code | Open Data | | :-------------------------------:| :----- | :-------- | :------: | :---------------: | :-----------: | :----------------: | :---------- | | nomic-embed-text-v1 | 8192 | **62.39** |**85.53** | 54.16 | ✅ | ✅ | ✅ | | jina-embeddings-v2-base-en | 8192 | 60.39 | 85.45 | 51.90 | ✅ | ❌ | ❌ | | text-embedding-3-small | 8191 | 62.26 | 82.40 | **58.20** | ❌ | ❌ | ❌ | | text-embedding-ada-002 | 8191 | 60.99 | 52.7 | 55.25 | ❌ | ❌ | ❌ | ## Hosted Inference API The easiest way to get started with Nomic Embed is through the Nomic Embedding API. Generating embeddings with the `nomic` Python client is as easy as ```python from nomic import embed output = embed.text( texts=['Nomic Embedding API', '#keepAIOpen'], model='nomic-embed-text-v1', task_type='search_document' ) print(output) ``` For more information, see the [API reference](https://docs.nomic.ai/reference/endpoints/nomic-embed-text) ## Data Visualization Click the Nomic Atlas map below to visualize a 5M sample of our contrastive pretraining data! [![image/webp](https://cdn-uploads.huggingface.co/production/uploads/607997c83a565c15675055b3/pjhJhuNyRfPagRd_c_iUz.webp)](https://atlas.nomic.ai/map/nomic-text-embed-v1-5m-sample) ## Training Details We train our embedder using a multi-stage training pipeline. Starting from a long-context [BERT model](https://huggingface.co/nomic-ai/nomic-bert-2048), the first unsupervised contrastive stage trains on a dataset generated from weakly related text pairs, such as question-answer pairs from forums like StackExchange and Quora, title-body pairs from Amazon reviews, and summarizations from news articles. In the second finetuning stage, higher quality labeled datasets such as search queries and answers from web searches are leveraged. Data curation and hard-example mining is crucial in this stage. For more details, see the Nomic Embed [Technical Report](https://static.nomic.ai/reports/2024_Nomic_Embed_Text_Technical_Report.pdf) and corresponding [blog post](https://blog.nomic.ai/posts/nomic-embed-text-v1). Training data to train the models is released in its entirety. For more details, see the `contrastors` [repository](https://github.com/nomic-ai/contrastors) ## Usage Note `nomic-embed-text` requires prefixes! We support the prefixes `[search_query, search_document, classification, clustering]`. For retrieval applications, you should prepend `search_document` for all your documents and `search_query` for your queries. 
### Sentence Transformers ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True) sentences = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?'] embeddings = model.encode(sentences) print(embeddings) ``` ### Transformers ```python import torch import torch.nn.functional as F from transformers import AutoTokenizer, AutoModel def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) sentences = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?'] tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True) model.eval() encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') with torch.no_grad(): model_output = model(**encoded_input) embeddings = mean_pooling(model_output, encoded_input['attention_mask']) embeddings = F.normalize(embeddings, p=2, dim=1) print(embeddings) ``` The model natively supports scaling of the sequence length past 2048 tokens. To do so, ```diff - tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') + tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_max_length=8192) - model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True) + model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True, rotary_scaling_factor=2) ``` ### Transformers.js ```js import { pipeline } from '@xenova/transformers'; // Create a feature extraction pipeline const extractor = await pipeline('feature-extraction', 'nomic-ai/nomic-embed-text-v1', { quantized: false, // Comment out this line to use the quantized version }); // Compute sentence embeddings const texts = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?']; const embeddings = await extractor(texts, { pooling: 'mean', normalize: true }); console.log(embeddings); ``` # Join the Nomic Community - Nomic: [https://nomic.ai](https://nomic.ai) - Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8) - Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai) # Citation If you find the model, dataset, or training code useful, please cite our work ```bibtex @misc{nussbaum2024nomic, title={Nomic Embed: Training a Reproducible Long Context Text Embedder}, author={Zach Nussbaum and John X. Morris and Brandon Duderstadt and Andriy Mulyar}, year={2024}, eprint={2402.01613}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
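As a worked complement to the prefix note earlier in this card, the following is a minimal retrieval-style sketch with Sentence Transformers; the corpus and query strings are illustrative placeholders rather than examples from the card, and only the `search_document` / `search_query` prefix convention is taken from the usage note above.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)

# Hypothetical corpus and query; note the search_document / search_query
# prefixes required by the usage note above.
documents = [
    "search_document: TSNE is a dimensionality reduction technique for visualizing high-dimensional data.",
    "search_document: Nomic Atlas lets you explore large text corpora as interactive maps.",
]
query = "search_query: What is TSNE?"

doc_embeddings = model.encode(documents, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank documents by cosine similarity to the query
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
for doc, score in sorted(zip(documents, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {doc}")
```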
AlignmentResearch/robust_llm_pythia-tt-14m-mz-ada-v3
AlignmentResearch
"2024-03-15T09:41:56Z"
9,801
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
"2024-03-15T09:41:50Z"
--- tags: - generated_from_trainer base_model: EleutherAI/pythia-14m model-index: - name: robust_llm_pythia-tt-14m-mz-ada-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-14m-mz-ada-v3 This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
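The hyperparameters listed above translate fairly directly into a `TrainingArguments` configuration. The sketch below is a hedged reconstruction rather than the original training script: the output directory is a placeholder, and the model/data wiring is omitted.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the listed hyperparameters; output_dir is a placeholder.
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 plus a linear LR schedule are the
# Transformers defaults, so they need no extra arguments here.
training_args = TrainingArguments(
    output_dir="robust_llm_pythia-tt-14m-mz-ada-v3",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```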
timm/mobilenetv3_small_100.lamb_in1k
timm
"2023-04-27T22:49:35Z"
9,786
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2110.00476", "arxiv:1905.02244", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-16T05:38:36Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for mobilenetv3_small_100.lamb_in1k A MobileNet-v3 image classification model. Trained on ImageNet-1k in `timm` using recipe template described below. Recipe details: * A LAMB optimizer recipe that is similar to [ResNet Strikes Back](https://arxiv.org/abs/2110.00476) `A2` but 50% longer with EMA weight averaging, no CutMix * RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging * Step (exponential decay w/ staircase) LR schedule with warmup ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 2.5 - GMACs: 0.1 - Activations (M): 1.4 - Image size: 224 x 224 - **Papers:** - Searching for MobileNetV3: https://arxiv.org/abs/1905.02244 - **Dataset:** ImageNet-1k - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('mobilenetv3_small_100.lamb_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'mobilenetv3_small_100.lamb_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 16, 112, 112]) # torch.Size([1, 16, 56, 56]) # torch.Size([1, 24, 28, 28]) # torch.Size([1, 48, 14, 14]) # torch.Size([1, 576, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'mobilenetv3_small_100.lamb_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 576, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model 
results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @inproceedings{howard2019searching, title={Searching for mobilenetv3}, author={Howard, Andrew and Sandler, Mark and Chu, Grace and Chen, Liang-Chieh and Chen, Bo and Tan, Mingxing and Wang, Weijun and Zhu, Yukun and Pang, Ruoming and Vasudevan, Vijay and others}, booktitle={Proceedings of the IEEE/CVF international conference on computer vision}, pages={1314--1324}, year={2019} } ```
Local-Novel-LLM-project/Vecteus2-gguf
Local-Novel-LLM-project
"2024-06-20T23:47:46Z"
9,785
1
null
[ "gguf", "region:us" ]
null
"2024-06-20T22:31:03Z"
Entry not found
MoritzLaurer/deberta-v3-large-zeroshot-v1
MoritzLaurer
"2023-11-29T19:30:53Z"
9,778
19
transformers
[ "transformers", "pytorch", "safetensors", "deberta-v2", "text-classification", "zero-shot-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
"2023-10-03T03:24:13Z"
--- language: - en tags: - text-classification - zero-shot-classification pipeline_tag: zero-shot-classification library_name: transformers license: mit --- # deberta-v3-large-zeroshot-v1 ## Model description The model is designed for zero-shot classification with the Hugging Face pipeline. The model should be substantially better at zero-shot classification than my other zero-shot models on the Hugging Face hub: https://huggingface.co/MoritzLaurer. The model can do one universal task: determine whether a hypothesis is `true` or `not_true` given a text (also called `entailment` vs. `not_entailment`). This task format is based on the Natural Language Inference task (NLI). The task is so universal that any classification task can be reformulated into the task. ## Training data The model was trained on a mixture of 27 tasks and 310 classes that have been reformatted into this universal format. 1. 26 classification tasks with ~400k texts: 'amazonpolarity', 'imdb', 'appreviews', 'yelpreviews', 'rottentomatoes', 'emotiondair', 'emocontext', 'empathetic', 'financialphrasebank', 'banking77', 'massive', 'wikitoxic_toxicaggregated', 'wikitoxic_obscene', 'wikitoxic_threat', 'wikitoxic_insult', 'wikitoxic_identityhate', 'hateoffensive', 'hatexplain', 'biasframes_offensive', 'biasframes_sex', 'biasframes_intent', 'agnews', 'yahootopics', 'trueteacher', 'spam', 'wellformedquery'. See details on each dataset here: https://docs.google.com/spreadsheets/d/1Z18tMh02IiWgh6o8pfoMiI_LH4IXpr78wd_nmNd5FaE/edit?usp=sharing 3. Five NLI datasets with ~885k texts: "mnli", "anli", "fever", "wanli", "ling" Note that compared to other NLI models, this model predicts two classes (`entailment` vs. `not_entailment`) as opposed to three classes (entailment/neutral/contradiction) ### How to use the model #### Simple zero-shot classification pipeline ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="MoritzLaurer/deberta-v3-large-zeroshot-v1") sequence_to_classify = "Angela Merkel is a politician in Germany and leader of the CDU" candidate_labels = ["politics", "economy", "entertainment", "environment"] output = classifier(sequence_to_classify, candidate_labels, multi_label=False) print(output) ``` ### Details on data and training The code for preparing the data and training & evaluating the model is fully open-source here: https://github.com/MoritzLaurer/zeroshot-classifier/tree/main ## Limitations and bias The model can only do text classification tasks. Please consult the original DeBERTa paper and the papers for the different datasets for potential biases. ## License The base model (DeBERTa-v3) is published under the MIT license. The datasets the model was fine-tuned on are published under a diverse set of licenses. The following spreadsheet provides an overview of the non-NLI datasets used for fine-tuning. The spreadsheets contains information on licenses, the underlying papers etc.: https://docs.google.com/spreadsheets/d/1Z18tMh02IiWgh6o8pfoMiI_LH4IXpr78wd_nmNd5FaE/edit?usp=sharing In addition, the model was also trained on the following NLI datasets: MNLI, ANLI, WANLI, LING-NLI, FEVER-NLI. 
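To make the entailment reformulation described above concrete, here is a minimal sketch that scores a single (text, hypothesis) pair with the sequence-classification head directly; the premise reuses the example text from the pipeline snippet above, and the hypothesis string is illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/deberta-v3-large-zeroshot-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "Angela Merkel is a politician in Germany and leader of the CDU"
hypothesis = "This text is about politics."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# The model predicts two classes (entailment vs. not_entailment); read the
# label names from the config rather than assuming a fixed index order.
probs = torch.softmax(logits, dim=-1)[0]
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```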
## Citation If you use this model, please cite: ``` @article{laurer_less_2023, title = {Less {Annotating}, {More} {Classifying}: {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT}-{NLI}}, issn = {1047-1987, 1476-4989}, shorttitle = {Less {Annotating}, {More} {Classifying}}, url = {https://www.cambridge.org/core/product/identifier/S1047198723000207/type/journal_article}, doi = {10.1017/pan.2023.20}, language = {en}, urldate = {2023-06-20}, journal = {Political Analysis}, author = {Laurer, Moritz and Van Atteveldt, Wouter and Casas, Andreu and Welbers, Kasper}, month = jun, year = {2023}, pages = {1--33}, } ``` ### Ideas for cooperation or questions? If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) ### Debugging and issues Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.
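For the two-class entailment formulation described above, a minimal sketch of direct NLI-style inference with plain Transformers; the hypothesis string is an illustrative assumption, and the label names are read from the model config rather than assumed:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "MoritzLaurer/deberta-v3-large-zeroshot-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Angela Merkel is a politician in Germany and leader of the CDU"
hypothesis = "This example is about politics"  # illustrative hypothesis, not from the original card

# Encode the (text, hypothesis) pair and score entailment vs. not_entailment
inputs = tokenizer(text, hypothesis, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]

# Map probabilities to the label names stored in the model config
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})
```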
TheBloke/Llama-2-70B-Chat-GPTQ
TheBloke
"2023-09-27T12:44:49Z"
9,771
257
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-2", "en", "arxiv:2307.09288", "base_model:meta-llama/Llama-2-70b-chat-hf", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2023-07-18T23:33:13Z"
--- language: - en license: llama2 tags: - facebook - meta - pytorch - llama - llama-2 model_name: Llama 2 70B Chat base_model: meta-llama/Llama-2-70b-chat-hf inference: false model_creator: Meta Llama 2 model_type: llama pipeline_tag: text-generation prompt_template: '[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don''t know the answer to a question, please don''t share false information. <</SYS>> {prompt}[/INST] ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama 2 70B Chat - GPTQ - Model creator: [Meta Llama 2](https://huggingface.co/meta-llama) - Original model: [Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) <!-- description start --> ## Description This repo contains GPTQ model files for [Meta Llama 2's Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-70B-chat-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGUF) * [Meta Llama 2's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Llama-2-Chat ``` [INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. 
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> {prompt}[/INST] ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/main) | 4 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.33 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. 
| | [gptq-3bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-64g-actorder_True) | 3 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 29.30 GB | No | 3-bit, with group size 64g and act-order. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. | | [gptq-3bit-128g-actorder_False](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-128g-actorder_False) | 3 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. | | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 26.78 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 37.99 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download from branches - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Llama-2-70B-chat-GPTQ:main` - With Git, you can clone a branch with: ``` git clone --single-branch --branch main https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ ``` - In Python Transformers code, the branch is the `revision` parameter; see below. <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-70B-chat-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Llama-2-70B-chat-GPTQ:main` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Llama-2-70B-chat-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. 
Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers>=4.32.0 optimum>=1.12.0 pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ pip3 install . ``` ### For CodeLlama models only: you must use Transformers 4.33.0 or later. If 4.33.0 is not yet released when you read this, you will need to install Transformers from source: ```shell pip3 uninstall -y transformers pip3 install git+https://github.com/huggingface/transformers.git ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Llama-2-70B-chat-GPTQ" # To use a different branch, change revision # For example: revision="main" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> {prompt}[/INST] ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. 
<!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Meta Llama 2's Llama 2 70B Chat # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. 
In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws).Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. 
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. 
For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
optimum/all-MiniLM-L6-v2
optimum
"2022-03-24T16:16:57Z"
9,768
11
sentence-transformers
[ "sentence-transformers", "onnx", "feature-extraction", "sentence-similarity", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-24T16:15:58Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en license: apache-2.0 --- # ONNX convert all-MiniLM-L6-v2 ## Conversion of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developped this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. 
We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch. We then apply the cross-entropy loss by comparing with the true pairs. #### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`. #### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability whose configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack 
Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
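Since this repository specifically hosts the ONNX conversion, a minimal sketch of loading it through Optimum's ONNX Runtime backend (assuming `optimum[onnxruntime]` is installed; the mean-pooling step mirrors the Transformers example above):

```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer
import torch.nn.functional as F

model_id = "optimum/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForFeatureExtraction.from_pretrained(model_id)

sentences = ["This is an example sentence", "Each sentence is converted"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
outputs = model(**encoded)

# Mean pooling over token embeddings, ignoring padding, then L2-normalise
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # expected: torch.Size([2, 384])
```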
mradermacher/Symbol-LLM-7B-Instruct-GGUF
mradermacher
"2024-06-23T18:08:49Z"
9,759
0
transformers
[ "transformers", "gguf", "en", "base_model:Symbol-LLM/Symbol-LLM-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T15:41:43Z"
--- base_model: Symbol-LLM/Symbol-LLM-7B-Instruct language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Symbol-LLM/Symbol-LLM-7B-Instruct <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.IQ3_XS.gguf) | IQ3_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.IQ3_M.gguf) | IQ3_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.Q6_K.gguf) | Q6_K | 5.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-7B-Instruct-GGUF/resolve/main/Symbol-LLM-7B-Instruct.f16.gguf) | f16 | 13.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want 
some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
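The card defers to TheBloke's READMEs for GGUF usage; as one concrete option, a minimal sketch with `llama-cpp-python` (the chosen quant file and the parameters below are illustrative assumptions, not recommendations from this repo):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Symbol-LLM-7B-Instruct.Q4_K_M.gguf",  # one of the quants listed above, downloaded locally
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)

out = llm("Briefly explain what symbolic reasoning is.", max_tokens=128)
print(out["choices"][0]["text"])
```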
sentence-transformers/msmarco-distilbert-base-v3
sentence-transformers
"2024-03-27T11:27:36Z"
9,757
4
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "safetensors", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
--- license: apache-2.0 library_name: sentence-transformers tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers pipeline_tag: sentence-similarity --- # sentence-transformers/msmarco-distilbert-base-v3 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-v3') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-distilbert-base-v3') model = AutoModel.from_pretrained('sentence-transformers/msmarco-distilbert-base-v3') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-base-v3) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 510, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
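Because this model was trained for MS MARCO-style passage retrieval, a minimal semantic-search sketch on top of the usage above (the query and passages are illustrative, and it assumes a sentence-transformers release that provides `util.cos_sim`):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-v3')

query = "How many people live in London?"
passages = [
    "Around 9 million people live in London.",
    "The Eiffel Tower is located in Paris.",
]

# Encode the query and candidate passages, then rank passages by cosine similarity
query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)
scores = util.cos_sim(query_emb, passage_embs)[0]

best = int(scores.argmax())
print(passages[best], float(scores[best]))
```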
QuantFactory/Nymph_8B-GGUF
QuantFactory
"2024-06-24T06:40:52Z"
9,754
0
null
[ "gguf", "not-for-all-audiences", "text-generation", "en", "dataset:Setiaku/Stheno-v3.2", "dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned", "dataset:openerotica/freedom-rp", "dataset:MinervaAI/Aesir-Preview", "dataset:jeiku/JeikuL3v2", "dataset:ResplendentAI/Sissification_Hypno_1k", "dataset:ResplendentAI/Synthetic_Soul_1k", "dataset:ResplendentAI/theory_of_mind_fixed_output", "base_model:ResplendentAI/Nymph_8B", "license:apache-2.0", "region:us" ]
text-generation
"2024-06-24T03:55:32Z"
--- license: apache-2.0 datasets: - Setiaku/Stheno-v3.2 - Squish42/bluemoon-fandom-1-1-rp-cleaned - openerotica/freedom-rp - MinervaAI/Aesir-Preview - jeiku/JeikuL3v2 - ResplendentAI/Sissification_Hypno_1k - ResplendentAI/Synthetic_Soul_1k - ResplendentAI/theory_of_mind_fixed_output language: - en base_model: ResplendentAI/Nymph_8B tags: - not-for-all-audiences pipeline_tag: text-generation --- # QuantFactory/Nymph_8B-GGUF This is a quantized version of [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B?not-for-all-audiences=true) created using llama.cpp # Model Description ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/9U_eJCDzLJ8nxb6qfuICc.jpeg) Nymph is the culmination of everything I have learned with the T-series project. This model aims to be a unique and full-featured RP juggernaut. The finetune incorporates 1.6 million tokens of RP data sourced from Bluemoon, FreedomRP, Aesir-Preview, and Claude Opus logs. I made sure to use the multi-turn ShareGPT datasets this time instead of Alpaca conversions. I have also included three of my personal datasets. The final touch is an ORPO based upon Openhermes Roleplay preferences.
nghuyong/ernie-3.0-base-zh
nghuyong
"2022-09-10T08:45:06Z"
9,744
78
transformers
[ "transformers", "pytorch", "ernie", "fill-mask", "zh", "arxiv:2107.02137", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-08-22T07:54:44Z"
--- language: zh --- # ERNIE-3.0-base-zh ## Introduction ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation More details: https://arxiv.org/abs/2107.02137 ## Released Model Info This PyTorch model was converted from the officially released PaddlePaddle ERNIE model, and a series of experiments were conducted to check the accuracy of the conversion. - Official PaddlePaddle ERNIE repo: https://paddlenlp.readthedocs.io/zh/latest/model_zoo/transformers/ERNIE/contents.html - PyTorch conversion repo: https://github.com/nghuyong/ERNIE-Pytorch ## How to use You can load the ERNIE-3.0 model as follows: ```Python from transformers import BertTokenizer, ErnieForMaskedLM tokenizer = BertTokenizer.from_pretrained("nghuyong/ernie-3.0-base-zh") model = ErnieForMaskedLM.from_pretrained("nghuyong/ernie-3.0-base-zh") ``` ## Citation ```bibtex @article{sun2021ernie, title={Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation}, author={Sun, Yu and Wang, Shuohuan and Feng, Shikun and Ding, Siyu and Pang, Chao and Shang, Junyuan and Liu, Jiaxiang and Chen, Xuyi and Zhao, Yanbin and Lu, Yuxiang and others}, journal={arXiv preprint arXiv:2107.02137}, year={2021} } ```
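Since the pipeline tag is fill-mask, a minimal sketch of masked-token prediction using the same classes as above (the example sentence is illustrative):

```python
from transformers import BertTokenizer, ErnieForMaskedLM, pipeline

tokenizer = BertTokenizer.from_pretrained("nghuyong/ernie-3.0-base-zh")
model = ErnieForMaskedLM.from_pretrained("nghuyong/ernie-3.0-base-zh")

# BertTokenizer uses [MASK] as the mask token
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("巴黎是[MASK]国的首都。"))  # prints the top predictions for the masked token
```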
Lewdiculous/Aura_Uncensored_l3_8B-GGUF-IQ-Imatrix
Lewdiculous
"2024-04-30T21:52:17Z"
9,738
26
null
[ "gguf", "roleplay", "llama3", "sillytavern", "en", "base_model:Undi95/Llama-3-Unholy-8B", "license:apache-2.0", "region:us" ]
null
"2024-04-21T01:28:50Z"
--- base_model: - Undi95/Llama-3-Unholy-8B - Undi95/Llama-3-Unholy-8B - ResplendentAI/Aura_Llama3 - Undi95/Llama-3-Unholy-8B - ResplendentAI/RP_Format_QuoteAsterisk_Llama3 - Undi95/Llama-3-Unholy-8B - ResplendentAI/Luna_Llama3 - Undi95/Llama-3-Unholy-8B - ResplendentAI/Theory_of_Mind_Llama3 - Undi95/Llama-3-Unholy-8B - ResplendentAI/BlueMoon_Llama3 license: apache-2.0 tags: - roleplay - llama3 - sillytavern language: - en --- > [!CAUTION] > **Outdated:** <br> > Outdated tokenizer configuration! <br> > This is only kept for historical purposes; use the newer models instead of this one. **This is Llama-3 land now, cowboys!** This is another, *better* attempt at a less censored Llama-3 with hopefully more stable formatting. GGUF-IQ-Imatrix quants for [ResplendentAI/Aura_Uncensored_l3_8B](https://huggingface.co/ResplendentAI/Aura_Uncensored_l3_8B). > [!WARNING] > Recommended presets [here](https://huggingface.co/Lewdiculous/Model-Requests/tree/main/data/presets/cope-llama-3-0.1) or [here](https://huggingface.co/Virt-io/SillyTavern-Presets). <br> > Use the latest version of KoboldCpp. **Use the provided presets.** <br> > This is all still highly experimental; modified configs were used to avoid the tokenizer issues. Let the authors know how it performs for you, as feedback is more important than ever now. **Original model information:** # Aura Uncensored l3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/oiYHWIEHqmgUkY0GsVdDx.png) This is the culmination of all my efforts for the Aura line. I have taken the original training data and applied it over Undi95's Unholy base model. This model can and will provide unsafe information and RP. I strongly recommend that you do not use this model if you are sensitive to unsafe output. I have tested the model thoroughly and believe that it will please the majority of users. I hope that you enjoy this model.
QuantFactory/Llama-3-8B-WizardLM-196K-GGUF
QuantFactory
"2024-06-20T17:48:27Z"
9,738
0
null
[ "gguf", "axolotl", "generated_from_trainer", "text-generation", "base_model:Magpie-Align/Llama-3-8B-WizardLM-196K", "license:llama3", "region:us" ]
text-generation
"2024-06-20T11:25:56Z"
--- license: llama3 base_model: Magpie-Align/Llama-3-8B-WizardLM-196K tags: - axolotl - generated_from_trainer model-index: - name: Llama-3-8B-WizardLM-196K results: [] pipeline_tag: text-generation --- # QuantFactory/Llama-3-8B-WizardLM-196K-GGUF This is quantized version of [Magpie-Align/Llama-3-8B-WizardLM-196K](https://huggingface.co/Magpie-Align/Llama-3-8B-WizardLM-196K) created using llama.cpp # Model Description [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: Leon-Leee/Wizardlm_Evol_Instruct_v2_196K_backuped type: sharegpt conversation: llama3 dataset_prepared_path: last_run_prepared val_set_size: 0.001 output_dir: ./out_Llama-8B-WizardLM-196k sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true wandb_project: SynDa wandb_entity: wandb_watch: wandb_name: Llama-3-8B-WizardLM-196k wandb_log_model: hub_model_id: SynDa/Llama-3-8B-WizardLM-196K gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 2 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 evals_per_epoch: 3 eval_table_size: saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: <|end_of_text|> ``` </details><br> # Llama-3-8B-WizardLM-196K This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6077 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.7323 | 0.0036 | 1 | 1.0826 | | 0.5934 | 0.3344 | 93 | 0.6450 | | 0.5497 | 0.6688 | 186 | 0.6192 | | 0.5295 | 1.0031 | 279 | 0.6059 | | 0.4664 | 1.3236 | 372 | 0.6103 | | 0.4729 | 1.6580 | 465 | 0.6077 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
timm/tinynet_e.in1k
timm
"2023-04-27T21:50:33Z"
9,737
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2010.14819", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-13T00:22:26Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for tinynet_e.in1k A TinyNet image classification model. Trained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 2.0 - GMACs: 0.0 - Activations (M): 0.7 - Image size: 106 x 106 - **Papers:** - Model rubik's cube: Twisting resolution, depth and width for tinynets: https://arxiv.org/abs/2010.14819v2 - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tinynet_e.in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tinynet_e.in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 8, 53, 53]) # torch.Size([1, 16, 27, 27]) # torch.Size([1, 24, 14, 14]) # torch.Size([1, 56, 7, 7]) # torch.Size([1, 160, 4, 4]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tinynet_e.in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1280, 4, 4) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @article{han2020model, title={Model rubik’s cube: Twisting resolution, depth and width for tinynets}, author={Han, Kai and Wang, Yunhe and Zhang, Qiulin and Zhang, Wei and Xu, Chunjing and Zhang, Tong}, journal={Advances in Neural Information Processing Systems}, volume={33}, pages={19353--19364}, year={2020} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF
mradermacher
"2024-06-21T13:03:01Z"
9,734
0
transformers
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:Hastagaras/Rebahan-8B-L3-MK.II-Stories", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-21T12:33:15Z"
--- base_model: Hastagaras/Rebahan-8B-L3-MK.II-Stories language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Hastagaras/Rebahan-8B-L3-MK.II-Stories <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF/resolve/main/Rebahan-8B-L3-MK.II-Stories.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
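As a small illustration (not part of the original card), any of the quant files listed in the table above can be fetched programmatically with `huggingface_hub` and then handed to a GGUF-capable runtime such as llama.cpp:

```python
# Sketch: download the Q4_K_M quant listed in the table above.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/Rebahan-8B-L3-MK.II-Stories-GGUF",
    filename="Rebahan-8B-L3-MK.II-Stories.Q4_K_M.gguf",  # filename taken from the quant table
)
print(gguf_path)  # pass this local path to llama.cpp, llama-cpp-python, etc.
```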
mradermacher/Mistral_Factory_Dell_v0.1-GGUF
mradermacher
"2024-06-20T18:51:12Z"
9,732
0
transformers
[ "transformers", "gguf", "en", "base_model:donggyu64/Mistral_Factory_Dell_v0.1", "endpoints_compatible", "region:us" ]
null
"2024-06-20T18:25:08Z"
--- base_model: donggyu64/Mistral_Factory_Dell_v0.1 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/donggyu64/Mistral_Factory_Dell_v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Mistral_Factory_Dell_v0.1-GGUF/resolve/main/Mistral_Factory_Dell_v0.1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
QuantFactory/finance-Llama3-8B-GGUF
QuantFactory
"2024-06-22T10:08:34Z"
9,731
5
null
[ "gguf", "finance", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "dataset:GAIR/lima", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "arxiv:2406.14491", "arxiv:2309.09530", "base_model:instruction-pretrain/finance-Llama3-8B", "license:llama3", "region:us" ]
text-generation
"2024-06-22T06:59:42Z"
--- license: llama3 language: - en tags: - finance datasets: - Open-Orca/OpenOrca - GAIR/lima - WizardLM/WizardLM_evol_instruct_V2_196k base_model: instruction-pretrain/finance-Llama3-8B pipeline_tag: text-generation --- # QuantFactory/finance-Llama3-8B-GGUF This is quantized version of [instruction-pretrain/finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B) created using llama.cpp # Model Description ## Instruction Pre-Training: Language Models are Supervised Multitask Learners This repo contains the **finance model developed from Llama3-8B** in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491). We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. ***Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training.** In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning. **In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.** <p align='center'> <img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/vRdsFIVQptbNaGiZ18Lih.png" width="400"> </p> ## Resources **🤗 We share our data and models with example usages, feel free to open any issues or discussions! 🤗** - Context-Based Instruction Synthesizer: [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer) - Fine-Tuning Data for the Synthesizer: [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection) - General Models Pre-Trained from Scratch: - [InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M) - [InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B) - Domain-Specific Models Pre-Trained from Llama3-8B: - [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B) - [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B) ## Domain-Adaptive Continued Pre-Training Following [AdaptLLM](https://huggingface.co/AdaptLLM/finance-chat), we augment the domain-specific raw corpora with instruction-response pairs generated by our [context-based instruction synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer). For example, to chat with the finance-Llama3-8B model: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("instruction-pretrain/finance-Llama3-8B") tokenizer = AutoTokenizer.from_pretrained("instruction-pretrain/finance-Llama3-8B") # Put your input here, NO prompt template is required user_input = '''Use this fact to answer the question: Title of each class Trading Symbol(s) Name of each exchange on which registered Common Stock, Par Value $.01 Per Share MMM New York Stock Exchange MMM Chicago Stock Exchange, Inc. 
1.500% Notes due 2026 MMM26 New York Stock Exchange 1.750% Notes due 2030 MMM30 New York Stock Exchange 1.500% Notes due 2031 MMM31 New York Stock Exchange Which debt securities are registered to trade on a national securities exchange under 3M's name as of Q2 of 2023?''' inputs = tokenizer(user_input, return_tensors="pt", add_special_tokens=True).input_ids.to(model.device) outputs = model.generate(input_ids=inputs, max_new_tokens=400)[0] answer_start = int(inputs.shape[-1]) pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True) print(pred) ``` ## Model Citation If you find our work helpful, please cite us: [AdaptLLM](https://huggingface.co/papers/2309.09530) ```bibtex @inproceedings{ cheng2024adapting, title={Adapting Large Language Models via Reading Comprehension}, author={Daixuan Cheng and Shaohan Huang and Furu Wei}, booktitle={The Twelfth International Conference on Learning Representations}, year={2024}, url={https://openreview.net/forum?id=y886UXPEZ0} } ```
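The example above runs the original full-precision model through `transformers`; for the GGUF files in this repo, a rough equivalent (not part of the original card) is a plain completion call with `llama-cpp-python`, again without any prompt template. The filename below is an assumption.

```python
# Hypothetical sketch: plain completion with a GGUF quant from this repo,
# mirroring the "no prompt template required" usage shown above.
from llama_cpp import Llama

llm = Llama(model_path="finance-Llama3-8B.Q4_K_M.gguf")  # assumed local filename

user_input = "What does an inverted yield curve usually signal about interest-rate expectations?"
out = llm(user_input, max_tokens=400)
print(out["choices"][0]["text"])
```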
Helsinki-NLP/opus-mt-en-jap
Helsinki-NLP
"2023-08-16T11:30:07Z"
9,728
5
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "jap", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-jap * source languages: en * target languages: jap * OPUS readme: [en-jap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-jap/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | bible-uedin.en.jap | 42.1 | 0.960 |
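The card lists benchmark scores but no usage snippet; since this is a standard Marian checkpoint served through `transformers`, a minimal sketch (not part of the original card) looks like:

```python
# Minimal usage sketch for this MarianMT English-to-Japanese checkpoint.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-jap")
result = translator("In the beginning God created the heavens and the earth.")
print(result[0]["translation_text"])
```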
timm/densenet121.ra_in1k
timm
"2023-04-21T22:51:56Z"
9,718
2
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2110.00476", "arxiv:1608.06993", "license:apache-2.0", "region:us" ]
image-classification
"2023-04-21T22:51:48Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for densenet121.ra_in1k

A DenseNet image classification model. Pretrained on ImageNet-1k in `timm` by Ross Wightman using RandAugment `RA` recipe. Related to `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 8.0
  - GMACs: 2.9
  - Activations (M): 6.9
  - Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
  - Densely Connected Convolutional Networks: https://arxiv.org/abs/1608.06993
  - ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('densenet121.ra_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'densenet121.ra_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 112, 112])
    #  torch.Size([1, 256, 56, 56])
    #  torch.Size([1, 512, 28, 28])
    #  torch.Size([1, 1024, 14, 14])
    #  torch.Size([1, 1024, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'densenet121.ra_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Citation
```bibtex
@inproceedings{huang2017densely,
  title={Densely Connected Convolutional Networks},
  author={Huang, Gao and Liu, Zhuang and van der Maaten, Laurens and Weinberger, Kilian Q },
  booktitle={Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition}, year={2017} } ``` ```bibtex @inproceedings{wightman2021resnet, title={ResNet strikes back: An improved training procedure in timm}, author={Wightman, Ross and Touvron, Hugo and Jegou, Herve}, booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future} } ```
mradermacher/L3-Nymeria-Maid-8B-GGUF
mradermacher
"2024-06-22T22:20:57Z"
9,713
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "roleplay", "sillytavern", "llama3", "not-for-all-audiences", "en", "base_model:tannedbum/L3-Nymeria-Maid-8B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-22T21:53:24Z"
--- base_model: tannedbum/L3-Nymeria-Maid-8B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - mergekit - merge - roleplay - sillytavern - llama3 - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-Nymeria-Maid-8B-GGUF/resolve/main/L3-Nymeria-Maid-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Ramses-IV-GGUF
mradermacher
"2024-06-24T22:11:51Z"
9,713
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:CoprolaliacPress/Ramses-IV", "endpoints_compatible", "region:us" ]
null
"2024-06-24T21:22:39Z"
--- base_model: CoprolaliacPress/Ramses-IV language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/CoprolaliacPress/Ramses-IV <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Ramses-IV-GGUF/resolve/main/Ramses-IV.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Shark-1.1-GGUF
mradermacher
"2024-06-26T10:46:31Z"
9,689
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "GritLM/GritLM-7B", "jan-hq/trinity-v1", "GreenNode/GreenNode-mini-7B-multilingual-v1olet", "en", "base_model:powermove72/Shark-1.1", "endpoints_compatible", "region:us" ]
null
"2024-06-26T09:48:14Z"
--- base_model: powermove72/Shark-1.1 language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - GritLM/GritLM-7B - jan-hq/trinity-v1 - GreenNode/GreenNode-mini-7B-multilingual-v1olet --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/powermove72/Shark-1.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Shark-1.1-GGUF/resolve/main/Shark-1.1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
ZeroWw/Mistral-7B-Instruct-v0.3-GGUF
ZeroWw
"2024-06-20T23:20:47Z"
9,688
1
null
[ "gguf", "en", "license:mit", "region:us" ]
null
"2024-06-18T16:13:14Z"
---
license: mit
language:
- en
---

My own (ZeroWw) quantizations. The output and embedding tensors are quantized to f16, while all other tensors are quantized to q5_k or q6_k.

Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization and perform as well as the pure f16.
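As an illustration of the mixed scheme described above (not part of the original card), the per-tensor quantization types of a downloaded file can be inspected with the `gguf` Python package; the filename below is an assumption about how these files are named.

```python
# Sketch: list per-tensor quantization types to confirm that output/embedding
# tensors are f16 while the remaining tensors are q5_k or q6_k.
from gguf import GGUFReader

reader = GGUFReader("Mistral-7B-Instruct-v0.3.f16.q6.gguf")  # assumed local filename
for tensor in reader.tensors:
    print(tensor.name, tensor.tensor_type.name)
```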
xinsir/controlnet-canny-sdxl-1.0
xinsir
"2024-05-14T16:12:12Z"
9,686
118
diffusers
[ "diffusers", "safetensors", "text_to_image", "controlnet", "controlnet-canny-sdxl-1.0", "arxiv:2302.05543", "license:apache-2.0", "region:us" ]
null
"2024-05-10T14:15:32Z"
---
license: apache-2.0
tags:
- text_to_image
- diffusers
- controlnet
- controlnet-canny-sdxl-1.0
---
# ***Drawing like Midjourney! Come on!***
![images](./masonry.webp)

# Controlnet-Canny-Sdxl-1.0

<!-- Provide a quick summary of what the model is/does. -->

Hello, I am very happy to announce the controlnet-canny-sdxl-1.0 model, **a very powerful ControlNet that can generate high resolution images visually comparable with Midjourney**.

The model was trained on a large amount of high quality data (over 10,000,000 images), carefully filtered and captioned with a powerful VLLM model. Useful tricks were applied during training, including data augmentation, multiple losses and multi-resolution training. With only one training stage, its performance outperforms the other open-source Canny models ([diffusers/controlnet-canny-sdxl-1.0], [TheMistoAI/MistoLine]). I release it in the hope of advancing the application of stable diffusion models. Canny is one of the most important ControlNet series models and can be applied to many jobs associated with drawing and designing.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** xinsir
- **Model type:** ControlNet_SDXL
- **License:** apache-2.0
- **Finetuned from model [optional]:** stabilityai/stable-diffusion-xl-base-1.0

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Paper [optional]:** https://arxiv.org/abs/2302.05543

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Examples

prompt: A closeup of two day of the dead models, looking to the side, large flowered headdress, full dia de Los muertoe make up, lush red lips, butterflies, flowers, pastel colors, looking to the side, jungle, birds, color harmony , extremely detailed, intricate, ornate, motion, stunning, beautiful, unique, soft lighting
![images_0)](./000031_scribble_concat.webp)

prompt: ghost with a plague doctor mask in a venice carnaval hyper realistic
![images_1)](./000028_scribble_concat.webp)

prompt: A picture surrounded by blue stars and gold stars, glowing, dark navy blue and gray tones, distributed in light silver and gold, playful, festive atmosphere, pure fabric, chalk, FHD 8K
![images_2)](./000016_scribble_concat.webp)

prompt: Delicious vegetarian pizza with champignon mushrooms, tomatoes, mozzarella, peppers and black olives, isolated on white background , transparent isolated white background , top down view, studio photo, transparent png, Clean sharp focus. High end retouching. Food magazine photography. Award winning photography. Advertising photography. Commercial photography
![images_3)](./000010_scribble_concat.webp)

prompt: a blonde woman in a wedding dress in a maple forest in summer with a flower crown laurel. Watercolor painting in the style of John William Waterhouse. Romanticism. Ethereal light.
![images_4)](./000006_scribble_concat.webp)

### Examples Anime (note that you need to change the base model to CounterfeitXL; everything else remains the same)

![images_5)](./000013_scribble_concat.webp)

![images_6)](./000034_scribble_concat.webp)

![images_7)](./000059_scribble_concat.webp)

![images_8)](./000078_scribble_concat.webp)

![images_9)](./000097_scribble_concat.webp)

## How to Get Started with the Model

Use the code below to get started with the model.
```python
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers import DDIMScheduler, EulerAncestralDiscreteScheduler
from PIL import Image
import torch
import numpy as np
import cv2

def HWC3(x):
    # ensure the conditioning image is 3-channel uint8 (H, W, 3)
    assert x.dtype == np.uint8
    if x.ndim == 2:
        x = x[:, :, None]
    assert x.ndim == 3
    H, W, C = x.shape
    assert C == 1 or C == 3 or C == 4
    if C == 3:
        return x
    if C == 1:
        return np.concatenate([x, x, x], axis=2)
    if C == 4:
        color = x[:, :, 0:3].astype(np.float32)
        alpha = x[:, :, 3:4].astype(np.float32) / 255.0
        y = color * alpha + 255.0 * (1.0 - alpha)
        y = y.clip(0, 255).astype(np.uint8)
        return y

controlnet_conditioning_scale = 1.0
prompt = "your prompt, the longer the better; describe it in as much detail as possible"
negative_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'

eulera_scheduler = EulerAncestralDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")

controlnet = ControlNetModel.from_pretrained(
    "xinsir/controlnet-canny-sdxl-1.0",
    torch_dtype=torch.float16
)

# when testing with another base model, you need to change the VAE as well
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    vae=vae,
    safety_checker=None,
    torch_dtype=torch.float16,
    scheduler=eulera_scheduler,
)

# resize the image to 1024 * 1024 or a matching bucket resolution to get the best performance
controlnet_img = cv2.imread("your image path")
height, width, _ = controlnet_img.shape
ratio = np.sqrt(1024. * 1024. / (width * height))
new_width, new_height = int(width * ratio), int(height * ratio)
controlnet_img = cv2.resize(controlnet_img, (new_width, new_height))

controlnet_img = cv2.Canny(controlnet_img, 100, 200)
controlnet_img = HWC3(controlnet_img)
controlnet_img = Image.fromarray(controlnet_img)

images = pipe(
    prompt,
    negative_prompt=negative_prompt,
    image=controlnet_img,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    width=new_width,
    height=new_height,
    num_inference_steps=30,
).images

images[0].save("your_image_save_path.png")  # png usually preserves more quality than jpg or webp, but files are larger
```

## Evaluation Metric

1. Laion Aesthetic Score [https://laion.ai/blog/laion-aesthetics/]
2. PerceptualSimilarity [https://github.com/richzhang/PerceptualSimilarity]

## Evaluation Data

The test data is randomly sampled from Midjourney upscale images together with their prompts. Since the purpose of the project is to let people draw images like Midjourney, and Midjourney's users include a large number of professional designers, the upscaled images tend to have higher aesthetic scores and better prompt consistency, which makes them a suitable test set for judging the ability of a ControlNet. We randomly select 300 prompt-image pairs and generate 4 images per prompt, 1200 images in total. We calculate the Laion Aesthetic Score to measure visual quality and the PerceptualSimilarity to measure control ability, and we find that the image quality is well aligned with these metric values. We compare our method with other SOTA Hugging Face models and list the results below. Our model achieves the highest aesthetic score and can generate visually appealing images when prompted properly.
## Quantitative Result

| metric | xinsir/controlnet-canny-sdxl-1.0 | diffusers/controlnet-canny-sdxl-1.0 | TheMistoAI/MistoLine |
|-------|-------|-------|-------|
| laion_aesthetic | **6.03** | 5.93 | 5.82 |
| perceptual similarity | **0.4200** | 0.5053 | 0.5387 |

laion_aesthetic (the higher the better); perceptual similarity (the lower the better)

Note: the values are calculated on images saved in webp format; if you save in png format the aesthetic values increase by 0.1-0.3, but the relative ranking remains unchanged.

## Training Details

The model was trained on high quality data in a single stage, with the same resolution setting as sdxl-base (1024*1024). We use a random threshold to generate the Canny images, as Lvmin Zhang does; it is essential to find proper hyperparameters for this data augmentation, since thresholds that are too easy or too hard hurt model performance. In addition, we randomly mask out a percentage of each Canny image to force the model to learn more of the semantic relation between the prompt and the lines. We use over 10,000,000 images, annotated carefully; CogVLM has proved to be a powerful image captioning model [https://github.com/THUDM/CogVLM?tab=readme-ov-file]. For comic images, it is recommended to use a waifu tagger to generate special tags [https://huggingface.co/spaces/SmilingWolf/wd-tagger]. More than 64 A100s were used to train the model, and the effective batch size is 2560 when using accumulate_grad_batches.

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The data comes from many sources, including Midjourney, LAION-5B, Danbooru, and so on, and is carefully filtered and annotated.

### Conclusion

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

In our evaluation, the model obtains a better aesthetic score on real images than stabilityai/stable-diffusion-xl-base-1.0, and comparable performance on cartoon-style images. It also shows stronger control ability when tested with perceptual similarity, thanks to stronger data augmentation and more training steps. In addition, it has a lower rate of generating abnormal images, i.e. images with abnormal human structure.
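For reference (not part of the original card), a perceptual-similarity number like the one in the table above can be computed per image pair with the LPIPS package linked in the Evaluation Metric section; the paths, resolution, and preprocessing below are assumptions.

```python
# Hedged sketch: LPIPS perceptual similarity between a reference image and a generation.
import numpy as np
import torch
import lpips
from PIL import Image

loss_fn = lpips.LPIPS(net="alex")  # AlexNet backbone, the package default

def to_lpips_tensor(path, size=512):
    # LPIPS expects NCHW float tensors scaled to [-1, 1]
    img = np.asarray(Image.open(path).convert("RGB").resize((size, size)), dtype=np.float32) / 255.0
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0) * 2.0 - 1.0

distance = loss_fn(to_lpips_tensor("reference.png"), to_lpips_tensor("generated.png"))
print(float(distance))  # lower means the generation follows the reference more closely
```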
timm/coatnext_nano_rw_224.sw_in1k
timm
"2023-05-10T23:51:19Z"
9,685
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "region:us" ]
image-classification
"2023-01-20T21:29:41Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for coatnext_nano_rw_224.sw_in1k A timm specific CoAtNeXt image classification model. Trained in `timm` on ImageNet-1k by Ross Wightman. ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program. ### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py) MaxxViT covers a number of related model architectures that share a common structure including: - CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages. - MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid). - CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate. Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations. All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released. 
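As a small aside (not part of the original card), the checkpoints in the families described above can be enumerated directly from `timm`:

```python
# List the pretrained checkpoints for each MaxxViT-family prefix mentioned above.
import timm

for pattern in ("coatnet*", "coatnext*", "maxvit*", "maxxvit*"):
    print(pattern, timm.list_models(pattern, pretrained=True))
```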
## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 14.7 - GMACs: 2.5 - Activations (M): 12.8 - Image size: 224 x 224 - **Papers:** - CoAtNet: Marrying Convolution and Attention for All Data Sizes: https://arxiv.org/abs/2201.03545 - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('coatnext_nano_rw_224.sw_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'coatnext_nano_rw_224.sw_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 64, 56, 56]) # torch.Size([1, 128, 28, 28]) # torch.Size([1, 256, 14, 14]) # torch.Size([1, 512, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'coatnext_nano_rw_224.sw_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison ### By Top-1 |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| 
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| 
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| 
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| 
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tu2022maxvit, title={MaxViT: Multi-Axis Vision Transformer}, author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao}, journal={ECCV}, year={2022}, } ``` ```bibtex @article{dai2021coatnet, title={CoAtNet: Marrying Convolution and Attention for All Data Sizes}, author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing}, journal={arXiv preprint arXiv:2106.04803}, year={2021} } ```
cledoux42/GenderNew_v002
cledoux42
"2023-04-17T21:51:33Z"
9,677
4
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-04-17T21:51:17Z"
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: GenderNew_v002 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9855228066444397 --- # GenderNew_v002 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### man ![man](images/man.jpg) #### woman ![woman](images/woman.jpg)
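The autogenerated card above does not include an inference snippet. As a minimal, hedged sketch (the local image path is an assumption, not part of the original card), the classifier can be run with the standard `transformers` image-classification pipeline:

```python
# Minimal usage sketch (not part of the autogenerated card).
# "portrait.jpg" is an illustrative local file; any image path or URL works.
from transformers import pipeline

classifier = pipeline("image-classification", model="cledoux42/GenderNew_v002")

for prediction in classifier("portrait.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```

The pipeline returns one score per label (here `man` / `woman`), sorted by confidence.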
mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF
mradermacher
"2024-06-26T20:31:36Z"
9,676
0
transformers
[ "transformers", "gguf", "en", "base_model:Multilingual-Multimodal-NLP/SEVENLLM-Llama2-7B-CoT", "license:mit", "endpoints_compatible", "region:us" ]
null
"2024-06-21T13:03:49Z"
--- base_model: Multilingual-Multimodal-NLP/SEVENLLM-Llama2-7B-CoT language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Multilingual-Multimodal-NLP/SEVENLLM-Llama2-7B-CoT <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.IQ3_XS.gguf) | IQ3_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.Q3_K_S.gguf) | Q3_K_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.IQ3_M.gguf) | IQ3_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.IQ4_XS.gguf) | IQ4_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.Q5_K_M.gguf) | Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.Q6_K.gguf) | Q6_K | 5.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/SEVENLLM-Llama2-7B-CoT-GGUF/resolve/main/SEVENLLM-Llama2-7B-CoT.f16.gguf) | f16 | 13.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
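The Usage section above points to TheBloke's READMEs for general GGUF instructions. As a hedged sketch (the quant file, context length, and prompt below are illustrative assumptions, not recommendations from this repo), one of the provided files can be loaded with `llama-cpp-python`:

```python
# Minimal sketch: run one of the GGUF quants listed above with llama-cpp-python.
# The file name and settings are assumptions; pick any quant from the table.
from llama_cpp import Llama

llm = Llama(
    model_path="SEVENLLM-Llama2-7B-CoT.Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=4096,        # context window, adjust as needed
    n_gpu_layers=-1,   # offload all layers to GPU; use 0 for CPU-only
)

output = llm(
    "Explain what an indicator of compromise (IoC) is.",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```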
Davlan/afro-xlmr-large-29L
Davlan
"2023-09-07T14:38:07Z"
9,672
0
transformers
[ "transformers", "pytorch", "jax", "xlm-roberta", "fill-mask", "generated_from_trainer", "en", "fr", "ar", "ha", "ig", "yo", "rn", "rw", "sn", "xh", "zu", "om", "am", "so", "st", "ny", "mg", "sw", "af", "pcm", "pt", "ee", "ln", "tw", "ts", "ti", "tn", "arz", "nso", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-06-10T18:15:31Z"
--- license: mit tags: - generated_from_trainer model-index: - name: afro-xlmr-large-29L results: [] language: - en - fr - ar - ha - ig - yo - rn - rw - sn - xh - zu - om - am - so - st - ny - mg - sw - af - pcm - pt - ee - ln - tw - ts - ti - tn - arz - nso --- # afro-xlmr-large-29L AfroXLMR-large was created by MLM adaptation of the XLM-R-large model on 25 African languages (Afrikaans, Amharic, Hausa, Igbo, Malagasy, Chichewa, Oromo, Nigerian-Pidgin, Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa, Yoruba, isiZulu, Ewe, Lingala, Twi, Tsonga, Tigrinya, Setswana, Egyptian Arabic, and Northern Sotho) covering the major African language families and 4 high-resource languages (Arabic, French, English, and Portuguese). ### Acknowledgment We would like to thank Google Cloud for providing us access to TPU v3-8 through the free cloud credits. The model was trained using Flax and converted to PyTorch. ### BibTeX entry and citation info ``` @inproceedings{alabi-etal-2022-adapting, title = "Adapting Pre-trained Language Models to {A}frican Languages via Multilingual Adaptive Fine-Tuning", author = "Alabi, Jesujoba O. and Adelani, David Ifeoluwa and Mosbach, Marius and Klakow, Dietrich", booktitle = "Proceedings of the 29th International Conference on Computational Linguistics", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "International Committee on Computational Linguistics", url = "https://aclanthology.org/2022.coling-1.382", pages = "4336--4349", abstract = "Multilingual pre-trained language models (PLMs) have demonstrated impressive performance on several downstream tasks for both high-resourced and low-resourced languages. However, there is still a large performance drop for languages unseen during pre-training, especially African languages. One of the most effective approaches to adapt to a new language is language adaptive fine-tuning (LAFT) {---} fine-tuning a multilingual PLM on monolingual texts of a language using the pre-training objective. However, adapting to target language individually takes large disk space and limits the cross-lingual transfer abilities of the resulting models because they have been specialized for a single language. In this paper, we perform multilingual adaptive fine-tuning on 17 most-resourced African languages and three other high-resource languages widely spoken on the African continent to encourage cross-lingual transfer learning. To further specialize the multilingual PLM, we removed vocabulary tokens from the embedding layer that corresponds to non-African writing scripts before MAFT, thus reducing the model size by around 50{\%}. Our evaluation on two multilingual PLMs (AfriBERTa and XLM-R) and three NLP tasks (NER, news topic classification, and sentiment classification) shows that our approach is competitive to applying LAFT on individual languages while requiring significantly less disk space. Additionally, we show that our adapted PLM also improves the zero-shot cross-lingual transfer abilities of parameter efficient fine-tuning methods.", } ```
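The card above does not show how to query the model. As a minimal sketch (the example sentence is an illustrative assumption; XLM-R-based tokenizers use `<mask>` as the mask token), fill-mask inference works with the standard pipeline:

```python
# Minimal sketch (not part of the original card): masked-token prediction.
# The Swahili example ("Nairobi is the capital of <mask>.") is illustrative.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Davlan/afro-xlmr-large-29L")

for prediction in unmasker("Nairobi ni mji mkuu wa <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```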
MaziyarPanahi/Qwen2-7B-Instruct-v0.1-GGUF
MaziyarPanahi
"2024-06-27T14:58:20Z"
9,667
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "qwen", "qwen-2", "base_model:MaziyarPanahi/Qwen2-7B-Instruct-v0.1", "text-generation-inference", "region:us" ]
text-generation
"2024-06-27T14:19:56Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - qwen - qwen-2 - text-generation model_name: Qwen2-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/Qwen2-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Qwen2-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Qwen2-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.1) ## Description [MaziyarPanahi/Qwen2-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Qwen2-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.1). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
apple/mobilevit-x-small
apple
"2022-08-29T07:57:54Z"
9,659
3
transformers
[ "transformers", "pytorch", "tf", "coreml", "mobilevit", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2110.02178", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-05-30T12:45:19Z"
--- license: other tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # MobileViT (extra small-sized model) MobileViT model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE). Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import MobileViTFeatureExtractor, MobileViTForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/mobilevit-x-small") model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-x-small") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. ## Training data The MobileViT model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes. ## Training procedure ### Preprocessing Training requires only basic data augmentation, i.e. random resized cropping and horizontal flipping. To learn multi-scale representations without requiring fine-tuning, a multi-scale sampler was used during training, with image sizes randomly sampled from: (160, 160), (192, 192), (256, 256), (288, 288), (320, 320). At inference time, images are resized/rescaled to the same resolution (288x288), and center-cropped at 256x256. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB. 
### Pretraining The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |------------------|-------------------------|-------------------------|-----------|-------------------------------------------------| | MobileViT-XXS | 69.0 | 88.9 | 1.3 M | https://huggingface.co/apple/mobilevit-xx-small | | **MobileViT-XS** | **74.8** | **92.3** | **2.3 M** | https://huggingface.co/apple/mobilevit-x-small | | MobileViT-S | 78.4 | 94.1 | 5.6 M | https://huggingface.co/apple/mobilevit-small | ### BibTeX entry and citation info ```bibtex @inproceedings{vision-transformer, title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer}, author = {Sachin Mehta and Mohammad Rastegari}, year = {2022}, URL = {https://arxiv.org/abs/2110.02178} } ```
solidrust/Hermes-2-Pro-Llama-3-8B-AWQ
solidrust
"2024-05-03T00:09:40Z"
9,656
10
transformers
[ "transformers", "safetensors", "llama", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "license:apache-2.0", "text-generation-inference", "region:us" ]
text-generation
"2024-05-01T23:26:09Z"
--- library_name: transformers tags: - 4-bit - AWQ - text-generation - autotrain_compatible - endpoints_compatible - Llama-3 - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode - axolotl model-index: - name: Hermes-2-Pro-Llama-3-8B results: [] license: apache-2.0 language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 Pro messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. pipeline_tag: text-generation inference: false quantized_by: Suparious --- # NousResearch/Hermes-2-Pro-Llama-3-8B AWQ - Model creator: [NousResearch](https://huggingface.co/NousResearch) - Original model: [Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png) ## Model Description Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house. This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation. Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below. This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - `<tools>`, `<tool_call>`, `<tool_response>` and their closing tags are single tokens now. This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/Hermes-2-Pro-Llama-3-8B-AWQ" system_message = "You are Hermes-2-Pro-Llama-3-8B, incarnated as a powerful AI. You were created by NousResearch." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" 
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
jameslahm/yolov10s
jameslahm
"2024-06-03T13:26:55Z"
9,650
5
transformers
[ "transformers", "safetensors", "object-detection", "computer-vision", "yolov10", "dataset:detection-datasets/coco", "arxiv:2405.14458", "license:agpl-3.0", "region:us" ]
object-detection
"2024-06-01T10:40:50Z"
--- license: agpl-3.0 tags: - object-detection - computer-vision - yolov10 datasets: - detection-datasets/coco inference: false --- ### Model Description [YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458v1) - arXiv: https://arxiv.org/abs/2405.14458v1 - github: https://github.com/THU-MIG/yolov10 ### Installation ``` pip install git+https://github.com/THU-MIG/yolov10.git ``` ### Training and validation ```python from ultralytics import YOLOv10 model = YOLOv10.from_pretrained('jameslahm/yolov10s') # Training model.train(...) # after training, one can push to the hub model.push_to_hub("your-hf-username/yolov10-finetuned") # Validation model.val(...) ``` ### Inference Here's an end-to-end example showcasing inference on a cats image: ```python from ultralytics import YOLOv10 model = YOLOv10.from_pretrained('jameslahm/yolov10s') source = 'http://images.cocodataset.org/val2017/000000039769.jpg' model.predict(source=source, save=True) ``` which shows: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/628ece6054698ce61d1e7be3/33BsCwWkygl6cEHQHAjjH.png) ### BibTeX Entry and Citation Info ``` @article{wang2024yolov10, title={YOLOv10: Real-Time End-to-End Object Detection}, author={Wang, Ao and Chen, Hui and Liu, Lihao and Chen, Kai and Lin, Zijia and Han, Jungong and Ding, Guiguang}, journal={arXiv preprint arXiv:2405.14458}, year={2024} } ```
mradermacher/Llama3-8B-Ai4RD-GGUF
mradermacher
"2024-06-20T03:48:44Z"
9,649
0
transformers
[ "transformers", "gguf", "en", "base_model:zhouwenzhong/Llama3-8B-Ai4RD", "endpoints_compatible", "region:us" ]
null
"2024-06-20T03:20:17Z"
--- base_model: zhouwenzhong/Llama3-8B-Ai4RD language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/zhouwenzhong/Llama3-8B-Ai4RD <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-Ai4RD-GGUF/resolve/main/Llama3-8B-Ai4RD.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
QuantFactory/Llama-3-Taiwan-8B-Instruct-GGUF
QuantFactory
"2024-07-02T06:35:41Z"
9,647
0
null
[ "gguf", "region:us" ]
null
"2024-07-02T05:06:07Z"
Entry not found
unsloth/llama-3-70b-Instruct-bnb-4bit
unsloth
"2024-05-16T15:54:32Z"
9,637
34
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "llama-3", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
text-generation
"2024-04-19T17:39:17Z"
--- language: - en license: apache-2.0 library_name: transformers tags: - unsloth - transformers - llama - llama-3 --- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth! We have a Google Colab Tesla T4 notebook for Llama-3 8b here: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2.4x faster | 58% less | | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less | | **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
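Beyond the Colab and Kaggle notebooks linked above, the checkpoint can also be loaded locally. The sketch below is a hedged example: the sequence length, LoRA rank, and target modules are common Unsloth defaults assumed for illustration, not values prescribed by this card.

```python
# Minimal local-loading sketch (settings are illustrative assumptions).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-70b-Instruct-bnb-4bit",
    max_seq_length=2048,   # assumed value; set to what your task needs
    dtype=None,            # auto-detect float16 / bfloat16
    load_in_4bit=True,     # weights are already bitsandbytes 4-bit quantized
)

# Attach LoRA adapters before finetuning (rank and targets are illustrative).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
```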
mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF
mradermacher
"2024-06-21T18:11:57Z"
9,630
1
transformers
[ "transformers", "gguf", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "merges", "en", "dataset:teknium/OpenHermes-2.5", "base_model:OpenPipe/Hermes-2-Theta-Llama-3-8B-32k", "endpoints_compatible", "region:us" ]
null
"2024-06-21T12:34:17Z"
--- base_model: OpenPipe/Hermes-2-Theta-Llama-3-8B-32k datasets: - teknium/OpenHermes-2.5 language: - en library_name: transformers quantized_by: mradermacher tags: - Llama-3 - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode - axolotl - merges --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/OpenPipe/Hermes-2-Theta-Llama-3-8B-32k <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow 
comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit
astronomer
"2024-04-22T01:34:29Z"
9,627
23
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "facebook", "meta", "astronomer", "gptq", "pretrained", "quantized", "finetuned", "autotrain_compatible", "endpoints_compatible", "conversational", "dataset:wikitext", "arxiv:2210.17323", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "text-generation-inference", "4-bit", "region:us" ]
text-generation
"2024-04-19T15:55:54Z"
--- base_model: meta-llama/Meta-Llama-3-8B-Instruct inference: false model_creator: astronomer-io model_name: Meta-Llama-3-8B-Instruct model_type: llama pipeline_tag: text-generation prompt_template: "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}" quantized_by: davidxmle license: other license_name: llama-3-community-license license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/LICENSE tags: - llama - llama-3 - facebook - meta - astronomer - gptq - pretrained - quantized - finetuned - autotrain_compatible - endpoints_compatible datasets: - wikitext --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://www.astronomer.io/logo/astronomer-logo-RGB-standard-1200px.png" alt="Astronomer" style="width: 60%; min-width: 400px; display: block; margin: auto;"> </div> <div style="margin-top: 1.0em; margin-bottom: 1.0em;"></div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">This model is generously created and made open source by <a href="https://astronomer.io">Astronomer</a>.</p></div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Astronomer is the de facto company for <a href="https://airflow.apache.org/">Apache Airflow</a>, the most trusted open-source framework for data orchestration and MLOps.</p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama-3-8B-Instruct-GPTQ-4-Bit - Original Model creator: [Meta Llama from Meta](https://huggingface.co/meta-llama) - Original model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - Built with Meta Llama 3 - Quantized by [Astronomer](https://astronomer.io) # Important Note About Serving with vLLM & oobabooga/text-generation-webui - For loading this model onto vLLM, make sure all requests have `"stop_token_ids":[128001, 128009]` to temporarily address the non-stop generation issue. - vLLM does not yet respect `generation_config.json`. - vLLM team is working on a a fix for this https://github.com/vllm-project/vllm/issues/4180 - For oobabooga/text-generation-webui - Load the model via AutoGPTQ, with `no_inject_fused_attention` enabled. This is a bug with AutoGPTQ library. - Under `Parameters` -> `Generation` -> `Skip special tokens`: turn this off (deselect) - Under `Parameters` -> `Generation` -> `Custom stopping strings`: add `"<|end_of_text|>","<|eot_id|>"` to the field <!-- description start --> ## Description This repo contains 4 Bit quantized GPTQ model files for [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). This model can be loaded with less than 6 GB of VRAM (huge reduction from the original 16.07GB model) and can be served lightning fast with the cheapest Nvidia GPUs possible (Nvidia T4, Nvidia K80, RTX 4070, etc). The 4 bit GPTQ quant has small quality degradation from the original `bfloat16` model but can be served on much smaller GPUs with maximum improvement in latency and throughput. 
<!-- description end --> ## GPTQ Quantization Method - This model is quantized by utilizing the AutoGPTQ library, following best practices noted by [GPTQ paper](https://arxiv.org/abs/2210.17323) - Quantization is calibrated and aligned with random samples from the specified dataset (wikitext for now) for minimum accuracy loss. | Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Description | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 5.74 GB | Yes | 4-bit, with Act Order and group size 128g. Smallest model possible with small accuracy loss | | More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | May upload additional variants of GPTQ 4 bit models in the future using different parameters such as 128g group size and etc. | ## Serving this GPTQ model using vLLM Tested serving this model via vLLM using an Nvidia T4 (16GB VRAM). Tested with the below command ``` python -m vllm.entrypoints.openai.api_server --model astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit --max-model-len 8192 --dtype float16 ``` For the non-stop token generation bug, make sure to send requests with `stop_token_ids":[128001, 128009]` to vLLM endpoint Example: ```json { "model": "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit", "messages": [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who created Llama 3?"} ], "max_tokens": 2000, "stop_token_ids":[128001,128009] } ``` ### Prompt Template ``` <|begin_of_text|><|start_header_id|>user<|end_header_id|> {{prompt}}<|eot_id|> <|start_header_id|>assistant<|end_header_id|> ``` ### Contributors - Quantized by [David Xue, Machine Learning Engineer from Astronomer](https://www.linkedin.com/in/david-xue-uva/)
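Besides serving with vLLM as shown above, the checkpoint can be loaded directly with `transformers` (with `optimum` and `auto-gptq` installed). The sketch below is a hedged example; the chat messages and generation settings are illustrative assumptions, and the stop-token workaround mirrors the note at the top of this card.

```python
# Minimal sketch (not from the original card): load the GPTQ model with transformers.
# Requires: pip install transformers optimum auto-gptq
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who created Llama 3?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Pass both <|end_of_text|> (128001) and <|eot_id|> (128009) to avoid run-on generation.
outputs = model.generate(inputs, max_new_tokens=256, eos_token_id=[128001, 128009])
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```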
QuantFactory/Qwen2-7B-Multilingual-RP-GGUF
QuantFactory
"2024-06-26T02:00:36Z"
9,626
0
null
[ "gguf", "region:us" ]
null
"2024-06-26T01:04:03Z"
Entry not found
laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K
laion
"2023-04-18T22:04:49Z"
9,623
5
open_clip
[ "open_clip", "tensorboard", "safetensors", "clip", "zero-shot-image-classification", "arxiv:2201.03545", "arxiv:1910.04867", "license:mit", "region:us" ]
zero-shot-image-classification
"2023-01-03T00:25:22Z"
--- license: mit library_name: open_clip pipeline_tag: zero-shot-image-classification tags: - clip --- # Model Card for CLIP-convnext_base_w.laion_aesthetic-s13B-b82k # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) 5. [Acknowledgements](#acknowledgements) 6. [Citation](#citation) # Model Details ## Model Description A series of CLIP [ConvNeXt-Base](https://arxiv.org/abs/2201.03545) (w/ wide embed dim) models trained on subsets LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip). Goals: * Explore an alternative to ViT and ResNet (w/ AttentionPooling) CLIP models that scales well with model size and image resolution Firsts: * First known ConvNeXt CLIP models trained at scale in the range of CLIP ViT-B/16 and RN50x4 models * First released model weights exploring increase of augmentation + regularization for image tower via adding (greater scale range of RRC, random erasing, stochastic depth) The models utilize the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Base model (`convnext_base`) as the image tower, and the same text tower as the RN50x4 (depth 12, embed dim 640) model from OpenAI CLIP. The base models are trained at 256x256 image resolution and roughly match the RN50x4 models on FLOPs and activation counts. The models with `320` in the name are trained at 320x320. All models in this series were trained for 13B samples and have ImageNet Zero-Shot top-1 of >= 70.8%. Comparing to ViT-B/16 at 34B SS with zero-shot of 70.2% (68.1% for 13B SS) this suggests the ConvNeXt architecture may be more sample efficient in this range of model scale. More experiments needed to confirm. | Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) | | ----- | ------- | ---------- | ------------ | --------- | | [convnext_base_w.laion2b_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K) | LAION-2B | 256x256 | RRC (0.9, 1.0) | 70.8 | | [convnext_base_w.laion2b_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.5 | | [convnext_base_w.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K) | LAION-A | 256x256 | RRC (0.9, 1.0) | 71.0 | | [convnext_base_w_320.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K) | LAION-A | 320x320 | RRC (0.9, 1.0) | 71.7 | | [convnext_base_w_320.laion_aesthetic_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K-augreg) | LAION-A | 320x320 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.3 | RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and asthetic score filtering. Model training done by Ross Wightman across both the [stability.ai](https://stability.ai/) cluster and the [JUWELS Booster](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html) supercomputer. See acknowledgements below. # Uses As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. 
We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models. The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset. ## Direct Use Zero-shot image classification, image and text retrieval, among others. ## Downstream Use Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others. ## Out-of-Scope Use As per the OpenAI models, **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. Further to the above notice, the LAION-5B dataset used in training of these models has additional considerations, see below. # Training Details ## Training Data This model was trained with one of (see table in intro): * LAION-2B - A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/). * LAION-Aesthetic - A 900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering **IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning also holds there.
We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress. ## Training Procedure All models were trained with a global batch size of 81920 for 64 checkpoint intervals of 203.7M samples for a total of ~13B samples seen over training. For 256x256 models, a slurm script w/ srun below was used on 20 8-GPU (A100 40GB) nodes (Stability), switching to 40 4-GPU nodes for time on JUWELS. ``` /opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \ --save-frequency 1 \ --name "convnext_256" \ --resume 'latest' \ --train-data="pipe:aws s3 cp s3://mybucket/path/{laion{00000..xxxxx}.tar -" \ --train-num-samples 203666042 \ --dataset-type webdataset \ --precision amp_bfloat16 \ --warmup 10000 \ --batch-size=512 \ --epochs=64 \ --dataset-resampled \ --clip-grad-norm 5.0 \ --lr 1e-3 \ --workers=6 \ --model "convnext_base_w" \ --seed 0 \ --ddp-static-graph \ --local-loss \ --gather-with-grad \ --grad-checkpointing ``` For 320x320 models, same as above but w/ 32 8-GPU nodes, local batch size 320, or 64 4-GPU nodes on JUWELs. # Evaluation Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark). ## Testing Data, Factors & Metrics ### Testing Data The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval. ## Results The models achieve between 70.8 and 71.7 zero-shot top-1 accuracy on ImageNet-1k. ![](convnext_base_w_zero_shot.png) An initial round of benchmarks have been performed on a wider range of datasets, to be viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb As part of exploring increased augmentation + regularization, early evalations suggest that `augreg` trained models evaluate well over a wider range of resolutions. This is especially true for the 320x320 LAION-A model, where the augreg run was lower than the non-augreg when evaluated at the train resolution of 320x320 (71.3 vs 71.7), but improves to 72.2 when evaluated at 384x384 (the non-augreg drops to 71.0 at 384x384). # Acknowledgements Acknowledging [stability.ai](https://stability.ai/) and the Gauss Centre for Supercomputing e.V. (http://gauss-centre.eu) for funding this part of work by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC). 
# Citation **BibTeX:** LAION-5B ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` OpenCLIP software ```bibtex @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` OpenAI CLIP paper ```bibtex @inproceedings{Radford2021LearningTV, title={Learning Transferable Visual Models From Natural Language Supervision}, author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` ```bibtex @Article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
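The card lists zero-shot image classification and retrieval as the direct uses but does not include code. The sketch below is a hedged example using OpenCLIP's `hf-hub:` loading path; the local image file and candidate captions are illustrative assumptions.

```python
# Minimal zero-shot classification sketch (assumptions noted in comments).
import torch
from PIL import Image
import open_clip

repo = "hf-hub:laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K"
model, preprocess = open_clip.create_model_from_pretrained(repo)
tokenizer = open_clip.get_tokenizer(repo)

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # any local image
text = tokenizer(["a photo of a cat", "a photo of a dog", "a diagram"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probabilities over the candidate captions
```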
TheBloke/Luna-AI-Llama2-Uncensored-GGUF
TheBloke
"2023-09-27T12:48:10Z"
9,621
32
transformers
[ "transformers", "gguf", "llama", "base_model:Tap-M/Luna-AI-Llama2-Uncensored", "license:cc-by-sa-4.0", "text-generation-inference", "region:us" ]
null
"2023-09-06T02:12:58Z"
--- license: cc-by-sa-4.0 model_name: Luna AI Llama2 Uncensored base_model: Tap-M/Luna-AI-Llama2-Uncensored inference: false model_creator: Tap model_type: llama prompt_template: 'USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Luna AI Llama2 Uncensored - GGUF - Model creator: [Tap](https://huggingface.co/Tap-M) - Original model: [Luna AI Llama2 Uncensored](https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored) <!-- description start --> ## Description This repo contains GGUF format model files for [Tap-M's Luna AI Llama2 Uncensored](https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored). Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware used to make and upload these files! <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF)
* [Tap's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: User-Assistant

```
USER: {prompt}
ASSISTANT:
```

<!-- prompt-template end -->

<!-- licensing start -->
## Licensing

The creator of the source model has listed its license as `cc-by-sa-4.0`, and this quantization has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be treated as being licensed under both licenses. I contacted Hugging Face for clarification on dual licensing, but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Tap-M's Luna AI Llama2 Uncensored](https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored).
<!-- licensing end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).

They are also compatible with many third-party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [luna-ai-llama2-uncensored.Q2_K.gguf](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/blob/main/luna-ai-llama2-uncensored.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [luna-ai-llama2-uncensored.Q3_K_S.gguf](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/blob/main/luna-ai-llama2-uncensored.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [luna-ai-llama2-uncensored.Q3_K_M.gguf](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/blob/main/luna-ai-llama2-uncensored.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [luna-ai-llama2-uncensored.Q3_K_L.gguf](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/blob/main/luna-ai-llama2-uncensored.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [luna-ai-llama2-uncensored.Q4_0.gguf](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/blob/main/luna-ai-llama2-uncensored.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [luna-ai-llama2-uncensored.Q4_K_S.gguf](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/blob/main/luna-ai-llama2-uncensored.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [luna-ai-llama2-uncensored.Q4_K_M.gguf](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/blob/main/luna-ai-llama2-uncensored.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [luna-ai-llama2-uncensored.Q5_0.gguf](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/blob/main/luna-ai-llama2-uncensored.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [luna-ai-llama2-uncensored.Q5_K_S.gguf](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/blob/main/luna-ai-llama2-uncensored.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [luna-ai-llama2-uncensored.Q5_K_M.gguf](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/blob/main/luna-ai-llama2-uncensored.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [luna-ai-llama2-uncensored.Q6_K.gguf](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/blob/main/luna-ai-llama2-uncensored.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [luna-ai-llama2-uncensored.Q8_0.gguf](https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/blob/main/luna-ai-llama2-uncensored.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
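If you prefer to script the download, a minimal sketch (not part of the original card) using the `huggingface_hub` Python library might look like the following; it fetches the same `Q4_K_M` file used in the examples below, and `local_dir="."` is just an illustrative destination:

```python
from huggingface_hub import hf_hub_download

# Download a single quantised GGUF file instead of cloning the whole repo.
model_path = hf_hub_download(
    repo_id="TheBloke/Luna-AI-Llama2-Uncensored-GGUF",
    filename="luna-ai-llama2-uncensored.Q4_K_M.gguf",
    local_dir=".",  # illustrative destination directory
)
print(model_path)  # local path to the downloaded .gguf file
```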
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

- LM Studio
- LoLLMS Web UI
- Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/Luna-AI-Llama2-Uncensored-GGUF and below it, a specific filename to download, such as: luna-ai-llama2-uncensored.Q4_K_M.gguf. Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install 'huggingface-hub>=0.17.1'
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/Luna-AI-Llama2-Uncensored-GGUF luna-ai-llama2-uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/Luna-AI-Llama2-Uncensored-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Luna-AI-Llama2-Uncensored-GGUF luna-ai-llama2-uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m luna-ai-llama2-uncensored.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: {prompt}\nASSISTANT:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
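As a minimal sketch (not from the original card), loading the same GGUF file with `llama-cpp-python` might look like this, assuming the package is installed (e.g. `pip install llama-cpp-python`); the `n_ctx` and `n_gpu_layers` values simply mirror the llama.cpp command above and are illustrative:

```python
from llama_cpp import Llama

# Load the quantised GGUF file; set n_gpu_layers=0 if you have no GPU acceleration.
llm = Llama(
    model_path="luna-ai-llama2-uncensored.Q4_K_M.gguf",
    n_ctx=4096,       # context length, matching the -c 4096 example above
    n_gpu_layers=32,  # layers to offload to GPU, matching -ngl 32 above
)

# Use the card's USER/ASSISTANT prompt template.
prompt = "USER: Write a short poem about llamas.\nASSISTANT:"
output = llm(prompt, max_tokens=256, stop=["USER:"])
print(output["choices"][0]["text"])
```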
### How to load this model from Python using ctransformers

#### First install the package

```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```

#### Simple example code to load one of these GGUF models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Luna-AI-Llama2-Uncensored-GGUF", model_file="luna-ai-llama2-uncensored.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python or ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J.
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Tap-M's Luna AI Llama2 Uncensored

<div style="width: 800px; margin: auto;">

<h2>Model Description</h2>
<p>“Luna AI Llama2 Uncensored” is a Llama2-based chat model,<br />fine-tuned on over 40,000 long-form chat discussions.<br />
This model was fine-tuned by Tap, the creator of Luna AI.<br /></p>

<h2>Model Training</h2>
<p>The fine-tuning process was performed on an 8x A100 80GB machine.<br />The model was trained on synthetic outputs which include multiple rounds of chats between Human & AI.</p>

<a rel="noopener nofollow" href="https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GPTQ">4bit GPTQ Version provided by @TheBloke - for GPU inference</a><br />
<a rel="noopener nofollow" href="https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGML">GGML Version provided by @TheBloke - for CPU inference</a>

<h2>Prompt Format</h2>
<p>The model follows the Vicuna 1.1 / OpenChat format:</p>

```
USER: I have difficulties in making friends, and I really need someone to talk to. Would you be my friend?
ASSISTANT: Of course! Friends are always here for each other. What do you like to do?
```

<h2>Benchmark Results</h2>

|Task|Version|Metric|Value|Stderr|
|---:|---:|---:|---:|---:|
|arc_challenge|0|acc_norm|0.5512|0.0146|
|hellaswag|0||||
|mmlu|1|acc_norm|0.46521|0.036|
|truthfulqa_mc|1|mc2|0.4716|0.0155|
|Average|-|-|0.5114|0.0150|

</div>
<!-- original-model-card end -->
jinaai/jina-embeddings-v2-base-es
jinaai
"2024-04-22T14:31:39Z"
9,620
17
sentence-transformers
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "custom_code", "es", "en", "arxiv:2108.12409", "arxiv:2402.17016", "license:apache-2.0", "model-index", "region:eu" ]
feature-extraction
"2024-01-24T09:54:03Z"
--- tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb language: - es - en inference: false license: apache-2.0 model-index: - name: jina-embeddings-v2-base-es results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 74.25373134328358 - type: ap value: 37.05201236793268 - type: f1 value: 68.16770391201077 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 78.30885 - type: ap value: 73.01622441156408 - type: f1 value: 78.20769284466313 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.324 - type: f1 value: 37.89543008761673 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (es) config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.678000000000004 - type: f1 value: 38.122639506976 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 23.968999999999998 - type: map_at_10 value: 40.691 - type: map_at_100 value: 41.713 - type: map_at_1000 value: 41.719 - type: map_at_3 value: 35.42 - type: map_at_5 value: 38.442 - type: mrr_at_1 value: 24.395 - type: mrr_at_10 value: 40.853 - type: mrr_at_100 value: 41.869 - type: mrr_at_1000 value: 41.874 - type: mrr_at_3 value: 35.68 - type: mrr_at_5 value: 38.572 - type: ndcg_at_1 value: 23.968999999999998 - type: ndcg_at_10 value: 50.129999999999995 - type: ndcg_at_100 value: 54.364000000000004 - type: ndcg_at_1000 value: 54.494 - type: ndcg_at_3 value: 39.231 - type: ndcg_at_5 value: 44.694 - type: precision_at_1 value: 23.968999999999998 - type: precision_at_10 value: 8.036999999999999 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 16.761 - type: precision_at_5 value: 12.717 - type: recall_at_1 value: 23.968999999999998 - type: recall_at_10 value: 80.36999999999999 - type: recall_at_100 value: 98.578 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 50.28399999999999 - type: recall_at_5 value: 63.585 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 41.54886683150053 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 32.186028697637234 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 61.19432643698725 - type: mrr value: 75.28646176845622 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 86.3828259381228 - type: 
cos_sim_spearman value: 83.04647058342209 - type: euclidean_pearson value: 84.02895346096244 - type: euclidean_spearman value: 82.34524978635342 - type: manhattan_pearson value: 84.35030723233426 - type: manhattan_spearman value: 83.17177464337936 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 85.25649350649351 - type: f1 value: 85.22320474023192 - task: type: Clustering dataset: type: jinaai/big-patent-clustering name: MTEB BigPatentClustering config: default split: test revision: 62d5330920bca426ce9d3c76ea914f15fc83e891 metrics: - type: v_measure value: 20.42929408254094 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.165318177498136 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 28.89030154229562 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.119 - type: map_at_10 value: 42.092 - type: map_at_100 value: 43.506 - type: map_at_1000 value: 43.631 - type: map_at_3 value: 38.373000000000005 - type: map_at_5 value: 40.501 - type: mrr_at_1 value: 38.196999999999996 - type: mrr_at_10 value: 48.237 - type: mrr_at_100 value: 48.914 - type: mrr_at_1000 value: 48.959 - type: mrr_at_3 value: 45.279 - type: mrr_at_5 value: 47.11 - type: ndcg_at_1 value: 38.196999999999996 - type: ndcg_at_10 value: 48.849 - type: ndcg_at_100 value: 53.713 - type: ndcg_at_1000 value: 55.678000000000004 - type: ndcg_at_3 value: 43.546 - type: ndcg_at_5 value: 46.009 - type: precision_at_1 value: 38.196999999999996 - type: precision_at_10 value: 9.642000000000001 - type: precision_at_100 value: 1.5190000000000001 - type: precision_at_1000 value: 0.199 - type: precision_at_3 value: 21.65 - type: precision_at_5 value: 15.708 - type: recall_at_1 value: 30.119 - type: recall_at_10 value: 61.788 - type: recall_at_100 value: 82.14399999999999 - type: recall_at_1000 value: 95.003 - type: recall_at_3 value: 45.772 - type: recall_at_5 value: 53.04600000000001 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.979 - type: map_at_10 value: 37.785000000000004 - type: map_at_100 value: 38.945 - type: map_at_1000 value: 39.071 - type: map_at_3 value: 35.083999999999996 - type: map_at_5 value: 36.571999999999996 - type: mrr_at_1 value: 36.242000000000004 - type: mrr_at_10 value: 43.552 - type: mrr_at_100 value: 44.228 - type: mrr_at_1000 value: 44.275999999999996 - type: mrr_at_3 value: 41.359 - type: mrr_at_5 value: 42.598 - type: ndcg_at_1 value: 36.242000000000004 - type: ndcg_at_10 value: 42.94 - type: ndcg_at_100 value: 47.343 - type: ndcg_at_1000 value: 49.538 - type: ndcg_at_3 value: 39.086999999999996 - type: ndcg_at_5 value: 40.781 - type: precision_at_1 value: 36.242000000000004 - type: precision_at_10 value: 7.954999999999999 - type: precision_at_100 value: 1.303 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 18.556 - type: precision_at_5 value: 13.145999999999999 - type: 
recall_at_1 value: 28.979 - type: recall_at_10 value: 51.835 - type: recall_at_100 value: 70.47 - type: recall_at_1000 value: 84.68299999999999 - type: recall_at_3 value: 40.410000000000004 - type: recall_at_5 value: 45.189 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 37.878 - type: map_at_10 value: 49.903 - type: map_at_100 value: 50.797000000000004 - type: map_at_1000 value: 50.858000000000004 - type: map_at_3 value: 46.526 - type: map_at_5 value: 48.615 - type: mrr_at_1 value: 43.135 - type: mrr_at_10 value: 53.067 - type: mrr_at_100 value: 53.668000000000006 - type: mrr_at_1000 value: 53.698 - type: mrr_at_3 value: 50.449 - type: mrr_at_5 value: 52.117000000000004 - type: ndcg_at_1 value: 43.135 - type: ndcg_at_10 value: 55.641 - type: ndcg_at_100 value: 59.427 - type: ndcg_at_1000 value: 60.655 - type: ndcg_at_3 value: 49.969 - type: ndcg_at_5 value: 53.075 - type: precision_at_1 value: 43.135 - type: precision_at_10 value: 8.997 - type: precision_at_100 value: 1.1809999999999998 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 22.215 - type: precision_at_5 value: 15.586 - type: recall_at_1 value: 37.878 - type: recall_at_10 value: 69.405 - type: recall_at_100 value: 86.262 - type: recall_at_1000 value: 95.012 - type: recall_at_3 value: 54.458 - type: recall_at_5 value: 61.965 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.853 - type: map_at_10 value: 32.402 - type: map_at_100 value: 33.417 - type: map_at_1000 value: 33.498 - type: map_at_3 value: 30.024 - type: map_at_5 value: 31.407 - type: mrr_at_1 value: 26.667 - type: mrr_at_10 value: 34.399 - type: mrr_at_100 value: 35.284 - type: mrr_at_1000 value: 35.345 - type: mrr_at_3 value: 32.109 - type: mrr_at_5 value: 33.375 - type: ndcg_at_1 value: 26.667 - type: ndcg_at_10 value: 36.854 - type: ndcg_at_100 value: 42.196 - type: ndcg_at_1000 value: 44.303 - type: ndcg_at_3 value: 32.186 - type: ndcg_at_5 value: 34.512 - type: precision_at_1 value: 26.667 - type: precision_at_10 value: 5.559 - type: precision_at_100 value: 0.88 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 13.333 - type: precision_at_5 value: 9.379 - type: recall_at_1 value: 24.853 - type: recall_at_10 value: 48.636 - type: recall_at_100 value: 73.926 - type: recall_at_1000 value: 89.94 - type: recall_at_3 value: 36.266 - type: recall_at_5 value: 41.723 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 14.963999999999999 - type: map_at_10 value: 22.591 - type: map_at_100 value: 23.735999999999997 - type: map_at_1000 value: 23.868000000000002 - type: map_at_3 value: 20.093 - type: map_at_5 value: 21.499 - type: mrr_at_1 value: 18.407999999999998 - type: mrr_at_10 value: 26.863 - type: mrr_at_100 value: 27.87 - type: mrr_at_1000 value: 27.947 - type: mrr_at_3 value: 24.254 - type: mrr_at_5 value: 25.784000000000002 - type: ndcg_at_1 value: 18.407999999999998 - type: ndcg_at_10 value: 27.549 - type: ndcg_at_100 value: 33.188 - type: ndcg_at_1000 value: 36.312 - type: ndcg_at_3 value: 22.862 - type: ndcg_at_5 value: 25.130999999999997 - type: precision_at_1 value: 18.407999999999998 - type: precision_at_10 value: 5.087 - type: precision_at_100 value: 0.923 - type: 
precision_at_1000 value: 0.133 - type: precision_at_3 value: 10.987 - type: precision_at_5 value: 8.209 - type: recall_at_1 value: 14.963999999999999 - type: recall_at_10 value: 38.673 - type: recall_at_100 value: 63.224999999999994 - type: recall_at_1000 value: 85.443 - type: recall_at_3 value: 25.840000000000003 - type: recall_at_5 value: 31.503999999999998 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.861000000000004 - type: map_at_10 value: 37.562 - type: map_at_100 value: 38.906 - type: map_at_1000 value: 39.021 - type: map_at_3 value: 34.743 - type: map_at_5 value: 36.168 - type: mrr_at_1 value: 34.455999999999996 - type: mrr_at_10 value: 43.428 - type: mrr_at_100 value: 44.228 - type: mrr_at_1000 value: 44.278 - type: mrr_at_3 value: 41.001 - type: mrr_at_5 value: 42.315000000000005 - type: ndcg_at_1 value: 34.455999999999996 - type: ndcg_at_10 value: 43.477 - type: ndcg_at_100 value: 48.953 - type: ndcg_at_1000 value: 51.19200000000001 - type: ndcg_at_3 value: 38.799 - type: ndcg_at_5 value: 40.743 - type: precision_at_1 value: 34.455999999999996 - type: precision_at_10 value: 7.902000000000001 - type: precision_at_100 value: 1.244 - type: precision_at_1000 value: 0.161 - type: precision_at_3 value: 18.511 - type: precision_at_5 value: 12.859000000000002 - type: recall_at_1 value: 27.861000000000004 - type: recall_at_10 value: 55.36 - type: recall_at_100 value: 78.384 - type: recall_at_1000 value: 93.447 - type: recall_at_3 value: 41.926 - type: recall_at_5 value: 47.257 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.375 - type: map_at_10 value: 35.571000000000005 - type: map_at_100 value: 36.785000000000004 - type: map_at_1000 value: 36.905 - type: map_at_3 value: 32.49 - type: map_at_5 value: 34.123999999999995 - type: mrr_at_1 value: 32.647999999999996 - type: mrr_at_10 value: 40.598 - type: mrr_at_100 value: 41.484 - type: mrr_at_1000 value: 41.546 - type: mrr_at_3 value: 37.9 - type: mrr_at_5 value: 39.401 - type: ndcg_at_1 value: 32.647999999999996 - type: ndcg_at_10 value: 41.026 - type: ndcg_at_100 value: 46.365 - type: ndcg_at_1000 value: 48.876 - type: ndcg_at_3 value: 35.843 - type: ndcg_at_5 value: 38.118 - type: precision_at_1 value: 32.647999999999996 - type: precision_at_10 value: 7.443 - type: precision_at_100 value: 1.18 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 16.819 - type: precision_at_5 value: 11.985999999999999 - type: recall_at_1 value: 26.375 - type: recall_at_10 value: 52.471000000000004 - type: recall_at_100 value: 75.354 - type: recall_at_1000 value: 92.35 - type: recall_at_3 value: 37.893 - type: recall_at_5 value: 43.935 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.012666666666668 - type: map_at_10 value: 33.685833333333335 - type: map_at_100 value: 34.849250000000005 - type: map_at_1000 value: 34.970083333333335 - type: map_at_3 value: 31.065083333333334 - type: map_at_5 value: 32.494416666666666 - type: mrr_at_1 value: 29.772666666666662 - type: mrr_at_10 value: 37.824666666666666 - type: mrr_at_100 value: 38.66741666666666 - type: mrr_at_1000 value: 38.72916666666666 - type: mrr_at_3 value: 35.54575 - type: mrr_at_5 value: 36.81524999999999 - type: 
ndcg_at_1 value: 29.772666666666662 - type: ndcg_at_10 value: 38.78241666666666 - type: ndcg_at_100 value: 43.84591666666667 - type: ndcg_at_1000 value: 46.275416666666665 - type: ndcg_at_3 value: 34.33416666666667 - type: ndcg_at_5 value: 36.345166666666664 - type: precision_at_1 value: 29.772666666666662 - type: precision_at_10 value: 6.794916666666667 - type: precision_at_100 value: 1.106416666666667 - type: precision_at_1000 value: 0.15033333333333335 - type: precision_at_3 value: 15.815083333333336 - type: precision_at_5 value: 11.184166666666664 - type: recall_at_1 value: 25.012666666666668 - type: recall_at_10 value: 49.748500000000014 - type: recall_at_100 value: 72.11341666666667 - type: recall_at_1000 value: 89.141 - type: recall_at_3 value: 37.242999999999995 - type: recall_at_5 value: 42.49033333333333 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.177 - type: map_at_10 value: 29.310000000000002 - type: map_at_100 value: 30.188 - type: map_at_1000 value: 30.29 - type: map_at_3 value: 27.356 - type: map_at_5 value: 28.410999999999998 - type: mrr_at_1 value: 26.074 - type: mrr_at_10 value: 32.002 - type: mrr_at_100 value: 32.838 - type: mrr_at_1000 value: 32.909 - type: mrr_at_3 value: 30.317 - type: mrr_at_5 value: 31.222 - type: ndcg_at_1 value: 26.074 - type: ndcg_at_10 value: 32.975 - type: ndcg_at_100 value: 37.621 - type: ndcg_at_1000 value: 40.253 - type: ndcg_at_3 value: 29.452 - type: ndcg_at_5 value: 31.020999999999997 - type: precision_at_1 value: 26.074 - type: precision_at_10 value: 5.077 - type: precision_at_100 value: 0.8049999999999999 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 12.526000000000002 - type: precision_at_5 value: 8.588999999999999 - type: recall_at_1 value: 23.177 - type: recall_at_10 value: 41.613 - type: recall_at_100 value: 63.287000000000006 - type: recall_at_1000 value: 83.013 - type: recall_at_3 value: 31.783 - type: recall_at_5 value: 35.769 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.856 - type: map_at_10 value: 22.651 - type: map_at_100 value: 23.649 - type: map_at_1000 value: 23.783 - type: map_at_3 value: 20.591 - type: map_at_5 value: 21.684 - type: mrr_at_1 value: 19.408 - type: mrr_at_10 value: 26.51 - type: mrr_at_100 value: 27.356 - type: mrr_at_1000 value: 27.439999999999998 - type: mrr_at_3 value: 24.547 - type: mrr_at_5 value: 25.562 - type: ndcg_at_1 value: 19.408 - type: ndcg_at_10 value: 27.072000000000003 - type: ndcg_at_100 value: 31.980999999999998 - type: ndcg_at_1000 value: 35.167 - type: ndcg_at_3 value: 23.338 - type: ndcg_at_5 value: 24.94 - type: precision_at_1 value: 19.408 - type: precision_at_10 value: 4.9590000000000005 - type: precision_at_100 value: 0.8710000000000001 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 11.138 - type: precision_at_5 value: 7.949000000000001 - type: recall_at_1 value: 15.856 - type: recall_at_10 value: 36.578 - type: recall_at_100 value: 58.89 - type: recall_at_1000 value: 81.743 - type: recall_at_3 value: 25.94 - type: recall_at_5 value: 30.153999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.892 - type: map_at_10 value: 33.899 - type: map_at_100 
value: 34.955000000000005 - type: map_at_1000 value: 35.066 - type: map_at_3 value: 31.41 - type: map_at_5 value: 32.669 - type: mrr_at_1 value: 30.224 - type: mrr_at_10 value: 37.936 - type: mrr_at_100 value: 38.777 - type: mrr_at_1000 value: 38.85 - type: mrr_at_3 value: 35.821 - type: mrr_at_5 value: 36.894 - type: ndcg_at_1 value: 30.224 - type: ndcg_at_10 value: 38.766 - type: ndcg_at_100 value: 43.806 - type: ndcg_at_1000 value: 46.373999999999995 - type: ndcg_at_3 value: 34.325 - type: ndcg_at_5 value: 36.096000000000004 - type: precision_at_1 value: 30.224 - type: precision_at_10 value: 6.446000000000001 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 15.392 - type: precision_at_5 value: 10.671999999999999 - type: recall_at_1 value: 25.892 - type: recall_at_10 value: 49.573 - type: recall_at_100 value: 71.885 - type: recall_at_1000 value: 89.912 - type: recall_at_3 value: 37.226 - type: recall_at_5 value: 41.74 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.915 - type: map_at_10 value: 33.613 - type: map_at_100 value: 35.333999999999996 - type: map_at_1000 value: 35.563 - type: map_at_3 value: 31.203999999999997 - type: map_at_5 value: 32.479 - type: mrr_at_1 value: 29.447000000000003 - type: mrr_at_10 value: 38.440000000000005 - type: mrr_at_100 value: 39.459 - type: mrr_at_1000 value: 39.513999999999996 - type: mrr_at_3 value: 36.495 - type: mrr_at_5 value: 37.592 - type: ndcg_at_1 value: 29.447000000000003 - type: ndcg_at_10 value: 39.341 - type: ndcg_at_100 value: 45.382 - type: ndcg_at_1000 value: 47.921 - type: ndcg_at_3 value: 35.671 - type: ndcg_at_5 value: 37.299 - type: precision_at_1 value: 29.447000000000003 - type: precision_at_10 value: 7.648000000000001 - type: precision_at_100 value: 1.567 - type: precision_at_1000 value: 0.241 - type: precision_at_3 value: 17.194000000000003 - type: precision_at_5 value: 12.253 - type: recall_at_1 value: 23.915 - type: recall_at_10 value: 49.491 - type: recall_at_100 value: 76.483 - type: recall_at_1000 value: 92.674 - type: recall_at_3 value: 38.878 - type: recall_at_5 value: 43.492 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 20.283 - type: map_at_10 value: 26.851000000000003 - type: map_at_100 value: 27.973 - type: map_at_1000 value: 28.087 - type: map_at_3 value: 24.887 - type: map_at_5 value: 25.804 - type: mrr_at_1 value: 22.366 - type: mrr_at_10 value: 28.864 - type: mrr_at_100 value: 29.903000000000002 - type: mrr_at_1000 value: 29.988 - type: mrr_at_3 value: 27.017999999999997 - type: mrr_at_5 value: 27.813 - type: ndcg_at_1 value: 22.366 - type: ndcg_at_10 value: 30.898999999999997 - type: ndcg_at_100 value: 36.176 - type: ndcg_at_1000 value: 39.036 - type: ndcg_at_3 value: 26.932000000000002 - type: ndcg_at_5 value: 28.416999999999998 - type: precision_at_1 value: 22.366 - type: precision_at_10 value: 4.824 - type: precision_at_100 value: 0.804 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 11.459999999999999 - type: precision_at_5 value: 7.8740000000000006 - type: recall_at_1 value: 20.283 - type: recall_at_10 value: 41.559000000000005 - type: recall_at_100 value: 65.051 - type: recall_at_1000 value: 86.47200000000001 - type: recall_at_3 value: 30.524 - type: recall_at_5 value: 34.11 - 
task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 11.326 - type: map_at_10 value: 19.357 - type: map_at_100 value: 21.014 - type: map_at_1000 value: 21.188000000000002 - type: map_at_3 value: 16.305 - type: map_at_5 value: 17.886 - type: mrr_at_1 value: 24.820999999999998 - type: mrr_at_10 value: 36.150999999999996 - type: mrr_at_100 value: 37.080999999999996 - type: mrr_at_1000 value: 37.123 - type: mrr_at_3 value: 32.952999999999996 - type: mrr_at_5 value: 34.917 - type: ndcg_at_1 value: 24.820999999999998 - type: ndcg_at_10 value: 27.131 - type: ndcg_at_100 value: 33.841 - type: ndcg_at_1000 value: 37.159 - type: ndcg_at_3 value: 22.311 - type: ndcg_at_5 value: 24.026 - type: precision_at_1 value: 24.820999999999998 - type: precision_at_10 value: 8.450000000000001 - type: precision_at_100 value: 1.557 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 16.612 - type: precision_at_5 value: 12.808 - type: recall_at_1 value: 11.326 - type: recall_at_10 value: 32.548 - type: recall_at_100 value: 55.803000000000004 - type: recall_at_1000 value: 74.636 - type: recall_at_3 value: 20.549 - type: recall_at_5 value: 25.514 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 7.481 - type: map_at_10 value: 15.043999999999999 - type: map_at_100 value: 20.194000000000003 - type: map_at_1000 value: 21.423000000000002 - type: map_at_3 value: 11.238 - type: map_at_5 value: 12.828999999999999 - type: mrr_at_1 value: 54.50000000000001 - type: mrr_at_10 value: 64.713 - type: mrr_at_100 value: 65.216 - type: mrr_at_1000 value: 65.23 - type: mrr_at_3 value: 62.74999999999999 - type: mrr_at_5 value: 63.87500000000001 - type: ndcg_at_1 value: 43.375 - type: ndcg_at_10 value: 32.631 - type: ndcg_at_100 value: 36.338 - type: ndcg_at_1000 value: 43.541000000000004 - type: ndcg_at_3 value: 36.746 - type: ndcg_at_5 value: 34.419 - type: precision_at_1 value: 54.50000000000001 - type: precision_at_10 value: 24.825 - type: precision_at_100 value: 7.698 - type: precision_at_1000 value: 1.657 - type: precision_at_3 value: 38.917 - type: precision_at_5 value: 32.35 - type: recall_at_1 value: 7.481 - type: recall_at_10 value: 20.341 - type: recall_at_100 value: 41.778 - type: recall_at_1000 value: 64.82 - type: recall_at_3 value: 12.748000000000001 - type: recall_at_5 value: 15.507000000000001 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.580000000000005 - type: f1 value: 41.5149462395095 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 61.683 - type: map_at_10 value: 73.071 - type: map_at_100 value: 73.327 - type: map_at_1000 value: 73.341 - type: map_at_3 value: 71.446 - type: map_at_5 value: 72.557 - type: mrr_at_1 value: 66.44200000000001 - type: mrr_at_10 value: 77.725 - type: mrr_at_100 value: 77.89399999999999 - type: mrr_at_1000 value: 77.898 - type: mrr_at_3 value: 76.283 - type: mrr_at_5 value: 77.29700000000001 - type: ndcg_at_1 value: 66.44200000000001 - type: ndcg_at_10 value: 78.43 - type: ndcg_at_100 value: 79.462 - type: ndcg_at_1000 value: 79.754 - type: ndcg_at_3 value: 75.53800000000001 - type: ndcg_at_5 value: 77.332 - type: precision_at_1 value: 
66.44200000000001 - type: precision_at_10 value: 9.878 - type: precision_at_100 value: 1.051 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 29.878 - type: precision_at_5 value: 18.953 - type: recall_at_1 value: 61.683 - type: recall_at_10 value: 90.259 - type: recall_at_100 value: 94.633 - type: recall_at_1000 value: 96.60499999999999 - type: recall_at_3 value: 82.502 - type: recall_at_5 value: 86.978 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 17.724 - type: map_at_10 value: 29.487999999999996 - type: map_at_100 value: 31.243 - type: map_at_1000 value: 31.419999999999998 - type: map_at_3 value: 25.612000000000002 - type: map_at_5 value: 27.859 - type: mrr_at_1 value: 35.802 - type: mrr_at_10 value: 44.684000000000005 - type: mrr_at_100 value: 45.578 - type: mrr_at_1000 value: 45.621 - type: mrr_at_3 value: 42.361 - type: mrr_at_5 value: 43.85 - type: ndcg_at_1 value: 35.802 - type: ndcg_at_10 value: 37.009 - type: ndcg_at_100 value: 43.903 - type: ndcg_at_1000 value: 47.019 - type: ndcg_at_3 value: 33.634 - type: ndcg_at_5 value: 34.965 - type: precision_at_1 value: 35.802 - type: precision_at_10 value: 10.386 - type: precision_at_100 value: 1.7309999999999999 - type: precision_at_1000 value: 0.231 - type: precision_at_3 value: 22.84 - type: precision_at_5 value: 17.037 - type: recall_at_1 value: 17.724 - type: recall_at_10 value: 43.708000000000006 - type: recall_at_100 value: 69.902 - type: recall_at_1000 value: 88.51 - type: recall_at_3 value: 30.740000000000002 - type: recall_at_5 value: 36.742000000000004 - task: type: Clustering dataset: type: jinaai/flores_clustering name: MTEB FloresClusteringS2S config: default split: test revision: 480b580487f53a46f881354a8348335d4edbb2de metrics: - type: v_measure value: 39.79120149869612 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 34.801 - type: map_at_10 value: 50.42100000000001 - type: map_at_100 value: 51.254 - type: map_at_1000 value: 51.327999999999996 - type: map_at_3 value: 47.56 - type: map_at_5 value: 49.379 - type: mrr_at_1 value: 69.602 - type: mrr_at_10 value: 76.385 - type: mrr_at_100 value: 76.668 - type: mrr_at_1000 value: 76.683 - type: mrr_at_3 value: 75.102 - type: mrr_at_5 value: 75.949 - type: ndcg_at_1 value: 69.602 - type: ndcg_at_10 value: 59.476 - type: ndcg_at_100 value: 62.527 - type: ndcg_at_1000 value: 64.043 - type: ndcg_at_3 value: 55.155 - type: ndcg_at_5 value: 57.623000000000005 - type: precision_at_1 value: 69.602 - type: precision_at_10 value: 12.292 - type: precision_at_100 value: 1.467 - type: precision_at_1000 value: 0.167 - type: precision_at_3 value: 34.634 - type: precision_at_5 value: 22.728 - type: recall_at_1 value: 34.801 - type: recall_at_10 value: 61.458 - type: recall_at_100 value: 73.363 - type: recall_at_1000 value: 83.43 - type: recall_at_3 value: 51.951 - type: recall_at_5 value: 56.82000000000001 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 67.46079999999999 - type: ap value: 61.81278199159353 - type: f1 value: 67.26505019954826 - task: type: Reranking dataset: type: jinaai/miracl name: MTEB MIRACL config: default split: test revision: d28a029f35c4ff7f616df47b0edf54e6882395e6 metrics: - type: map value: 73.90464144118539 - type: 
mrr value: 82.44674693216022 - task: type: Retrieval dataset: type: jinaai/miracl name: MTEB MIRACLRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.299 - type: map_at_10 value: 70.547 - type: map_at_100 value: 72.394 - type: map_at_1000 value: 72.39999999999999 - type: map_at_3 value: 41.317 - type: map_at_5 value: 53.756 - type: mrr_at_1 value: 72.84 - type: mrr_at_10 value: 82.466 - type: mrr_at_100 value: 82.52199999999999 - type: mrr_at_1000 value: 82.52199999999999 - type: mrr_at_3 value: 80.607 - type: mrr_at_5 value: 82.065 - type: ndcg_at_1 value: 72.994 - type: ndcg_at_10 value: 80.89 - type: ndcg_at_100 value: 83.30199999999999 - type: ndcg_at_1000 value: 83.337 - type: ndcg_at_3 value: 70.357 - type: ndcg_at_5 value: 72.529 - type: precision_at_1 value: 72.994 - type: precision_at_10 value: 43.056 - type: precision_at_100 value: 4.603 - type: precision_at_1000 value: 0.461 - type: precision_at_3 value: 61.626000000000005 - type: precision_at_5 value: 55.525000000000006 - type: recall_at_1 value: 21.299 - type: recall_at_10 value: 93.903 - type: recall_at_100 value: 99.86699999999999 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 46.653 - type: recall_at_5 value: 65.72200000000001 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 90.37163702690378 - type: f1 value: 90.18615216514222 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (es) config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.88992661774515 - type: f1 value: 89.3738963046966 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 71.97218422252622 - type: f1 value: 54.03096570916335 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (es) config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 68.75917278185457 - type: f1 value: 49.144083814705844 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.75991930060525 - type: f1 value: 69.37993796176502 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (es) config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.93006052454606 - type: f1 value: 66.04029135274683 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.81977135171486 - type: f1 value: 74.10477122507747 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (es) config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.23402824478816 - type: f1 value: 71.75572665880296 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: 
test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.189750849969215 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.78357393555938 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.605612998328358 - type: mrr value: 31.595529205695833 - task: type: Retrieval dataset: type: jinaai/mintakaqa name: MTEB MintakaESRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.213 - type: map_at_10 value: 24.079 - type: map_at_100 value: 25.039 - type: map_at_1000 value: 25.142999999999997 - type: map_at_3 value: 21.823 - type: map_at_5 value: 23.069 - type: mrr_at_1 value: 16.213 - type: mrr_at_10 value: 24.079 - type: mrr_at_100 value: 25.039 - type: mrr_at_1000 value: 25.142999999999997 - type: mrr_at_3 value: 21.823 - type: mrr_at_5 value: 23.069 - type: ndcg_at_1 value: 16.213 - type: ndcg_at_10 value: 28.315 - type: ndcg_at_100 value: 33.475 - type: ndcg_at_1000 value: 36.838 - type: ndcg_at_3 value: 23.627000000000002 - type: ndcg_at_5 value: 25.879 - type: precision_at_1 value: 16.213 - type: precision_at_10 value: 4.183 - type: precision_at_100 value: 0.6709999999999999 - type: precision_at_1000 value: 0.095 - type: precision_at_3 value: 9.612 - type: precision_at_5 value: 6.865 - type: recall_at_1 value: 16.213 - type: recall_at_10 value: 41.832 - type: recall_at_100 value: 67.12 - type: recall_at_1000 value: 94.843 - type: recall_at_3 value: 28.837000000000003 - type: recall_at_5 value: 34.323 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 4.692 - type: map_at_10 value: 10.783 - type: map_at_100 value: 13.447999999999999 - type: map_at_1000 value: 14.756 - type: map_at_3 value: 7.646 - type: map_at_5 value: 9.311 - type: mrr_at_1 value: 42.415000000000006 - type: mrr_at_10 value: 50.471 - type: mrr_at_100 value: 51.251999999999995 - type: mrr_at_1000 value: 51.292 - type: mrr_at_3 value: 48.4 - type: mrr_at_5 value: 49.809 - type: ndcg_at_1 value: 40.867 - type: ndcg_at_10 value: 30.303 - type: ndcg_at_100 value: 27.915 - type: ndcg_at_1000 value: 36.734 - type: ndcg_at_3 value: 35.74 - type: ndcg_at_5 value: 33.938 - type: precision_at_1 value: 42.415000000000006 - type: precision_at_10 value: 22.105 - type: precision_at_100 value: 7.173 - type: precision_at_1000 value: 2.007 - type: precision_at_3 value: 33.437 - type: precision_at_5 value: 29.349999999999998 - type: recall_at_1 value: 4.692 - type: recall_at_10 value: 14.798 - type: recall_at_100 value: 28.948 - type: recall_at_1000 value: 59.939 - type: recall_at_3 value: 8.562 - type: recall_at_5 value: 11.818 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 27.572999999999997 - type: map_at_10 value: 42.754 - type: map_at_100 value: 43.8 - type: map_at_1000 value: 43.838 - type: map_at_3 value: 38.157000000000004 - type: map_at_5 value: 40.9 - type: mrr_at_1 value: 31.373 - type: mrr_at_10 value: 45.321 - type: mrr_at_100 value: 46.109 - type: mrr_at_1000 value: 46.135 - type: mrr_at_3 value: 41.483 - type: mrr_at_5 value: 43.76 - type: ndcg_at_1 value: 31.373 - type: ndcg_at_10 
value: 50.7 - type: ndcg_at_100 value: 55.103 - type: ndcg_at_1000 value: 55.955999999999996 - type: ndcg_at_3 value: 42.069 - type: ndcg_at_5 value: 46.595 - type: precision_at_1 value: 31.373 - type: precision_at_10 value: 8.601 - type: precision_at_100 value: 1.11 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 19.399 - type: precision_at_5 value: 14.224 - type: recall_at_1 value: 27.572999999999997 - type: recall_at_10 value: 72.465 - type: recall_at_100 value: 91.474 - type: recall_at_1000 value: 97.78099999999999 - type: recall_at_3 value: 50.087 - type: recall_at_5 value: 60.516000000000005 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 70.525 - type: map_at_10 value: 84.417 - type: map_at_100 value: 85.07000000000001 - type: map_at_1000 value: 85.085 - type: map_at_3 value: 81.45 - type: map_at_5 value: 83.317 - type: mrr_at_1 value: 81.17999999999999 - type: mrr_at_10 value: 87.34100000000001 - type: mrr_at_100 value: 87.461 - type: mrr_at_1000 value: 87.46199999999999 - type: mrr_at_3 value: 86.372 - type: mrr_at_5 value: 87.046 - type: ndcg_at_1 value: 81.17999999999999 - type: ndcg_at_10 value: 88.144 - type: ndcg_at_100 value: 89.424 - type: ndcg_at_1000 value: 89.517 - type: ndcg_at_3 value: 85.282 - type: ndcg_at_5 value: 86.874 - type: precision_at_1 value: 81.17999999999999 - type: precision_at_10 value: 13.385 - type: precision_at_100 value: 1.533 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.29 - type: precision_at_5 value: 24.546 - type: recall_at_1 value: 70.525 - type: recall_at_10 value: 95.22500000000001 - type: recall_at_100 value: 99.572 - type: recall_at_1000 value: 99.98899999999999 - type: recall_at_3 value: 87.035 - type: recall_at_5 value: 91.526 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 48.284384328108736 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 56.02508021518392 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.023000000000001 - type: map_at_10 value: 10.046 - type: map_at_100 value: 11.802999999999999 - type: map_at_1000 value: 12.074 - type: map_at_3 value: 7.071 - type: map_at_5 value: 8.556 - type: mrr_at_1 value: 19.8 - type: mrr_at_10 value: 30.105999999999998 - type: mrr_at_100 value: 31.16 - type: mrr_at_1000 value: 31.224 - type: mrr_at_3 value: 26.633000000000003 - type: mrr_at_5 value: 28.768 - type: ndcg_at_1 value: 19.8 - type: ndcg_at_10 value: 17.358 - type: ndcg_at_100 value: 24.566 - type: ndcg_at_1000 value: 29.653000000000002 - type: ndcg_at_3 value: 16.052 - type: ndcg_at_5 value: 14.325 - type: precision_at_1 value: 19.8 - type: precision_at_10 value: 9.07 - type: precision_at_100 value: 1.955 - type: precision_at_1000 value: 0.318 - type: precision_at_3 value: 14.933 - type: precision_at_5 value: 12.68 - type: recall_at_1 value: 4.023000000000001 - type: recall_at_10 value: 18.398 - type: recall_at_100 value: 39.683 - type: recall_at_1000 value: 64.625 - type: recall_at_3 value: 9.113 - type: recall_at_5 value: 12.873000000000001 - task: type: STS dataset: 
type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 87.90508618312852 - type: cos_sim_spearman value: 83.01323463129205 - type: euclidean_pearson value: 84.35845059002891 - type: euclidean_spearman value: 82.85508559018527 - type: manhattan_pearson value: 84.3682368950498 - type: manhattan_spearman value: 82.8619728517302 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 89.28294535873366 - type: cos_sim_spearman value: 81.61879268131732 - type: euclidean_pearson value: 85.99053604863724 - type: euclidean_spearman value: 80.95176684739084 - type: manhattan_pearson value: 85.98054086663903 - type: manhattan_spearman value: 80.9911070430335 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 86.15898098455258 - type: cos_sim_spearman value: 86.8247985072307 - type: euclidean_pearson value: 86.25342429918649 - type: euclidean_spearman value: 87.13468603023252 - type: manhattan_pearson value: 86.2006134067688 - type: manhattan_spearman value: 87.06135811996896 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.57403998481877 - type: cos_sim_spearman value: 83.55947075172618 - type: euclidean_pearson value: 84.97097562965358 - type: euclidean_spearman value: 83.6287075601467 - type: manhattan_pearson value: 84.87092197104133 - type: manhattan_spearman value: 83.53783891641335 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.14632780204231 - type: cos_sim_spearman value: 88.74903634923868 - type: euclidean_pearson value: 88.03922995855112 - type: euclidean_spearman value: 88.72852190525855 - type: manhattan_pearson value: 87.9694791024271 - type: manhattan_spearman value: 88.66461452107418 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.75989818558652 - type: cos_sim_spearman value: 86.03107893122942 - type: euclidean_pearson value: 85.21908960133018 - type: euclidean_spearman value: 85.93012720153482 - type: manhattan_pearson value: 85.1969170195502 - type: manhattan_spearman value: 85.8975254197784 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 89.16803898789955 - type: cos_sim_spearman value: 88.56139047950525 - type: euclidean_pearson value: 88.09685325747859 - type: euclidean_spearman value: 88.0457609458947 - type: manhattan_pearson value: 88.07054413001431 - type: manhattan_spearman value: 88.10784098889314 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (es-en) config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 86.7160384474547 - type: cos_sim_spearman value: 86.4899235500562 - type: euclidean_pearson value: 85.90854477703468 - type: euclidean_spearman value: 
86.16085009124498 - type: manhattan_pearson value: 85.9249735317884 - type: manhattan_spearman value: 86.25038421339116 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (es-es) config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 89.37914622360788 - type: cos_sim_spearman value: 88.24619159322809 - type: euclidean_pearson value: 89.00538382632769 - type: euclidean_spearman value: 88.44675863524736 - type: manhattan_pearson value: 88.97372120683606 - type: manhattan_spearman value: 88.33509324222129 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 66.22181360203069 - type: cos_sim_spearman value: 65.6218291833768 - type: euclidean_pearson value: 67.14543788822508 - type: euclidean_spearman value: 65.21269939987857 - type: manhattan_pearson value: 67.03304607195636 - type: manhattan_spearman value: 65.18885316423805 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (es) config: es split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 65.71694059677084 - type: cos_sim_spearman value: 67.96591844540954 - type: euclidean_pearson value: 65.6964079162296 - type: euclidean_spearman value: 67.53027948900173 - type: manhattan_pearson value: 65.93545097673741 - type: manhattan_spearman value: 67.7261811805062 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (es-en) config: es-en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 75.43544796375058 - type: cos_sim_spearman value: 78.80462701160789 - type: euclidean_pearson value: 76.19135575163138 - type: euclidean_spearman value: 78.4974732597096 - type: manhattan_pearson value: 76.3254742699264 - type: manhattan_spearman value: 78.51884307690416 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.46805293607684 - type: cos_sim_spearman value: 87.83792784689113 - type: euclidean_pearson value: 87.3872143683234 - type: euclidean_spearman value: 87.61611384542778 - type: manhattan_pearson value: 87.38542672601992 - type: manhattan_spearman value: 87.61423971087297 - task: type: STS dataset: type: PlanTL-GOB-ES/sts-es name: MTEB STSES config: default split: test revision: 0912bb6c9393c76d62a7c5ee81c4c817ff47c9f4 metrics: - type: cos_sim_pearson value: 82.55286866116202 - type: cos_sim_spearman value: 80.22150503320272 - type: euclidean_pearson value: 83.27223445187087 - type: euclidean_spearman value: 80.59078590992925 - type: manhattan_pearson value: 83.23095887013197 - type: manhattan_spearman value: 80.87994285189795 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.29717302265792 - type: mrr value: 94.02156304117088 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 49.9 - type: map_at_10 value: 58.626 - type: map_at_100 value: 59.519999999999996 - type: map_at_1000 value: 59.55200000000001 - type: map_at_3 value: 56.232000000000006 - type: map_at_5 value: 57.833 - type: mrr_at_1 
value: 52.333 - type: mrr_at_10 value: 60.039 - type: mrr_at_100 value: 60.732 - type: mrr_at_1000 value: 60.75899999999999 - type: mrr_at_3 value: 58.278 - type: mrr_at_5 value: 59.428000000000004 - type: ndcg_at_1 value: 52.333 - type: ndcg_at_10 value: 62.67 - type: ndcg_at_100 value: 66.465 - type: ndcg_at_1000 value: 67.425 - type: ndcg_at_3 value: 58.711999999999996 - type: ndcg_at_5 value: 60.958999999999996 - type: precision_at_1 value: 52.333 - type: precision_at_10 value: 8.333 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 22.778000000000002 - type: precision_at_5 value: 15.267 - type: recall_at_1 value: 49.9 - type: recall_at_10 value: 73.394 - type: recall_at_100 value: 90.43299999999999 - type: recall_at_1000 value: 98.167 - type: recall_at_3 value: 63.032999999999994 - type: recall_at_5 value: 68.444 - task: type: Clustering dataset: type: jinaai/spanish_news_clustering name: MTEB SpanishNewsClusteringP2P config: default split: test revision: b5edc3d3d7c12c7b9f883e9da50f6732f3624142 metrics: - type: v_measure value: 48.30543557796266 - task: type: Retrieval dataset: type: jinaai/spanish_passage_retrieval name: MTEB SpanishPassageRetrievalS2P config: default split: test revision: None metrics: - type: map_at_1 value: 14.443 - type: map_at_10 value: 28.736 - type: map_at_100 value: 34.514 - type: map_at_1000 value: 35.004000000000005 - type: map_at_3 value: 20.308 - type: map_at_5 value: 25.404 - type: mrr_at_1 value: 50.29900000000001 - type: mrr_at_10 value: 63.757 - type: mrr_at_100 value: 64.238 - type: mrr_at_1000 value: 64.24600000000001 - type: mrr_at_3 value: 59.480999999999995 - type: mrr_at_5 value: 62.924 - type: ndcg_at_1 value: 50.29900000000001 - type: ndcg_at_10 value: 42.126999999999995 - type: ndcg_at_100 value: 57.208000000000006 - type: ndcg_at_1000 value: 60.646 - type: ndcg_at_3 value: 38.722 - type: ndcg_at_5 value: 40.007999999999996 - type: precision_at_1 value: 50.29900000000001 - type: precision_at_10 value: 19.82 - type: precision_at_100 value: 4.82 - type: precision_at_1000 value: 0.5910000000000001 - type: precision_at_3 value: 31.537 - type: precision_at_5 value: 28.262999999999998 - type: recall_at_1 value: 14.443 - type: recall_at_10 value: 43.885999999999996 - type: recall_at_100 value: 85.231 - type: recall_at_1000 value: 99.07000000000001 - type: recall_at_3 value: 22.486 - type: recall_at_5 value: 33.035 - task: type: Retrieval dataset: type: jinaai/spanish_passage_retrieval name: MTEB SpanishPassageRetrievalS2S config: default split: test revision: None metrics: - type: map_at_1 value: 15.578 - type: map_at_10 value: 52.214000000000006 - type: map_at_100 value: 64.791 - type: map_at_1000 value: 64.791 - type: map_at_3 value: 33.396 - type: map_at_5 value: 41.728 - type: mrr_at_1 value: 73.653 - type: mrr_at_10 value: 85.116 - type: mrr_at_100 value: 85.205 - type: mrr_at_1000 value: 85.205 - type: mrr_at_3 value: 84.631 - type: mrr_at_5 value: 85.05 - type: ndcg_at_1 value: 76.64699999999999 - type: ndcg_at_10 value: 70.38600000000001 - type: ndcg_at_100 value: 82.27600000000001 - type: ndcg_at_1000 value: 82.27600000000001 - type: ndcg_at_3 value: 70.422 - type: ndcg_at_5 value: 69.545 - type: precision_at_1 value: 76.64699999999999 - type: precision_at_10 value: 43.653 - type: precision_at_100 value: 7.718999999999999 - type: precision_at_1000 value: 0.772 - type: precision_at_3 value: 64.671 - type: precision_at_5 value: 56.766000000000005 - type: recall_at_1 
value: 15.578 - type: recall_at_10 value: 67.459 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 36.922 - type: recall_at_5 value: 49.424 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.81683168316832 - type: cos_sim_ap value: 95.61502659412484 - type: cos_sim_f1 value: 90.6813627254509 - type: cos_sim_precision value: 90.86345381526104 - type: cos_sim_recall value: 90.5 - type: dot_accuracy value: 99.8039603960396 - type: dot_ap value: 95.36783483182609 - type: dot_f1 value: 89.90825688073394 - type: dot_precision value: 91.68399168399168 - type: dot_recall value: 88.2 - type: euclidean_accuracy value: 99.81188118811882 - type: euclidean_ap value: 95.51583052324564 - type: euclidean_f1 value: 90.46214355948868 - type: euclidean_precision value: 88.97485493230174 - type: euclidean_recall value: 92.0 - type: manhattan_accuracy value: 99.8079207920792 - type: manhattan_ap value: 95.44030644653718 - type: manhattan_f1 value: 90.37698412698413 - type: manhattan_precision value: 89.66535433070865 - type: manhattan_recall value: 91.10000000000001 - type: max_accuracy value: 99.81683168316832 - type: max_ap value: 95.61502659412484 - type: max_f1 value: 90.6813627254509 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 55.39046705023096 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.57429225651293 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.17622570658746 - type: mrr value: 50.99844293778118 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.97416289382191 - type: cos_sim_spearman value: 29.871890597161432 - type: dot_pearson value: 28.768845892613644 - type: dot_spearman value: 28.872458999448686 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.22599999999999998 - type: map_at_10 value: 1.646 - type: map_at_100 value: 9.491 - type: map_at_1000 value: 23.75 - type: map_at_3 value: 0.588 - type: map_at_5 value: 0.9129999999999999 - type: mrr_at_1 value: 84.0 - type: mrr_at_10 value: 89.889 - type: mrr_at_100 value: 89.889 - type: mrr_at_1000 value: 89.889 - type: mrr_at_3 value: 89.667 - type: mrr_at_5 value: 89.667 - type: ndcg_at_1 value: 75.0 - type: ndcg_at_10 value: 67.368 - type: ndcg_at_100 value: 52.834 - type: ndcg_at_1000 value: 49.144 - type: ndcg_at_3 value: 72.866 - type: ndcg_at_5 value: 70.16 - type: precision_at_1 value: 84.0 - type: precision_at_10 value: 71.8 - type: precision_at_100 value: 54.04 - type: precision_at_1000 value: 21.709999999999997 - type: precision_at_3 value: 77.333 - type: precision_at_5 value: 74.0 - type: recall_at_1 value: 
0.22599999999999998 - type: recall_at_10 value: 1.9029999999999998 - type: recall_at_100 value: 13.012 - type: recall_at_1000 value: 46.105000000000004 - type: recall_at_3 value: 0.63 - type: recall_at_5 value: 1.0030000000000001 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.5 - type: map_at_10 value: 8.193999999999999 - type: map_at_100 value: 14.01 - type: map_at_1000 value: 15.570999999999998 - type: map_at_3 value: 4.361000000000001 - type: map_at_5 value: 5.9270000000000005 - type: mrr_at_1 value: 16.326999999999998 - type: mrr_at_10 value: 33.326 - type: mrr_at_100 value: 34.592 - type: mrr_at_1000 value: 34.592 - type: mrr_at_3 value: 29.252 - type: mrr_at_5 value: 30.680000000000003 - type: ndcg_at_1 value: 15.306000000000001 - type: ndcg_at_10 value: 19.819 - type: ndcg_at_100 value: 33.428000000000004 - type: ndcg_at_1000 value: 45.024 - type: ndcg_at_3 value: 19.667 - type: ndcg_at_5 value: 19.625 - type: precision_at_1 value: 16.326999999999998 - type: precision_at_10 value: 18.367 - type: precision_at_100 value: 7.367 - type: precision_at_1000 value: 1.496 - type: precision_at_3 value: 23.128999999999998 - type: precision_at_5 value: 21.633 - type: recall_at_1 value: 1.5 - type: recall_at_10 value: 14.362 - type: recall_at_100 value: 45.842 - type: recall_at_1000 value: 80.42 - type: recall_at_3 value: 5.99 - type: recall_at_5 value: 8.701 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.04740000000001 - type: ap value: 13.58661943759992 - type: f1 value: 53.727487131754195 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.06395019807584 - type: f1 value: 61.36753664680866 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 40.19881263066229 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.19401561661799 - type: cos_sim_ap value: 71.62462506173092 - type: cos_sim_f1 value: 66.0641327225455 - type: cos_sim_precision value: 62.234662934453 - type: cos_sim_recall value: 70.3957783641161 - type: dot_accuracy value: 84.69333015437802 - type: dot_ap value: 69.83805526490895 - type: dot_f1 value: 64.85446235265817 - type: dot_precision value: 59.59328028293546 - type: dot_recall value: 71.13456464379946 - type: euclidean_accuracy value: 85.38475293556655 - type: euclidean_ap value: 72.05594596250286 - type: euclidean_f1 value: 66.53543307086615 - type: euclidean_precision value: 62.332872291378514 - type: euclidean_recall value: 71.34564643799473 - type: manhattan_accuracy value: 85.3907134767837 - type: manhattan_ap value: 72.04585410650152 - type: manhattan_f1 value: 66.57132642116554 - type: manhattan_precision value: 60.704194740273856 - type: manhattan_recall value: 73.6939313984169 - type: max_accuracy value: 85.3907134767837 - type: max_ap 
value: 72.05594596250286 - type: max_f1 value: 66.57132642116554 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.30414871735165 - type: cos_sim_ap value: 86.4398673359918 - type: cos_sim_f1 value: 78.9243598692186 - type: cos_sim_precision value: 75.47249350101876 - type: cos_sim_recall value: 82.7071142593163 - type: dot_accuracy value: 89.26145845461248 - type: dot_ap value: 86.32172118414802 - type: dot_f1 value: 78.8277467755645 - type: dot_precision value: 75.79418662497335 - type: dot_recall value: 82.11425931629196 - type: euclidean_accuracy value: 89.24205378973105 - type: euclidean_ap value: 86.23988673522649 - type: euclidean_f1 value: 78.67984857951413 - type: euclidean_precision value: 75.2689684269742 - type: euclidean_recall value: 82.41453649522637 - type: manhattan_accuracy value: 89.18189932859859 - type: manhattan_ap value: 86.21003833972824 - type: manhattan_f1 value: 78.70972564850115 - type: manhattan_precision value: 76.485544094145 - type: manhattan_recall value: 81.0671388974438 - type: max_accuracy value: 89.30414871735165 - type: max_ap value: 86.4398673359918 - type: max_f1 value: 78.9243598692186 - task: type: Clustering dataset: type: jinaai/cities_wiki_clustering name: MTEB WikiCitiesClustering config: default split: test revision: ddc9ee9242fa65332597f70e967ecc38b9d734fa metrics: - type: v_measure value: 73.254610626148 - task: type: Retrieval dataset: type: jinaai/xmarket_ml name: MTEB XMarketES config: default split: test revision: 705db869e8107dfe6e34b832af90446e77d813e3 metrics: - type: map_at_1 value: 5.506 - type: map_at_10 value: 11.546 - type: map_at_100 value: 14.299999999999999 - type: map_at_1000 value: 15.146999999999998 - type: map_at_3 value: 8.748000000000001 - type: map_at_5 value: 10.036000000000001 - type: mrr_at_1 value: 17.902 - type: mrr_at_10 value: 25.698999999999998 - type: mrr_at_100 value: 26.634 - type: mrr_at_1000 value: 26.704 - type: mrr_at_3 value: 23.244999999999997 - type: mrr_at_5 value: 24.555 - type: ndcg_at_1 value: 17.902 - type: ndcg_at_10 value: 19.714000000000002 - type: ndcg_at_100 value: 25.363000000000003 - type: ndcg_at_1000 value: 30.903999999999996 - type: ndcg_at_3 value: 17.884 - type: ndcg_at_5 value: 18.462 - type: precision_at_1 value: 17.902 - type: precision_at_10 value: 10.467 - type: precision_at_100 value: 3.9699999999999998 - type: precision_at_1000 value: 1.1320000000000001 - type: precision_at_3 value: 14.387 - type: precision_at_5 value: 12.727 - type: recall_at_1 value: 5.506 - type: recall_at_10 value: 19.997999999999998 - type: recall_at_100 value: 42.947 - type: recall_at_1000 value: 67.333 - type: recall_at_3 value: 11.158 - type: recall_at_5 value: 14.577000000000002 - task: type: Retrieval dataset: type: jinaai/xpqa name: MTEB XPQAESRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.53 - type: map_at_10 value: 58.68600000000001 - type: map_at_100 value: 60.45399999999999 - type: map_at_1000 value: 60.51499999999999 - type: map_at_3 value: 50.356 - type: map_at_5 value: 55.98 - type: mrr_at_1 value: 61.791 - type: mrr_at_10 value: 68.952 - type: mrr_at_100 value: 69.524 - type: mrr_at_1000 value: 69.538 - type: mrr_at_3 value: 67.087 - type: mrr_at_5 value: 68.052 - type: ndcg_at_1 value: 61.791 - type: ndcg_at_10 value: 65.359 - type: ndcg_at_100 value: 
70.95700000000001 - type: ndcg_at_1000 value: 71.881 - type: ndcg_at_3 value: 59.999 - type: ndcg_at_5 value: 61.316 - type: precision_at_1 value: 61.791 - type: precision_at_10 value: 18.184 - type: precision_at_100 value: 2.317 - type: precision_at_1000 value: 0.245 - type: precision_at_3 value: 42.203 - type: precision_at_5 value: 31.374999999999996 - type: recall_at_1 value: 32.53 - type: recall_at_10 value: 73.098 - type: recall_at_100 value: 94.029 - type: recall_at_1000 value: 99.842 - type: recall_at_3 value: 54.525 - type: recall_at_5 value: 63.796 ---

<!-- TODO: add evaluation results here -->

<br><br>

<p align="center"> <img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px"> </p>

<p align="center"> <b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b> </p>

## Quick Start

The easiest way to start using `jina-embeddings-v2-base-es` is to use Jina AI's [Embedding API](https://jina.ai/embeddings/).

## Intended Usage & Model Info

`jina-embeddings-v2-base-es` is a Spanish/English bilingual text **embedding model** supporting **8192 sequence length**. It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence lengths. We have designed it for high performance in mono-lingual & cross-lingual applications and trained it specifically to support mixed Spanish-English input without bias. Additionally, we provide the following embedding models:

`jina-embeddings-v2-base-es` es un modelo (embedding) de texto bilingüe Inglés/Español que admite una longitud de secuencia de 8192. Se basa en la arquitectura BERT (JinaBERT) que incorpora la variante bi-direccional simétrica de [ALiBi](https://arxiv.org/abs/2108.12409) para permitir una mayor longitud de secuencia. Hemos diseñado este modelo para un alto rendimiento en aplicaciones monolingües y bilingües, y está entrenado específicamente para admitir entradas mixtas de español e inglés sin sesgo. Adicionalmente, proporcionamos los siguientes modelos (embeddings):

- [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters.
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters.
- [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): Chinese-English Bilingual embeddings.
- [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): German-English Bilingual embeddings.
- [`jina-embeddings-v2-base-es`](): Spanish-English Bilingual embeddings **(you are here)**.

## Data & Parameters

The data and training details are described in this [technical report](https://arxiv.org/abs/2402.17016).

## Usage

**<details><summary>Please apply mean pooling when integrating the model.</summary>**
<p>

### Why mean pooling?

`mean pooling` takes all token embeddings from the model output and averages them at the sentence/paragraph level. It has proven to be the most effective way to produce high-quality sentence embeddings. We offer an `encode` function to deal with this.
However, if you would like to do it without using the default `encode` function:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['How is the weather today?', 'What is the current weather like today?']

tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-base-es')
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-es', trust_remote_code=True)

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
```
</p>
</details>

You can use Jina Embedding models directly from the `transformers` package:

```python
!pip install transformers
from transformers import AutoModel
from numpy.linalg import norm

cos_sim = lambda a,b: (a @ b.T) / (norm(a)*norm(b))
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-es', trust_remote_code=True) # trust_remote_code is needed to use the encode method
embeddings = model.encode(['How is the weather today?', '¿Qué tiempo hace hoy?'])
print(cos_sim(embeddings[0], embeddings[1]))
```

If you only want to handle shorter sequences, such as 2k, pass the `max_length` parameter to the `encode` function:

```python
embeddings = model.encode(
    ['Very long ... document'],
    max_length=2048
)
```

Or you can use the model with the `sentence-transformers` package:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jinaai/jina-embeddings-v2-base-es", trust_remote_code=True)
embeddings = model.encode(['How is the weather today?', '¿Qué tiempo hace hoy?'])
print(util.cos_sim(embeddings[0], embeddings[1]))
```

And if you only want to handle shorter sequences, such as 2k, you can set `model.max_seq_length`:

```python
model.max_seq_length = 2048
```

## Alternatives to Transformers and Sentence Transformers

1. _Managed SaaS_: Get started with a free key on Jina AI's [Embedding API](https://jina.ai/embeddings/).
2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on [AWS Sagemaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-stch2ludm6vgy).

## Use Jina Embeddings for RAG

According to the latest blog post from [LlamaIndex](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83),

> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.

<img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px">

## Plans

1. Bilingual embedding models supporting more European & Asian languages, including French, Italian and Japanese.
2. Multimodal embedding models to enable multimodal RAG applications.
3. High-performance rerankers.

## Contact

Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
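To make the cross-lingual claim above concrete, here is a small, self-contained ranking sketch. It only uses the `encode` method documented in the Usage section; the example query and passages are made up, and the printed scores are purely illustrative.

```python
import numpy as np
from transformers import AutoModel

# Load the model as shown above; trust_remote_code is needed for the encode method.
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-es', trust_remote_code=True)

query = "What will the weather be like tomorrow?"
passages = [
    "Mañana se esperan lluvias y un descenso de las temperaturas.",   # weather (relevant)
    "La receta tradicional de la paella valenciana lleva arroz.",     # cooking (irrelevant)
    "El partido de fútbol se aplazó por motivos de seguridad.",       # sports (irrelevant)
]

query_vec = model.encode([query])[0]
passage_vecs = model.encode(passages)

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank the Spanish passages against the English query.
ranked = sorted(zip(passages, (cos_sim(query_vec, p) for p in passage_vecs)),
                key=lambda pair: pair[1], reverse=True)
for passage, score in ranked:
    print(f"{score:.3f}  {passage}")
```

The weather passage should come out on top; if it does not for your inputs, double-check that mean pooling (via `encode`) is being applied as described above.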
## Citation If you find Jina Embeddings useful in your research, please cite the following paper: ``` @article{mohr2024multi, title={Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings}, author={Mohr, Isabelle and Krimmel, Markus and Sturua, Saba and Akram, Mohammad Kalim and Koukounas, Andreas and G{\"u}nther, Michael and Mastrapas, Georgios and Ravishankar, Vinit and Mart{\'\i}nez, Joan Fontanals and Wang, Feng and others}, journal={arXiv preprint arXiv:2402.17016}, year={2024} } ```
QuantFactory/Llama-3-Taiwan-8B-Instruct-DPO-GGUF
QuantFactory
"2024-07-02T05:52:20Z"
9,611
0
null
[ "gguf", "region:us" ]
null
"2024-07-02T05:01:12Z"
Entry not found
TheBloke/Mixtral-8x7B-v0.1-GGUF
TheBloke
"2023-12-14T14:30:53Z"
9,596
416
transformers
[ "transformers", "gguf", "mixtral", "fr", "it", "de", "es", "en", "base_model:mistralai/Mixtral-8x7B-v0.1", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
"2023-12-11T13:23:32Z"
--- base_model: mistralai/Mixtral-8x7B-v0.1 inference: false language: - fr - it - de - es - en license: apache-2.0 model_creator: Mistral AI_ model_name: Mixtral 8X7B v0.1 model_type: mixtral prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mixtral 8X7B v0.1 - GGUF - Model creator: [Mistral AI_](https://huggingface.co/mistralai) - Original model: [Mixtral 8X7B v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) <!-- description start --> ## Description This repo contains GGUF format model files for [Mistral AI_'s Mixtral 8X7B v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. ### Mixtral GGUF Support for Mixtral was merged into Llama.cpp on December 13th. These Mixtral GGUFs are known to work in: * llama.cpp as of December 13th * KoboldCpp 1.52 as later * LM Studio 0.2.9 and later * llama-cpp-python 0.2.23 and later Other clients/libraries, not listed above, may not yet work. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/mixtral-8x7b-v0.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF) * [Mistral AI_'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: None ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These Mixtral GGUFs are compatible with llama.cpp from December 13th onwards. Other clients/libraries may not work yet. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. 
This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mixtral-8x7b-v0.1.Q2_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q2_K.gguf) | Q2_K | 2 | 15.64 GB| 18.14 GB | smallest, significant quality loss - not recommended for most purposes | | [mixtral-8x7b-v0.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 20.36 GB| 22.86 GB | very small, high quality loss | | [mixtral-8x7b-v0.1.Q4_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q4_0.gguf) | Q4_0 | 4 | 26.44 GB| 28.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mixtral-8x7b-v0.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 26.44 GB| 28.94 GB | medium, balanced quality - recommended | | [mixtral-8x7b-v0.1.Q5_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q5_0.gguf) | Q5_0 | 5 | 32.23 GB| 34.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mixtral-8x7b-v0.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 32.23 GB| 34.73 GB | large, very low quality loss - recommended | | [mixtral-8x7b-v0.1.Q6_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q6_K.gguf) | Q6_K | 6 | 38.38 GB| 40.88 GB | very large, extremely low quality loss | | [mixtral-8x7b-v0.1.Q8_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q8_0.gguf) | Q8_0 | 8 | 49.62 GB| 52.12 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Mixtral-8x7B-v0.1-GGUF and below it, a specific filename to download, such as: mixtral-8x7b-v0.1.Q4_K_M.gguf. 
Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Mixtral-8x7B-v0.1-GGUF mixtral-8x7b-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Mixtral-8x7B-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mixtral-8x7B-v0.1-GGUF mixtral-8x7b-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m mixtral-8x7b-v0.1.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Note that text-generation-webui may not yet be compatible with Mixtral GGUFs. Please check compatibility first. Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) version 0.2.23 and later. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). 
#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python

# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python

# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python

# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./mixtral-8x7b-v0.1.Q4_K_M.gguf",  # Download the model file first
  n_ctx=2048,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "{prompt}",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True  # Whether to echo the prompt
)

# Chat Completion API
llm = Llama(model_path="./mixtral-8x7b-v0.1.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {"role": "user", "content": "Write a story about llamas."}
    ]
)
```

## How to use with LangChain

Here is a guide on using llama-cpp-python with LangChain (a rough, version-dependent sketch is also appended at the end of this card):

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Mistral AI_'s Mixtral 8X7B v0.1 # Model Card for Mixtral-8x7B The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mistral-8x7B outperforms Llama 2 70B on most benchmarks we tested. For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/). ## Warning This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%http://2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%http://2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that model cannot (yet) be instantiated with HF. ## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) text = "Hello my name is" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` By default, transformers will load the model in full precision. 
Therefore you might be interested to further reduce down the memory requirements to run the model through the optimizations we offer in HF ecosystem: ### In half-precision Note `float16` precision only works on GPU devices <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Lower precision using (8-bit & 4-bit) using `bitsandbytes` <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Load the model with Flash Attention 2 <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ## Notice Mixtral-8x7B is a pretrained base model and therefore does not have any moderation mechanisms. # The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. <!-- original-model-card end -->
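As a supplement to the "How to use with LangChain" section of the GGUF card above, here is a rough sketch of wiring one of these GGUF files into LangChain via llama-cpp-python. Treat it as an illustration only: the import path has moved between LangChain releases (`langchain.llms` vs. `langchain_community.llms`), and the parameter values simply mirror the example llama.cpp command earlier in the card.

```python
# Rough sketch; adjust the import to your installed LangChain version.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./mixtral-8x7b-v0.1.Q4_K_M.gguf",  # any of the GGUF files listed above
    n_ctx=2048,        # context length, as in the example llama.cpp command
    n_gpu_layers=35,   # layers to offload to GPU; set to 0 for CPU-only
    temperature=0.7,
)

print(llm.invoke("Building a website can be done in 10 simple steps:\nStep 1:"))
```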
bartowski/NuExtract-GGUF
bartowski
"2024-06-26T04:09:06Z"
9,591
1
null
[ "gguf", "text-generation", "en", "license:mit", "region:us" ]
text-generation
"2024-06-26T03:58:50Z"
--- license: mit language: - en quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of NuExtract Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3197">b3197</a> for quantization. Original model: https://huggingface.co/numind/NuExtract All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) ## Prompt format ``` <s><|user|> {prompt}<|end|><|assistant|><|end|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [NuExtract-Q8_0_L.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q8_1.gguf) | Q8_0_L | 4.24GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Extremely high quality, generally unneeded but max available quant. | | [NuExtract-Q8_0.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q8_0.gguf) | Q8_0 | 4.06GB | Extremely high quality, generally unneeded but max available quant. | | [NuExtract-Q6_K_L.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q6_K_L.gguf) | Q6_K_L | 3.36GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Very high quality, near perfect, *recommended*. | | [NuExtract-Q6_K.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q6_K.gguf) | Q6_K | 3.13GB | Very high quality, near perfect, *recommended*. | | [NuExtract-Q5_K_L.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q5_K_L.gguf) | Q5_K_L | 3.06GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. High quality, *recommended*. | | [NuExtract-Q5_K_M.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q5_K_M.gguf) | Q5_K_M | 2.81GB | High quality, *recommended*. | | [NuExtract-Q5_K_S.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q5_K_S.gguf) | Q5_K_S | 2.64GB | High quality, *recommended*. | | [NuExtract-Q4_K_L.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q4_K_L.gguf) | Q4_K_L | 2.65GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Good quality, uses about 4.83 bits per weight, *recommended*. | | [NuExtract-Q4_K_M.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q4_K_M.gguf) | Q4_K_M | 2.39GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [NuExtract-Q4_K_S.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q4_K_S.gguf) | Q4_K_S | 2.18GB | Slightly lower quality with more space savings, *recommended*. | | [NuExtract-IQ4_XS.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-IQ4_XS.gguf) | IQ4_XS | 2.05GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [NuExtract-Q3_K_XL.gguf](https://huggingface.co/bartowski/NuExtract-GGUF//main/NuExtract-Q3_K_XL.gguf) | Q3_K_XL | | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Lower quality but usable, good for low RAM availability. 
| | [NuExtract-Q3_K_L.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q3_K_L.gguf) | Q3_K_L | 2.08GB | Lower quality but usable, good for low RAM availability. | | [NuExtract-Q3_K_M.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q3_K_M.gguf) | Q3_K_M | 1.95GB | Even lower quality. | | [NuExtract-IQ3_M.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-IQ3_M.gguf) | IQ3_M | 1.85GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [NuExtract-Q3_K_S.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q3_K_S.gguf) | Q3_K_S | 1.68GB | Low quality, not recommended. | | [NuExtract-IQ3_XS.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-IQ3_XS.gguf) | IQ3_XS | 1.62GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [NuExtract-IQ3_XXS.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-IQ3_XXS.gguf) | IQ3_XXS | 1.51GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [NuExtract-Q2_K.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-Q2_K.gguf) | Q2_K | 1.41GB | Very low quality but surprisingly usable. | | [NuExtract-IQ2_M.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-IQ2_M.gguf) | IQ2_M | 1.31GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [NuExtract-IQ2_S.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-IQ2_S.gguf) | IQ2_S | 1.21GB | Very low quality, uses SOTA techniques to be usable. | | [NuExtract-IQ2_XS.gguf](https://huggingface.co/bartowski/NuExtract-GGUF/blob/main/NuExtract-IQ2_XS.gguf) | IQ2_XS | 1.15GB | Very low quality, uses SOTA techniques to be usable. | ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/NuExtract-GGUF --include "NuExtract-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/NuExtract-GGUF --include "NuExtract-Q8_0.gguf/*" --local-dir NuExtract-Q8_0 ``` You can either specify a new local-dir (NuExtract-Q8_0) or download them all in place (./) ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. 
If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which also runs on AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
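As a rough illustration of the prompt format shown above, here is how one of the quantised files from the table could be driven with llama-cpp-python. The extraction instruction is only a placeholder — the exact template/text input schema NuExtract expects is defined on the original numind/NuExtract card — and depending on your llama.cpp build the leading `<s>` BOS token may be added automatically, in which case it should be dropped from the prompt string.

```python
from llama_cpp import Llama

# Load one of the files listed in the table above (Q4_K_M as an example).
llm = Llama(model_path="./NuExtract-Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=-1)

# Placeholder instruction; see the original numind/NuExtract card for the real input schema.
user_content = (
    "Extract the product name and price from the text as JSON.\n"
    "Text: The new UltraWidget 3000 is on sale for $49.99 until Friday."
)

# Wrap the request in the prompt format documented above; generation stops at <|end|>.
prompt = f"<s><|user|> {user_content}<|end|><|assistant|>"

output = llm(prompt, max_tokens=256, stop=["<|end|>"])
print(output["choices"][0]["text"])
```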
mradermacher/L-MChat-7b-GGUF
mradermacher
"2024-06-27T16:47:20Z"
9,589
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "Nexusflow/Starling-LM-7B-beta", "FuseAI/FuseChat-7B-VaRM", "en", "base_model:Artples/L-MChat-7b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-27T06:26:42Z"
--- base_model: Artples/L-MChat-7b language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - Nexusflow/Starling-LM-7B-beta - FuseAI/FuseChat-7B-VaRM --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Artples/L-MChat-7b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L-MChat-7b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L-MChat-7b-GGUF/resolve/main/L-MChat-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
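For readers who have not used GGUF files before, here is a minimal, hedged sketch of fetching a single quant from this repo and loading it with llama-cpp-python. The filename is taken from the Q4_K_M row of the table above (swap in whichever quant fits your hardware); the prompt is just a placeholder, so check the base model's card for its chat template before conversational use.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download only the Q4_K_M file listed in the table above (about 4.5 GB).
model_path = hf_hub_download(
    repo_id="mradermacher/L-MChat-7b-GGUF",
    filename="L-MChat-7b.Q4_K_M.gguf",
)

# Load with llama-cpp-python; set n_gpu_layers=0 for CPU-only inference.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Plain text completion as a smoke test.
print(llm("The three laws of robotics are", max_tokens=128)["choices"][0]["text"])
```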
nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large
nreimers
"2021-06-20T19:03:16Z"
9,587
16
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
# Multilingual MiniLMv2 This is a MiniLMv2 model from: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm/tree/master/minilm)
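The card above gives no usage snippet. Under the assumption that the checkpoint loads like any XLM-R-style encoder (as the repo's `xlm-roberta` / `transformers` tags suggest), a minimal loading sketch might look like this; the pooling choice at the end is an illustrative assumption, not a recommendation from the authors.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("MiniLMv2 is a distilled multilingual encoder.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 384)

# Simple mean pooling over tokens as one possible sentence representation.
sentence_vec = hidden.mean(dim=1)
print(sentence_vec.shape)  # torch.Size([1, 384])
```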
mradermacher/Prox-Llama-3-8B-GGUF
mradermacher
"2024-06-21T09:49:49Z"
9,583
0
transformers
[ "transformers", "gguf", "code", "cybersecurity", "penetration testing", "hacking", "uncensored", "unsloth", "en", "base_model:openvoid/Prox-Llama-3-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-20T16:03:08Z"
--- base_model: openvoid/Prox-Llama-3-8B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - code - cybersecurity - penetration testing - hacking - code - uncensored - unsloth --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/openvoid/Prox-Llama-3-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF/resolve/main/Prox-Llama-3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
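The usage section above mentions concatenating multi-part files; for repositories where a quant is split into several parts, reassembly is plain byte-level concatenation of the parts in order. The sketch below is illustrative only (the part names are hypothetical placeholders, and this particular repository's quants are single files); prefer the instructions in the linked README if they differ.

```python
# Sketch: stream-concatenate split GGUF parts into a single file.
# The glob pattern is a hypothetical example; use the real part names from the
# repository and make sure they are joined in part-number order.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("some-large-model.Q8_0.gguf.part*"))  # hypothetical names
with open("some-large-model.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)  # stream copy, avoids loading GBs into RAM
```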
mradermacher/mexa-0.0.2-GGUF
mradermacher
"2024-06-20T23:15:18Z"
9,582
0
transformers
[ "transformers", "gguf", "en", "base_model:SiguienteGlobal/mexa-0.0.2", "endpoints_compatible", "region:us" ]
null
"2024-06-20T14:16:47Z"
--- base_model: SiguienteGlobal/mexa-0.0.2 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/SiguienteGlobal/mexa-0.0.2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/mexa-0.0.2-GGUF/resolve/main/mexa-0.0.2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
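For quick experiments without a llama.cpp runtime, recent `transformers` releases can also dequantize a GGUF file back into a regular PyTorch model. The sketch below is an assumption-laden illustration, not part of the original card: it presumes the underlying architecture is one the transformers GGUF loader supports and that the quant filename picked from the table above is the one you want.

```python
# Sketch: load a GGUF quant via transformers' GGUF dequantization path.
# Assumes a recent transformers release with GGUF support and a supported
# architecture; otherwise fall back to llama.cpp as described in the Usage section.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mradermacher/mexa-0.0.2-GGUF"
gguf_file = "mexa-0.0.2.Q4_K_M.gguf"  # "fast, recommended" in the table above

tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```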
timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k
timm
"2024-02-10T23:42:02Z"
9,579
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:1611.05431", "arxiv:1904.11486", "arxiv:1512.03385", "arxiv:1709.01507", "arxiv:1812.01187", "license:apache-2.0", "region:us" ]
image-classification
"2023-04-05T19:42:33Z"
--- license: apache-2.0 library_name: timm tags: - image-classification - timm --- # Model card for seresnextaa101d_32x8d.sw_in12k_ft_in1k A SE-ResNeXt-D (Rectangle-2 Anti-Aliasing) image classification model with Squeeze-and-Excitation channel attention. This model features: * ReLU activations * 3-layer stem of 3x3 convolutions with pooling * 2x2 average pool + 1x1 convolution shortcut downsample * grouped 3x3 bottleneck convolutions * Squeeze-and-Excitation channel attention Pretrained on ImageNet-12k and fine-tuned on ImageNet-1k by Ross Wightman in `timm` using recipe template described below. Recipe details: * Based on Swin Transformer train / pretrain recipe with modifications (related to both DeiT and ConvNeXt recipes) * AdamW optimizer, gradient clipping, EMA weight averaging * Cosine LR schedule with warmup ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 93.6 - GMACs: 17.2 - Activations (M): 34.2 - Image size: train = 224 x 224, test = 288 x 288 - **Papers:** - Aggregated Residual Transformations for Deep Neural Networks: https://arxiv.org/abs/1611.05431 - Making Convolutional Networks Shift-Invariant Again: https://arxiv.org/abs/1904.11486 - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385 - Squeeze-and-Excitation Networks: https://arxiv.org/abs/1709.01507 - Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187 - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('seresnextaa101d_32x8d.sw_in12k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'seresnextaa101d_32x8d.sw_in12k_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 256, 56, 56]) # torch.Size([1, 512, 28, 28]) # torch.Size([1, 1024, 14, 14]) # torch.Size([1, 2048, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'seresnextaa101d_32x8d.sw_in12k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get 
model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). |model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | 
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | 
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | 
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | 
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | 
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | |[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 
|79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | 
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | |[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | 
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{Xie2016, title={Aggregated Residual Transformations for Deep Neural Networks}, author={Saining Xie and Ross Girshick and Piotr Dollár and Zhuowen Tu and Kaiming He}, journal={arXiv 
preprint arXiv:1611.05431}, year={2016} } ``` ```bibtex @inproceedings{zhang2019shiftinvar, title={Making Convolutional Networks Shift-Invariant Again}, author={Zhang, Richard}, booktitle={ICML}, year={2019} } ``` ```bibtex @article{He2015, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ``` ```bibtex @inproceedings{hu2018senet, title={Squeeze-and-Excitation Networks}, author={Jie Hu and Li Shen and Gang Sun}, booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, year={2018} } ``` ```bibtex @article{He2018BagOT, title={Bag of Tricks for Image Classification with Convolutional Neural Networks}, author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li}, journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2018}, pages={558-567} } ```
infgrad/stella-base-en-v2
infgrad
"2024-04-06T02:49:06Z"
9,578
12
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "arxiv:1612.00796", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
feature-extraction
"2023-10-19T06:14:31Z"
--- tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb model-index: - name: stella-base-en-v2 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.19402985074628 - type: ap value: 40.43267503017359 - type: f1 value: 71.15585210518594 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.256675 - type: ap value: 90.00824833079179 - type: f1 value: 93.2473146151734 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 49.612 - type: f1 value: 48.530785631574304 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 37.411 - type: map_at_10 value: 52.673 - type: map_at_100 value: 53.410999999999994 - type: map_at_1000 value: 53.415 - type: map_at_3 value: 48.495 - type: map_at_5 value: 51.183 - type: mrr_at_1 value: 37.838 - type: mrr_at_10 value: 52.844 - type: mrr_at_100 value: 53.581999999999994 - type: mrr_at_1000 value: 53.586 - type: mrr_at_3 value: 48.672 - type: mrr_at_5 value: 51.272 - type: ndcg_at_1 value: 37.411 - type: ndcg_at_10 value: 60.626999999999995 - type: ndcg_at_100 value: 63.675000000000004 - type: ndcg_at_1000 value: 63.776999999999994 - type: ndcg_at_3 value: 52.148 - type: ndcg_at_5 value: 57.001999999999995 - type: precision_at_1 value: 37.411 - type: precision_at_10 value: 8.578 - type: precision_at_100 value: 0.989 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.91 - type: precision_at_5 value: 14.908 - type: recall_at_1 value: 37.411 - type: recall_at_10 value: 85.775 - type: recall_at_100 value: 98.86200000000001 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 62.731 - type: recall_at_5 value: 74.53800000000001 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.24219029437865 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 40.474604844291726 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 62.720542706366054 - type: mrr value: 75.59633733456448 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 86.31345008397868 - type: cos_sim_spearman value: 85.94292212320399 - type: euclidean_pearson value: 85.03974302774525 - type: euclidean_spearman value: 85.88087251659051 - type: manhattan_pearson value: 84.91900996712951 - type: manhattan_spearman value: 85.96701905781116 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test 
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.72727272727273 - type: f1 value: 84.29572512364581 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.55532460397536 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 35.91195973591251 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.822 - type: map_at_10 value: 44.139 - type: map_at_100 value: 45.786 - type: map_at_1000 value: 45.906000000000006 - type: map_at_3 value: 40.637 - type: map_at_5 value: 42.575 - type: mrr_at_1 value: 41.059 - type: mrr_at_10 value: 50.751000000000005 - type: mrr_at_100 value: 51.548 - type: mrr_at_1000 value: 51.583999999999996 - type: mrr_at_3 value: 48.236000000000004 - type: mrr_at_5 value: 49.838 - type: ndcg_at_1 value: 41.059 - type: ndcg_at_10 value: 50.573 - type: ndcg_at_100 value: 56.25 - type: ndcg_at_1000 value: 58.004 - type: ndcg_at_3 value: 45.995000000000005 - type: ndcg_at_5 value: 48.18 - type: precision_at_1 value: 41.059 - type: precision_at_10 value: 9.757 - type: precision_at_100 value: 1.609 - type: precision_at_1000 value: 0.20600000000000002 - type: precision_at_3 value: 22.222 - type: precision_at_5 value: 16.023 - type: recall_at_1 value: 32.822 - type: recall_at_10 value: 61.794000000000004 - type: recall_at_100 value: 85.64699999999999 - type: recall_at_1000 value: 96.836 - type: recall_at_3 value: 47.999 - type: recall_at_5 value: 54.376999999999995 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.579 - type: map_at_10 value: 39.787 - type: map_at_100 value: 40.976 - type: map_at_1000 value: 41.108 - type: map_at_3 value: 36.819 - type: map_at_5 value: 38.437 - type: mrr_at_1 value: 37.516 - type: mrr_at_10 value: 45.822 - type: mrr_at_100 value: 46.454 - type: mrr_at_1000 value: 46.495999999999995 - type: mrr_at_3 value: 43.556 - type: mrr_at_5 value: 44.814 - type: ndcg_at_1 value: 37.516 - type: ndcg_at_10 value: 45.5 - type: ndcg_at_100 value: 49.707 - type: ndcg_at_1000 value: 51.842 - type: ndcg_at_3 value: 41.369 - type: ndcg_at_5 value: 43.161 - type: precision_at_1 value: 37.516 - type: precision_at_10 value: 8.713 - type: precision_at_100 value: 1.38 - type: precision_at_1000 value: 0.188 - type: precision_at_3 value: 20.233999999999998 - type: precision_at_5 value: 14.280000000000001 - type: recall_at_1 value: 29.579 - type: recall_at_10 value: 55.458 - type: recall_at_100 value: 73.49799999999999 - type: recall_at_1000 value: 87.08200000000001 - type: recall_at_3 value: 42.858000000000004 - type: recall_at_5 value: 48.215 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 40.489999999999995 - type: map_at_10 value: 53.313 - type: map_at_100 value: 54.290000000000006 - type: map_at_1000 value: 54.346000000000004 - type: map_at_3 value: 49.983 - type: map_at_5 value: 51.867 - type: mrr_at_1 value: 46.27 - type: mrr_at_10 value: 56.660999999999994 - 
type: mrr_at_100 value: 57.274 - type: mrr_at_1000 value: 57.301 - type: mrr_at_3 value: 54.138 - type: mrr_at_5 value: 55.623999999999995 - type: ndcg_at_1 value: 46.27 - type: ndcg_at_10 value: 59.192 - type: ndcg_at_100 value: 63.026 - type: ndcg_at_1000 value: 64.079 - type: ndcg_at_3 value: 53.656000000000006 - type: ndcg_at_5 value: 56.387 - type: precision_at_1 value: 46.27 - type: precision_at_10 value: 9.511 - type: precision_at_100 value: 1.23 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 24.096 - type: precision_at_5 value: 16.476 - type: recall_at_1 value: 40.489999999999995 - type: recall_at_10 value: 73.148 - type: recall_at_100 value: 89.723 - type: recall_at_1000 value: 97.073 - type: recall_at_3 value: 58.363 - type: recall_at_5 value: 65.083 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.197 - type: map_at_10 value: 35.135 - type: map_at_100 value: 36.14 - type: map_at_1000 value: 36.216 - type: map_at_3 value: 32.358 - type: map_at_5 value: 33.814 - type: mrr_at_1 value: 28.475 - type: mrr_at_10 value: 37.096000000000004 - type: mrr_at_100 value: 38.006 - type: mrr_at_1000 value: 38.06 - type: mrr_at_3 value: 34.52 - type: mrr_at_5 value: 35.994 - type: ndcg_at_1 value: 28.475 - type: ndcg_at_10 value: 40.263 - type: ndcg_at_100 value: 45.327 - type: ndcg_at_1000 value: 47.225 - type: ndcg_at_3 value: 34.882000000000005 - type: ndcg_at_5 value: 37.347 - type: precision_at_1 value: 28.475 - type: precision_at_10 value: 6.249 - type: precision_at_100 value: 0.919 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 14.689 - type: precision_at_5 value: 10.237 - type: recall_at_1 value: 26.197 - type: recall_at_10 value: 54.17999999999999 - type: recall_at_100 value: 77.768 - type: recall_at_1000 value: 91.932 - type: recall_at_3 value: 39.804 - type: recall_at_5 value: 45.660000000000004 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.683 - type: map_at_10 value: 25.013999999999996 - type: map_at_100 value: 26.411 - type: map_at_1000 value: 26.531 - type: map_at_3 value: 22.357 - type: map_at_5 value: 23.982999999999997 - type: mrr_at_1 value: 20.896 - type: mrr_at_10 value: 29.758000000000003 - type: mrr_at_100 value: 30.895 - type: mrr_at_1000 value: 30.964999999999996 - type: mrr_at_3 value: 27.177 - type: mrr_at_5 value: 28.799999999999997 - type: ndcg_at_1 value: 20.896 - type: ndcg_at_10 value: 30.294999999999998 - type: ndcg_at_100 value: 36.68 - type: ndcg_at_1000 value: 39.519 - type: ndcg_at_3 value: 25.480999999999998 - type: ndcg_at_5 value: 28.027 - type: precision_at_1 value: 20.896 - type: precision_at_10 value: 5.56 - type: precision_at_100 value: 1.006 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 12.231 - type: precision_at_5 value: 9.104 - type: recall_at_1 value: 16.683 - type: recall_at_10 value: 41.807 - type: recall_at_100 value: 69.219 - type: recall_at_1000 value: 89.178 - type: recall_at_3 value: 28.772 - type: recall_at_5 value: 35.167 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.653000000000002 - type: map_at_10 value: 41.21 - type: map_at_100 value: 42.543 - type: map_at_1000 value: 
42.657000000000004 - type: map_at_3 value: 38.094 - type: map_at_5 value: 39.966 - type: mrr_at_1 value: 37.824999999999996 - type: mrr_at_10 value: 47.087 - type: mrr_at_100 value: 47.959 - type: mrr_at_1000 value: 48.003 - type: mrr_at_3 value: 45.043 - type: mrr_at_5 value: 46.352 - type: ndcg_at_1 value: 37.824999999999996 - type: ndcg_at_10 value: 47.158 - type: ndcg_at_100 value: 52.65 - type: ndcg_at_1000 value: 54.644999999999996 - type: ndcg_at_3 value: 42.632999999999996 - type: ndcg_at_5 value: 44.994 - type: precision_at_1 value: 37.824999999999996 - type: precision_at_10 value: 8.498999999999999 - type: precision_at_100 value: 1.308 - type: precision_at_1000 value: 0.166 - type: precision_at_3 value: 20.308 - type: precision_at_5 value: 14.283000000000001 - type: recall_at_1 value: 30.653000000000002 - type: recall_at_10 value: 58.826 - type: recall_at_100 value: 81.94 - type: recall_at_1000 value: 94.71000000000001 - type: recall_at_3 value: 45.965 - type: recall_at_5 value: 52.294 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.71 - type: map_at_10 value: 36.001 - type: map_at_100 value: 37.416 - type: map_at_1000 value: 37.522 - type: map_at_3 value: 32.841 - type: map_at_5 value: 34.515 - type: mrr_at_1 value: 32.647999999999996 - type: mrr_at_10 value: 41.43 - type: mrr_at_100 value: 42.433 - type: mrr_at_1000 value: 42.482 - type: mrr_at_3 value: 39.117000000000004 - type: mrr_at_5 value: 40.35 - type: ndcg_at_1 value: 32.647999999999996 - type: ndcg_at_10 value: 41.629 - type: ndcg_at_100 value: 47.707 - type: ndcg_at_1000 value: 49.913000000000004 - type: ndcg_at_3 value: 36.598000000000006 - type: ndcg_at_5 value: 38.696000000000005 - type: precision_at_1 value: 32.647999999999996 - type: precision_at_10 value: 7.704999999999999 - type: precision_at_100 value: 1.242 - type: precision_at_1000 value: 0.16 - type: precision_at_3 value: 17.314 - type: precision_at_5 value: 12.374 - type: recall_at_1 value: 26.71 - type: recall_at_10 value: 52.898 - type: recall_at_100 value: 79.08 - type: recall_at_1000 value: 93.94 - type: recall_at_3 value: 38.731 - type: recall_at_5 value: 44.433 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.510999999999996 - type: map_at_10 value: 35.755333333333326 - type: map_at_100 value: 36.97525 - type: map_at_1000 value: 37.08741666666667 - type: map_at_3 value: 32.921 - type: map_at_5 value: 34.45041666666667 - type: mrr_at_1 value: 31.578416666666666 - type: mrr_at_10 value: 40.06066666666667 - type: mrr_at_100 value: 40.93350000000001 - type: mrr_at_1000 value: 40.98716666666667 - type: mrr_at_3 value: 37.710499999999996 - type: mrr_at_5 value: 39.033249999999995 - type: ndcg_at_1 value: 31.578416666666666 - type: ndcg_at_10 value: 41.138666666666666 - type: ndcg_at_100 value: 46.37291666666666 - type: ndcg_at_1000 value: 48.587500000000006 - type: ndcg_at_3 value: 36.397083333333335 - type: ndcg_at_5 value: 38.539 - type: precision_at_1 value: 31.578416666666666 - type: precision_at_10 value: 7.221583333333332 - type: precision_at_100 value: 1.1581666666666668 - type: precision_at_1000 value: 0.15416666666666667 - type: precision_at_3 value: 16.758 - type: precision_at_5 value: 11.830916666666665 - type: recall_at_1 value: 26.510999999999996 - type: recall_at_10 value: 52.7825 - type: 
recall_at_100 value: 75.79675 - type: recall_at_1000 value: 91.10483333333335 - type: recall_at_3 value: 39.48233333333334 - type: recall_at_5 value: 45.07116666666667 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.564 - type: map_at_10 value: 31.235000000000003 - type: map_at_100 value: 32.124 - type: map_at_1000 value: 32.216 - type: map_at_3 value: 29.330000000000002 - type: map_at_5 value: 30.379 - type: mrr_at_1 value: 27.761000000000003 - type: mrr_at_10 value: 34.093 - type: mrr_at_100 value: 34.885 - type: mrr_at_1000 value: 34.957 - type: mrr_at_3 value: 32.388 - type: mrr_at_5 value: 33.269 - type: ndcg_at_1 value: 27.761000000000003 - type: ndcg_at_10 value: 35.146 - type: ndcg_at_100 value: 39.597 - type: ndcg_at_1000 value: 42.163000000000004 - type: ndcg_at_3 value: 31.674000000000003 - type: ndcg_at_5 value: 33.224 - type: precision_at_1 value: 27.761000000000003 - type: precision_at_10 value: 5.383 - type: precision_at_100 value: 0.836 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 13.599 - type: precision_at_5 value: 9.202 - type: recall_at_1 value: 24.564 - type: recall_at_10 value: 44.36 - type: recall_at_100 value: 64.408 - type: recall_at_1000 value: 83.892 - type: recall_at_3 value: 34.653 - type: recall_at_5 value: 38.589 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.01 - type: map_at_10 value: 24.485 - type: map_at_100 value: 25.573 - type: map_at_1000 value: 25.703 - type: map_at_3 value: 21.953 - type: map_at_5 value: 23.294999999999998 - type: mrr_at_1 value: 20.544 - type: mrr_at_10 value: 28.238000000000003 - type: mrr_at_100 value: 29.142000000000003 - type: mrr_at_1000 value: 29.219 - type: mrr_at_3 value: 25.802999999999997 - type: mrr_at_5 value: 27.105 - type: ndcg_at_1 value: 20.544 - type: ndcg_at_10 value: 29.387999999999998 - type: ndcg_at_100 value: 34.603 - type: ndcg_at_1000 value: 37.564 - type: ndcg_at_3 value: 24.731 - type: ndcg_at_5 value: 26.773000000000003 - type: precision_at_1 value: 20.544 - type: precision_at_10 value: 5.509 - type: precision_at_100 value: 0.9450000000000001 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 11.757 - type: precision_at_5 value: 8.596 - type: recall_at_1 value: 17.01 - type: recall_at_10 value: 40.392 - type: recall_at_100 value: 64.043 - type: recall_at_1000 value: 85.031 - type: recall_at_3 value: 27.293 - type: recall_at_5 value: 32.586999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.155 - type: map_at_10 value: 35.92 - type: map_at_100 value: 37.034 - type: map_at_1000 value: 37.139 - type: map_at_3 value: 33.263999999999996 - type: map_at_5 value: 34.61 - type: mrr_at_1 value: 32.183 - type: mrr_at_10 value: 40.099000000000004 - type: mrr_at_100 value: 41.001 - type: mrr_at_1000 value: 41.059 - type: mrr_at_3 value: 37.889 - type: mrr_at_5 value: 39.007999999999996 - type: ndcg_at_1 value: 32.183 - type: ndcg_at_10 value: 41.127 - type: ndcg_at_100 value: 46.464 - type: ndcg_at_1000 value: 48.67 - type: ndcg_at_3 value: 36.396 - type: ndcg_at_5 value: 38.313 - type: precision_at_1 value: 32.183 - type: precision_at_10 value: 6.847 - type: precision_at_100 
value: 1.0739999999999998 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 16.356 - type: precision_at_5 value: 11.362 - type: recall_at_1 value: 27.155 - type: recall_at_10 value: 52.922000000000004 - type: recall_at_100 value: 76.39 - type: recall_at_1000 value: 91.553 - type: recall_at_3 value: 39.745999999999995 - type: recall_at_5 value: 44.637 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.523 - type: map_at_10 value: 34.268 - type: map_at_100 value: 35.835 - type: map_at_1000 value: 36.046 - type: map_at_3 value: 31.662000000000003 - type: map_at_5 value: 32.71 - type: mrr_at_1 value: 31.028 - type: mrr_at_10 value: 38.924 - type: mrr_at_100 value: 39.95 - type: mrr_at_1000 value: 40.003 - type: mrr_at_3 value: 36.594 - type: mrr_at_5 value: 37.701 - type: ndcg_at_1 value: 31.028 - type: ndcg_at_10 value: 39.848 - type: ndcg_at_100 value: 45.721000000000004 - type: ndcg_at_1000 value: 48.424 - type: ndcg_at_3 value: 35.329 - type: ndcg_at_5 value: 36.779 - type: precision_at_1 value: 31.028 - type: precision_at_10 value: 7.51 - type: precision_at_100 value: 1.478 - type: precision_at_1000 value: 0.24 - type: precision_at_3 value: 16.337 - type: precision_at_5 value: 11.383000000000001 - type: recall_at_1 value: 25.523 - type: recall_at_10 value: 50.735 - type: recall_at_100 value: 76.593 - type: recall_at_1000 value: 93.771 - type: recall_at_3 value: 37.574000000000005 - type: recall_at_5 value: 41.602 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 20.746000000000002 - type: map_at_10 value: 28.557 - type: map_at_100 value: 29.575000000000003 - type: map_at_1000 value: 29.659000000000002 - type: map_at_3 value: 25.753999999999998 - type: map_at_5 value: 27.254 - type: mrr_at_1 value: 22.736 - type: mrr_at_10 value: 30.769000000000002 - type: mrr_at_100 value: 31.655 - type: mrr_at_1000 value: 31.717000000000002 - type: mrr_at_3 value: 28.065 - type: mrr_at_5 value: 29.543999999999997 - type: ndcg_at_1 value: 22.736 - type: ndcg_at_10 value: 33.545 - type: ndcg_at_100 value: 38.743 - type: ndcg_at_1000 value: 41.002 - type: ndcg_at_3 value: 28.021 - type: ndcg_at_5 value: 30.586999999999996 - type: precision_at_1 value: 22.736 - type: precision_at_10 value: 5.416 - type: precision_at_100 value: 0.8710000000000001 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 11.953 - type: precision_at_5 value: 8.651 - type: recall_at_1 value: 20.746000000000002 - type: recall_at_10 value: 46.87 - type: recall_at_100 value: 71.25200000000001 - type: recall_at_1000 value: 88.26 - type: recall_at_3 value: 32.029999999999994 - type: recall_at_5 value: 38.21 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 12.105 - type: map_at_10 value: 20.577 - type: map_at_100 value: 22.686999999999998 - type: map_at_1000 value: 22.889 - type: map_at_3 value: 17.174 - type: map_at_5 value: 18.807 - type: mrr_at_1 value: 27.101 - type: mrr_at_10 value: 38.475 - type: mrr_at_100 value: 39.491 - type: mrr_at_1000 value: 39.525 - type: mrr_at_3 value: 34.886 - type: mrr_at_5 value: 36.922 - type: ndcg_at_1 value: 27.101 - type: ndcg_at_10 value: 29.002 - type: ndcg_at_100 value: 37.218 - type: ndcg_at_1000 value: 
40.644000000000005 - type: ndcg_at_3 value: 23.464 - type: ndcg_at_5 value: 25.262 - type: precision_at_1 value: 27.101 - type: precision_at_10 value: 9.179 - type: precision_at_100 value: 1.806 - type: precision_at_1000 value: 0.244 - type: precision_at_3 value: 17.394000000000002 - type: precision_at_5 value: 13.342 - type: recall_at_1 value: 12.105 - type: recall_at_10 value: 35.143 - type: recall_at_100 value: 63.44499999999999 - type: recall_at_1000 value: 82.49499999999999 - type: recall_at_3 value: 21.489 - type: recall_at_5 value: 26.82 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 8.769 - type: map_at_10 value: 18.619 - type: map_at_100 value: 26.3 - type: map_at_1000 value: 28.063 - type: map_at_3 value: 13.746 - type: map_at_5 value: 16.035 - type: mrr_at_1 value: 65.25 - type: mrr_at_10 value: 73.678 - type: mrr_at_100 value: 73.993 - type: mrr_at_1000 value: 74.003 - type: mrr_at_3 value: 72.042 - type: mrr_at_5 value: 72.992 - type: ndcg_at_1 value: 53.625 - type: ndcg_at_10 value: 39.638 - type: ndcg_at_100 value: 44.601 - type: ndcg_at_1000 value: 52.80200000000001 - type: ndcg_at_3 value: 44.727 - type: ndcg_at_5 value: 42.199 - type: precision_at_1 value: 65.25 - type: precision_at_10 value: 31.025000000000002 - type: precision_at_100 value: 10.174999999999999 - type: precision_at_1000 value: 2.0740000000000003 - type: precision_at_3 value: 48.083 - type: precision_at_5 value: 40.6 - type: recall_at_1 value: 8.769 - type: recall_at_10 value: 23.910999999999998 - type: recall_at_100 value: 51.202999999999996 - type: recall_at_1000 value: 77.031 - type: recall_at_3 value: 15.387999999999998 - type: recall_at_5 value: 18.919 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 54.47 - type: f1 value: 48.21839043361556 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 63.564 - type: map_at_10 value: 74.236 - type: map_at_100 value: 74.53699999999999 - type: map_at_1000 value: 74.557 - type: map_at_3 value: 72.556 - type: map_at_5 value: 73.656 - type: mrr_at_1 value: 68.497 - type: mrr_at_10 value: 78.373 - type: mrr_at_100 value: 78.54299999999999 - type: mrr_at_1000 value: 78.549 - type: mrr_at_3 value: 77.03 - type: mrr_at_5 value: 77.938 - type: ndcg_at_1 value: 68.497 - type: ndcg_at_10 value: 79.12599999999999 - type: ndcg_at_100 value: 80.319 - type: ndcg_at_1000 value: 80.71199999999999 - type: ndcg_at_3 value: 76.209 - type: ndcg_at_5 value: 77.90700000000001 - type: precision_at_1 value: 68.497 - type: precision_at_10 value: 9.958 - type: precision_at_100 value: 1.077 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 29.908 - type: precision_at_5 value: 18.971 - type: recall_at_1 value: 63.564 - type: recall_at_10 value: 90.05199999999999 - type: recall_at_100 value: 95.028 - type: recall_at_1000 value: 97.667 - type: recall_at_3 value: 82.17999999999999 - type: recall_at_5 value: 86.388 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 19.042 - type: map_at_10 value: 30.764999999999997 - type: map_at_100 value: 32.678000000000004 - type: map_at_1000 value: 32.881 - type: map_at_3 value: 26.525 - type: map_at_5 
value: 28.932000000000002 - type: mrr_at_1 value: 37.653999999999996 - type: mrr_at_10 value: 46.597 - type: mrr_at_100 value: 47.413 - type: mrr_at_1000 value: 47.453 - type: mrr_at_3 value: 43.775999999999996 - type: mrr_at_5 value: 45.489000000000004 - type: ndcg_at_1 value: 37.653999999999996 - type: ndcg_at_10 value: 38.615 - type: ndcg_at_100 value: 45.513999999999996 - type: ndcg_at_1000 value: 48.815999999999995 - type: ndcg_at_3 value: 34.427 - type: ndcg_at_5 value: 35.954 - type: precision_at_1 value: 37.653999999999996 - type: precision_at_10 value: 10.864 - type: precision_at_100 value: 1.7850000000000001 - type: precision_at_1000 value: 0.23800000000000002 - type: precision_at_3 value: 22.788 - type: precision_at_5 value: 17.346 - type: recall_at_1 value: 19.042 - type: recall_at_10 value: 45.707 - type: recall_at_100 value: 71.152 - type: recall_at_1000 value: 90.7 - type: recall_at_3 value: 30.814000000000004 - type: recall_at_5 value: 37.478 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 38.001000000000005 - type: map_at_10 value: 59.611000000000004 - type: map_at_100 value: 60.582 - type: map_at_1000 value: 60.646 - type: map_at_3 value: 56.031 - type: map_at_5 value: 58.243 - type: mrr_at_1 value: 76.003 - type: mrr_at_10 value: 82.15400000000001 - type: mrr_at_100 value: 82.377 - type: mrr_at_1000 value: 82.383 - type: mrr_at_3 value: 81.092 - type: mrr_at_5 value: 81.742 - type: ndcg_at_1 value: 76.003 - type: ndcg_at_10 value: 68.216 - type: ndcg_at_100 value: 71.601 - type: ndcg_at_1000 value: 72.821 - type: ndcg_at_3 value: 63.109 - type: ndcg_at_5 value: 65.902 - type: precision_at_1 value: 76.003 - type: precision_at_10 value: 14.379 - type: precision_at_100 value: 1.702 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 40.396 - type: precision_at_5 value: 26.442 - type: recall_at_1 value: 38.001000000000005 - type: recall_at_10 value: 71.897 - type: recall_at_100 value: 85.105 - type: recall_at_1000 value: 93.133 - type: recall_at_3 value: 60.594 - type: recall_at_5 value: 66.104 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 91.31280000000001 - type: ap value: 87.53723467501632 - type: f1 value: 91.30282906596291 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 21.917 - type: map_at_10 value: 34.117999999999995 - type: map_at_100 value: 35.283 - type: map_at_1000 value: 35.333999999999996 - type: map_at_3 value: 30.330000000000002 - type: map_at_5 value: 32.461 - type: mrr_at_1 value: 22.579 - type: mrr_at_10 value: 34.794000000000004 - type: mrr_at_100 value: 35.893 - type: mrr_at_1000 value: 35.937000000000005 - type: mrr_at_3 value: 31.091 - type: mrr_at_5 value: 33.173 - type: ndcg_at_1 value: 22.579 - type: ndcg_at_10 value: 40.951 - type: ndcg_at_100 value: 46.558 - type: ndcg_at_1000 value: 47.803000000000004 - type: ndcg_at_3 value: 33.262 - type: ndcg_at_5 value: 37.036 - type: precision_at_1 value: 22.579 - type: precision_at_10 value: 6.463000000000001 - type: precision_at_100 value: 0.928 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.174000000000001 - type: precision_at_5 value: 10.421 - type: recall_at_1 value: 21.917 - type: recall_at_10 value: 61.885 - type: recall_at_100 
value: 87.847 - type: recall_at_1000 value: 97.322 - type: recall_at_3 value: 41.010000000000005 - type: recall_at_5 value: 50.031000000000006 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.49521203830369 - type: f1 value: 93.30882341740241 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 71.0579115367077 - type: f1 value: 51.2368258319339 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.88029589778077 - type: f1 value: 72.34422048584663 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.2817753866846 - type: f1 value: 77.87746050004304 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.247341454119216 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.9647477166234 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.90698374676892 - type: mrr value: 33.07523683771251 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.717 - type: map_at_10 value: 14.566 - type: map_at_100 value: 18.465999999999998 - type: map_at_1000 value: 20.033 - type: map_at_3 value: 10.863 - type: map_at_5 value: 12.589 - type: mrr_at_1 value: 49.845 - type: mrr_at_10 value: 58.385 - type: mrr_at_100 value: 58.989999999999995 - type: mrr_at_1000 value: 59.028999999999996 - type: mrr_at_3 value: 56.76 - type: mrr_at_5 value: 57.766 - type: ndcg_at_1 value: 47.678 - type: ndcg_at_10 value: 37.511 - type: ndcg_at_100 value: 34.537 - type: ndcg_at_1000 value: 43.612 - type: ndcg_at_3 value: 43.713 - type: ndcg_at_5 value: 41.303 - type: precision_at_1 value: 49.845 - type: precision_at_10 value: 27.307 - type: precision_at_100 value: 8.746 - type: precision_at_1000 value: 2.182 - type: precision_at_3 value: 40.764 - type: precision_at_5 value: 35.232 - type: recall_at_1 value: 6.717 - type: recall_at_10 value: 18.107 - type: recall_at_100 value: 33.759 - type: recall_at_1000 value: 67.31 - type: recall_at_3 value: 11.68 - type: recall_at_5 value: 14.557999999999998 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 27.633999999999997 - type: map_at_10 value: 42.400999999999996 - type: map_at_100 value: 43.561 - type: map_at_1000 value: 43.592 - type: map_at_3 value: 37.865 - type: map_at_5 value: 40.650999999999996 - type: mrr_at_1 value: 31.286 - type: mrr_at_10 value: 44.996 - type: mrr_at_100 value: 45.889 - type: 
mrr_at_1000 value: 45.911 - type: mrr_at_3 value: 41.126000000000005 - type: mrr_at_5 value: 43.536 - type: ndcg_at_1 value: 31.257 - type: ndcg_at_10 value: 50.197 - type: ndcg_at_100 value: 55.062 - type: ndcg_at_1000 value: 55.81700000000001 - type: ndcg_at_3 value: 41.650999999999996 - type: ndcg_at_5 value: 46.324 - type: precision_at_1 value: 31.257 - type: precision_at_10 value: 8.508000000000001 - type: precision_at_100 value: 1.121 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 19.1 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 27.633999999999997 - type: recall_at_10 value: 71.40100000000001 - type: recall_at_100 value: 92.463 - type: recall_at_1000 value: 98.13199999999999 - type: recall_at_3 value: 49.382 - type: recall_at_5 value: 60.144 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 71.17099999999999 - type: map_at_10 value: 85.036 - type: map_at_100 value: 85.67099999999999 - type: map_at_1000 value: 85.68599999999999 - type: map_at_3 value: 82.086 - type: map_at_5 value: 83.956 - type: mrr_at_1 value: 82.04 - type: mrr_at_10 value: 88.018 - type: mrr_at_100 value: 88.114 - type: mrr_at_1000 value: 88.115 - type: mrr_at_3 value: 87.047 - type: mrr_at_5 value: 87.73100000000001 - type: ndcg_at_1 value: 82.03 - type: ndcg_at_10 value: 88.717 - type: ndcg_at_100 value: 89.904 - type: ndcg_at_1000 value: 89.991 - type: ndcg_at_3 value: 85.89099999999999 - type: ndcg_at_5 value: 87.485 - type: precision_at_1 value: 82.03 - type: precision_at_10 value: 13.444999999999999 - type: precision_at_100 value: 1.533 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.537 - type: precision_at_5 value: 24.692 - type: recall_at_1 value: 71.17099999999999 - type: recall_at_10 value: 95.634 - type: recall_at_100 value: 99.614 - type: recall_at_1000 value: 99.99 - type: recall_at_3 value: 87.48 - type: recall_at_5 value: 91.996 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 55.067219624685315 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 62.121822992300444 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.153 - type: map_at_10 value: 11.024000000000001 - type: map_at_100 value: 13.233 - type: map_at_1000 value: 13.62 - type: map_at_3 value: 7.779999999999999 - type: map_at_5 value: 9.529 - type: mrr_at_1 value: 20.599999999999998 - type: mrr_at_10 value: 31.361 - type: mrr_at_100 value: 32.738 - type: mrr_at_1000 value: 32.792 - type: mrr_at_3 value: 28.15 - type: mrr_at_5 value: 30.085 - type: ndcg_at_1 value: 20.599999999999998 - type: ndcg_at_10 value: 18.583 - type: ndcg_at_100 value: 27.590999999999998 - type: ndcg_at_1000 value: 34.001 - type: ndcg_at_3 value: 17.455000000000002 - type: ndcg_at_5 value: 15.588 - type: precision_at_1 value: 20.599999999999998 - type: precision_at_10 value: 9.74 - type: precision_at_100 value: 2.284 - type: precision_at_1000 value: 0.381 - type: precision_at_3 value: 16.533 - type: precision_at_5 value: 14.02 - type: recall_at_1 value: 4.153 - type: recall_at_10 value: 19.738 - 
type: recall_at_100 value: 46.322 - type: recall_at_1000 value: 77.378 - type: recall_at_3 value: 10.048 - type: recall_at_5 value: 14.233 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 85.07097501003639 - type: cos_sim_spearman value: 81.05827848407056 - type: euclidean_pearson value: 82.6279003372546 - type: euclidean_spearman value: 81.00031515279802 - type: manhattan_pearson value: 82.59338284959495 - type: manhattan_spearman value: 80.97432711064945 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.28991993621685 - type: cos_sim_spearman value: 78.71828082424351 - type: euclidean_pearson value: 83.4881331520832 - type: euclidean_spearman value: 78.51746826842316 - type: manhattan_pearson value: 83.4109223774324 - type: manhattan_spearman value: 78.431544382179 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 83.16651661072123 - type: cos_sim_spearman value: 84.88094386637867 - type: euclidean_pearson value: 84.3547603585416 - type: euclidean_spearman value: 84.85148665860193 - type: manhattan_pearson value: 84.29648369879266 - type: manhattan_spearman value: 84.76074870571124 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 83.40596254292149 - type: cos_sim_spearman value: 83.10699573133829 - type: euclidean_pearson value: 83.22794776876958 - type: euclidean_spearman value: 83.22583316084712 - type: manhattan_pearson value: 83.15899233935681 - type: manhattan_spearman value: 83.17668293648019 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.27977121352563 - type: cos_sim_spearman value: 88.73903130248591 - type: euclidean_pearson value: 88.30685958438735 - type: euclidean_spearman value: 88.79755484280406 - type: manhattan_pearson value: 88.30305607758652 - type: manhattan_spearman value: 88.80096577072784 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.08819031430218 - type: cos_sim_spearman value: 86.35414445951125 - type: euclidean_pearson value: 85.4683192388315 - type: euclidean_spearman value: 86.2079674669473 - type: manhattan_pearson value: 85.35835702257341 - type: manhattan_spearman value: 86.08483380002187 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.36149449801478 - type: cos_sim_spearman value: 87.7102980757725 - type: euclidean_pearson value: 88.16457177837161 - type: euclidean_spearman value: 87.6598652482716 - type: manhattan_pearson value: 88.23894728971618 - type: manhattan_spearman value: 87.74470156709361 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 
64.54023758394433 - type: cos_sim_spearman value: 66.28491960187773 - type: euclidean_pearson value: 67.0853128483472 - type: euclidean_spearman value: 66.10307543766307 - type: manhattan_pearson value: 66.7635365592556 - type: manhattan_spearman value: 65.76408004780167 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.15858398195317 - type: cos_sim_spearman value: 87.44850004752102 - type: euclidean_pearson value: 86.60737082550408 - type: euclidean_spearman value: 87.31591549824242 - type: manhattan_pearson value: 86.56187011429977 - type: manhattan_spearman value: 87.23854795795319 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 86.66210488769109 - type: mrr value: 96.23100664767331 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 56.094 - type: map_at_10 value: 67.486 - type: map_at_100 value: 67.925 - type: map_at_1000 value: 67.949 - type: map_at_3 value: 64.857 - type: map_at_5 value: 66.31 - type: mrr_at_1 value: 58.667 - type: mrr_at_10 value: 68.438 - type: mrr_at_100 value: 68.733 - type: mrr_at_1000 value: 68.757 - type: mrr_at_3 value: 66.389 - type: mrr_at_5 value: 67.456 - type: ndcg_at_1 value: 58.667 - type: ndcg_at_10 value: 72.506 - type: ndcg_at_100 value: 74.27 - type: ndcg_at_1000 value: 74.94800000000001 - type: ndcg_at_3 value: 67.977 - type: ndcg_at_5 value: 70.028 - type: precision_at_1 value: 58.667 - type: precision_at_10 value: 9.767000000000001 - type: precision_at_100 value: 1.073 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 27.0 - type: precision_at_5 value: 17.666999999999998 - type: recall_at_1 value: 56.094 - type: recall_at_10 value: 86.68900000000001 - type: recall_at_100 value: 94.333 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 74.522 - type: recall_at_5 value: 79.611 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.83069306930693 - type: cos_sim_ap value: 95.69184662911199 - type: cos_sim_f1 value: 91.4027149321267 - type: cos_sim_precision value: 91.91102123356926 - type: cos_sim_recall value: 90.9 - type: dot_accuracy value: 99.69405940594059 - type: dot_ap value: 90.21674151456216 - type: dot_f1 value: 84.4489179667841 - type: dot_precision value: 85.00506585612969 - type: dot_recall value: 83.89999999999999 - type: euclidean_accuracy value: 99.83069306930693 - type: euclidean_ap value: 95.67760109671087 - type: euclidean_f1 value: 91.19754350051177 - type: euclidean_precision value: 93.39622641509435 - type: euclidean_recall value: 89.1 - type: manhattan_accuracy value: 99.83267326732673 - type: manhattan_ap value: 95.69771347732625 - type: manhattan_f1 value: 91.32420091324201 - type: manhattan_precision value: 92.68795056642637 - type: manhattan_recall value: 90.0 - type: max_accuracy value: 99.83267326732673 - type: max_ap value: 95.69771347732625 - type: max_f1 value: 91.4027149321267 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default 
split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 64.47378332953092 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.79602531604151 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 53.80707639107175 - type: mrr value: 54.64886522790935 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.852448373051395 - type: cos_sim_spearman value: 32.51821499493775 - type: dot_pearson value: 30.390650062190456 - type: dot_spearman value: 30.588836159667636 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.198 - type: map_at_10 value: 1.51 - type: map_at_100 value: 8.882 - type: map_at_1000 value: 22.181 - type: map_at_3 value: 0.553 - type: map_at_5 value: 0.843 - type: mrr_at_1 value: 74.0 - type: mrr_at_10 value: 84.89999999999999 - type: mrr_at_100 value: 84.89999999999999 - type: mrr_at_1000 value: 84.89999999999999 - type: mrr_at_3 value: 84.0 - type: mrr_at_5 value: 84.89999999999999 - type: ndcg_at_1 value: 68.0 - type: ndcg_at_10 value: 64.792 - type: ndcg_at_100 value: 51.37199999999999 - type: ndcg_at_1000 value: 47.392 - type: ndcg_at_3 value: 68.46900000000001 - type: ndcg_at_5 value: 67.084 - type: precision_at_1 value: 74.0 - type: precision_at_10 value: 69.39999999999999 - type: precision_at_100 value: 53.080000000000005 - type: precision_at_1000 value: 21.258 - type: precision_at_3 value: 76.0 - type: precision_at_5 value: 73.2 - type: recall_at_1 value: 0.198 - type: recall_at_10 value: 1.7950000000000002 - type: recall_at_100 value: 12.626999999999999 - type: recall_at_1000 value: 44.84 - type: recall_at_3 value: 0.611 - type: recall_at_5 value: 0.959 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.4949999999999999 - type: map_at_10 value: 8.797 - type: map_at_100 value: 14.889 - type: map_at_1000 value: 16.309 - type: map_at_3 value: 4.389 - type: map_at_5 value: 6.776 - type: mrr_at_1 value: 18.367 - type: mrr_at_10 value: 35.844 - type: mrr_at_100 value: 37.119 - type: mrr_at_1000 value: 37.119 - type: mrr_at_3 value: 30.612000000000002 - type: mrr_at_5 value: 33.163 - type: ndcg_at_1 value: 16.326999999999998 - type: ndcg_at_10 value: 21.9 - type: ndcg_at_100 value: 34.705000000000005 - type: ndcg_at_1000 value: 45.709 - type: ndcg_at_3 value: 22.7 - type: ndcg_at_5 value: 23.197000000000003 - type: precision_at_1 value: 18.367 - type: precision_at_10 value: 21.02 - type: precision_at_100 value: 7.714 - type: precision_at_1000 value: 1.504 - type: precision_at_3 value: 26.531 - type: precision_at_5 value: 26.122 - type: recall_at_1 value: 1.4949999999999999 - type: recall_at_10 value: 15.504000000000001 - type: recall_at_100 value: 47.978 - type: recall_at_1000 value: 81.56 - type: recall_at_3 value: 5.569 - type: recall_at_5 value: 9.821 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB 
ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 72.99279999999999 - type: ap value: 15.459189680101492 - type: f1 value: 56.33023271441895 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 63.070175438596486 - type: f1 value: 63.28070758709465 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 50.076231309703054 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.21463908922931 - type: cos_sim_ap value: 77.67287017966282 - type: cos_sim_f1 value: 70.34412955465588 - type: cos_sim_precision value: 67.57413709285368 - type: cos_sim_recall value: 73.35092348284961 - type: dot_accuracy value: 85.04500208618943 - type: dot_ap value: 70.4075203869744 - type: dot_f1 value: 66.18172537008678 - type: dot_precision value: 64.08798813643104 - type: dot_recall value: 68.41688654353561 - type: euclidean_accuracy value: 87.17887584192646 - type: euclidean_ap value: 77.5774128274464 - type: euclidean_f1 value: 70.09307972480777 - type: euclidean_precision value: 71.70852884349986 - type: euclidean_recall value: 68.54881266490766 - type: manhattan_accuracy value: 87.28020504261787 - type: manhattan_ap value: 77.57835820297892 - type: manhattan_f1 value: 70.23063591521131 - type: manhattan_precision value: 70.97817299919159 - type: manhattan_recall value: 69.49868073878628 - type: max_accuracy value: 87.28020504261787 - type: max_ap value: 77.67287017966282 - type: max_f1 value: 70.34412955465588 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.96650754841464 - type: cos_sim_ap value: 86.00185968965064 - type: cos_sim_f1 value: 77.95861256351718 - type: cos_sim_precision value: 74.70712773465067 - type: cos_sim_recall value: 81.50600554357868 - type: dot_accuracy value: 87.36950362867233 - type: dot_ap value: 82.22071181147555 - type: dot_f1 value: 74.85680716698488 - type: dot_precision value: 71.54688377316114 - type: dot_recall value: 78.48783492454572 - type: euclidean_accuracy value: 88.99561454573679 - type: euclidean_ap value: 86.15882097229648 - type: euclidean_f1 value: 78.18463125322332 - type: euclidean_precision value: 74.95408956067241 - type: euclidean_recall value: 81.70619032953496 - type: manhattan_accuracy value: 88.96650754841464 - type: manhattan_ap value: 86.13133111232099 - type: manhattan_f1 value: 78.10771470160115 - type: manhattan_precision value: 74.05465084184377 - type: manhattan_recall value: 82.63012011087157 - type: max_accuracy value: 88.99561454573679 - type: max_ap value: 86.15882097229648 - type: max_f1 value: 78.18463125322332 language: - en license: mit --- **新闻 | News** **[2024-04-06]** 开源[puff](https://huggingface.co/infgrad/puff-base-v1)系列模型,**专门针对检索和语义匹配任务,更多的考虑泛化性和私有通用测试集效果,向量维度可变,中英双语**。 **[2024-02-27]** 
开源stella-mrl-large-zh-v3.5-1792d模型,支持**向量可变维度**。 **[2024-02-17]** 开源stella v3系列、dialogue编码模型和相关训练数据。 **[2023-10-19]** 开源stella-base-en-v2 使用简单,**不需要任何前缀文本**。 **[2023-10-12]** 开源stella-base-zh-v2和stella-large-zh-v2, 效果更好且使用简单,**不需要任何前缀文本**。 **[2023-09-11]** 开源stella-base-zh和stella-large-zh 欢迎去[本人主页](https://huggingface.co/infgrad)查看最新模型,并提出您的宝贵意见! ## stella model stella是一个通用的文本编码模型,主要有以下模型: | Model Name | Model Size (GB) | Dimension | Sequence Length | Language | Need instruction for retrieval? | |:------------------:|:---------------:|:---------:|:---------------:|:--------:|:-------------------------------:| | stella-base-en-v2 | 0.2 | 768 | 512 | English | No | | stella-large-zh-v2 | 0.65 | 1024 | 1024 | Chinese | No | | stella-base-zh-v2 | 0.2 | 768 | 1024 | Chinese | No | | stella-large-zh | 0.65 | 1024 | 1024 | Chinese | Yes | | stella-base-zh | 0.2 | 768 | 1024 | Chinese | Yes | 完整的训练思路和训练过程已记录在[博客1](https://zhuanlan.zhihu.com/p/655322183)和[博客2](https://zhuanlan.zhihu.com/p/662209559),欢迎阅读讨论。 **训练数据:** 1. 开源数据(wudao_base_200GB[1]、m3e[2]和simclue[3]),着重挑选了长度大于512的文本 2. 在通用语料库上使用LLM构造一批(question, paragraph)和(sentence, paragraph)数据 **训练方法:** 1. 对比学习损失函数 2. 带有难负例的对比学习损失函数(分别基于bm25和vector构造了难负例) 3. EWC(Elastic Weights Consolidation)[4] 4. cosent loss[5] 5. 每一种类型的数据一个迭代器,分别计算loss进行更新 stella-v2在stella模型的基础上,使用了更多的训练数据,同时知识蒸馏等方法去除了前置的instruction( 比如piccolo的`查询:`, `结果:`, e5的`query:`和`passage:`)。 **初始权重:**\ stella-base-zh和stella-large-zh分别以piccolo-base-zh[6]和piccolo-large-zh作为基础模型,512-1024的position embedding使用层次分解位置编码[7]进行初始化。\ 感谢商汤科技研究院开源的[piccolo系列模型](https://huggingface.co/sensenova)。 stella is a general-purpose text encoder, which mainly includes the following models: | Model Name | Model Size (GB) | Dimension | Sequence Length | Language | Need instruction for retrieval? | |:------------------:|:---------------:|:---------:|:---------------:|:--------:|:-------------------------------:| | stella-base-en-v2 | 0.2 | 768 | 512 | English | No | | stella-large-zh-v2 | 0.65 | 1024 | 1024 | Chinese | No | | stella-base-zh-v2 | 0.2 | 768 | 1024 | Chinese | No | | stella-large-zh | 0.65 | 1024 | 1024 | Chinese | Yes | | stella-base-zh | 0.2 | 768 | 1024 | Chinese | Yes | The training data mainly includes: 1. Open-source training data (wudao_base_200GB, m3e, and simclue), with a focus on selecting texts with lengths greater than 512. 2. A batch of (question, paragraph) and (sentence, paragraph) data constructed on a general corpus using LLM. The loss functions mainly include: 1. Contrastive learning loss function 2. Contrastive learning loss function with hard negative examples (based on bm25 and vector hard negatives) 3. EWC (Elastic Weights Consolidation) 4. cosent loss Model weight initialization:\ stella-base-zh and stella-large-zh use piccolo-base-zh and piccolo-large-zh as the base models, respectively, and the 512-1024 position embedding uses the initialization strategy of hierarchical decomposed position encoding. Training strategy:\ One iterator for each type of data, separately calculating the loss. Based on stella models, stella-v2 use more training data and remove instruction by Knowledge Distillation. 
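The loss functions listed above are easier to picture with a small concrete example. Below is a minimal PyTorch sketch of item 2 (contrastive learning with mined hard negatives, e.g. from BM25 or vector search); it is only an illustration of the idea, not the actual training code, and the temperature and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss_with_hard_negatives(q_vecs, p_vecs, hard_neg_vecs, temperature=0.05):
    """InfoNCE-style objective: each query should score its own passage higher than
    every other in-batch passage and every mined hard negative.

    q_vecs:        (B, D)   L2-normalized query embeddings
    p_vecs:        (B, D)   L2-normalized positive passage embeddings
    hard_neg_vecs: (B*K, D) L2-normalized hard-negative embeddings (BM25 / ANN mined)
    """
    pos_scores = q_vecs @ p_vecs.T            # (B, B): diagonal entries are the positives
    neg_scores = q_vecs @ hard_neg_vecs.T     # (B, B*K): extra hard negatives
    logits = torch.cat([pos_scores, neg_scores], dim=1) / temperature
    labels = torch.arange(q_vecs.size(0), device=q_vecs.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random, normalized vectors
B, D, K = 4, 768, 2
q = F.normalize(torch.randn(B, D), dim=-1)
p = F.normalize(torch.randn(B, D), dim=-1)
hn = F.normalize(torch.randn(B * K, D), dim=-1)
print(contrastive_loss_with_hard_negatives(q, p, hn))
```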
## Metric #### C-MTEB leaderboard (Chinese) | Model Name | Model Size (GB) | Dimension | Sequence Length | Average (35) | Classification (9) | Clustering (4) | Pair Classification (2) | Reranking (4) | Retrieval (8) | STS (8) | |:------------------:|:---------------:|:---------:|:---------------:|:------------:|:------------------:|:--------------:|:-----------------------:|:-------------:|:-------------:|:-------:| | stella-large-zh-v2 | 0.65 | 1024 | 1024 | 65.13 | 69.05 | 49.16 | 82.68 | 66.41 | 70.14 | 58.66 | | stella-base-zh-v2 | 0.2 | 768 | 1024 | 64.36 | 68.29 | 49.4 | 79.95 | 66.1 | 70.08 | 56.92 | | stella-large-zh | 0.65 | 1024 | 1024 | 64.54 | 67.62 | 48.65 | 78.72 | 65.98 | 71.02 | 58.3 | | stella-base-zh | 0.2 | 768 | 1024 | 64.16 | 67.77 | 48.7 | 76.09 | 66.95 | 71.07 | 56.54 | #### MTEB leaderboard (English) | Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Classification (12) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | |:-----------------:|:---------------:|:---------:|:---------------:|:------------:|:-------------------:|:---------------:|:-----------------------:|:-------------:|:--------------:|:--------:|:------------------:| | stella-base-en-v2 | 0.2 | 768 | 512 | 62.61 | 75.28 | 44.9 | 86.45 | 58.77 | 50.1 | 83.02 | 32.52 | #### Reproduce our results **C-MTEB:** ```python import torch import numpy as np from typing import List from mteb import MTEB from sentence_transformers import SentenceTransformer class FastTextEncoder(): def __init__(self, model_name): self.model = SentenceTransformer(model_name).cuda().half().eval() self.model.max_seq_length = 512 def encode( self, input_texts: List[str], *args, **kwargs ): new_sens = list(set(input_texts)) new_sens.sort(key=lambda x: len(x), reverse=True) vecs = self.model.encode( new_sens, normalize_embeddings=True, convert_to_numpy=True, batch_size=256 ).astype(np.float32) sen2arrid = {sen: idx for idx, sen in enumerate(new_sens)} vecs = vecs[[sen2arrid[sen] for sen in input_texts]] torch.cuda.empty_cache() return vecs if __name__ == '__main__': model_name = "infgrad/stella-base-zh-v2" output_folder = "zh_mteb_results/stella-base-zh-v2" task_names = [t.description["name"] for t in MTEB(task_langs=['zh', 'zh-CN']).tasks] model = FastTextEncoder(model_name) for task in task_names: MTEB(tasks=[task], task_langs=['zh', 'zh-CN']).run(model, output_folder=output_folder) ``` **MTEB:** You can use official script to reproduce our result. [scripts/run_mteb_english.py](https://github.com/embeddings-benchmark/mteb/blob/main/scripts/run_mteb_english.py) #### Evaluation for long text 经过实际观察发现,C-MTEB的评测数据长度基本都是小于512的, 更致命的是那些长度大于512的文本,其重点都在前半部分 这里以CMRC2018的数据为例说明这个问题: ``` question: 《无双大蛇z》是谁旗下ω-force开发的动作游戏? passage:《无双大蛇z》是光荣旗下ω-force开发的动作游戏,于2009年3月12日登陆索尼playstation3,并于2009年11月27日推...... 
``` passage长度为800多,大于512,但是对于这个question而言只需要前面40个字就足以检索,多的内容对于模型而言是一种噪声,反而降低了效果。\ 简言之,现有数据集的2个问题:\ 1)长度大于512的过少\ 2)即便大于512,对于检索而言也只需要前512的文本内容\ 导致**无法准确评估模型的长文本编码能力。** 为了解决这个问题,搜集了相关开源数据并使用规则进行过滤,最终整理了6份长文本测试集,他们分别是: - CMRC2018,通用百科 - CAIL,法律阅读理解 - DRCD,繁体百科,已转简体 - Military,军工问答 - Squad,英文阅读理解,已转中文 - Multifieldqa_zh,清华的大模型长文本理解能力评测数据[9] 处理规则是选取答案在512长度之后的文本,短的测试数据会欠采样一下,长短文本占比约为1:2,所以模型既得理解短文本也得理解长文本。 除了Military数据集,我们提供了其他5个测试数据的下载地址:https://drive.google.com/file/d/1WC6EWaCbVgz-vPMDFH4TwAMkLyh5WNcN/view?usp=sharing 评测指标为Recall@5, 结果如下: | Dataset | piccolo-base-zh | piccolo-large-zh | bge-base-zh | bge-large-zh | stella-base-zh | stella-large-zh | |:---------------:|:---------------:|:----------------:|:-----------:|:------------:|:--------------:|:---------------:| | CMRC2018 | 94.34 | 93.82 | 91.56 | 93.12 | 96.08 | 95.56 | | CAIL | 28.04 | 33.64 | 31.22 | 33.94 | 34.62 | 37.18 | | DRCD | 78.25 | 77.9 | 78.34 | 80.26 | 86.14 | 84.58 | | Military | 76.61 | 73.06 | 75.65 | 75.81 | 83.71 | 80.48 | | Squad | 91.21 | 86.61 | 87.87 | 90.38 | 93.31 | 91.21 | | Multifieldqa_zh | 81.41 | 83.92 | 83.92 | 83.42 | 79.9 | 80.4 | | **Average** | 74.98 | 74.83 | 74.76 | 76.15 | **78.96** | **78.24** | **注意:** 因为长文本评测数据数量稀少,所以构造时也使用了train部分,如果自行评测,请注意模型的训练数据以免数据泄露。 ## Usage #### stella 中文系列模型 stella-base-zh 和 stella-large-zh: 本模型是在piccolo基础上训练的,因此**用法和piccolo完全一致** ,即在检索重排任务上给query和passage加上`查询: `和`结果: `。对于短短匹配不需要做任何操作。 stella-base-zh-v2 和 stella-large-zh-v2: 本模型使用简单,**任何使用场景中都不需要加前缀文本**。 stella中文系列模型均使用mean pooling做为文本向量。 在sentence-transformer库中的使用方法: ```python from sentence_transformers import SentenceTransformer sentences = ["数据1", "数据2"] model = SentenceTransformer('infgrad/stella-base-zh-v2') print(model.max_seq_length) embeddings_1 = model.encode(sentences, normalize_embeddings=True) embeddings_2 = model.encode(sentences, normalize_embeddings=True) similarity = embeddings_1 @ embeddings_2.T print(similarity) ``` 直接使用transformers库: ```python from transformers import AutoModel, AutoTokenizer from sklearn.preprocessing import normalize model = AutoModel.from_pretrained('infgrad/stella-base-zh-v2') tokenizer = AutoTokenizer.from_pretrained('infgrad/stella-base-zh-v2') sentences = ["数据1", "数据ABCDEFGH"] batch_data = tokenizer( batch_text_or_text_pairs=sentences, padding="longest", return_tensors="pt", max_length=1024, truncation=True, ) attention_mask = batch_data["attention_mask"] model_output = model(**batch_data) last_hidden = model_output.last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0) vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] vectors = normalize(vectors, norm="l2", axis=1, ) print(vectors.shape) # 2,768 ``` #### stella models for English **Using Sentence-Transformers:** ```python from sentence_transformers import SentenceTransformer sentences = ["one car come", "one car go"] model = SentenceTransformer('infgrad/stella-base-en-v2') print(model.max_seq_length) embeddings_1 = model.encode(sentences, normalize_embeddings=True) embeddings_2 = model.encode(sentences, normalize_embeddings=True) similarity = embeddings_1 @ embeddings_2.T print(similarity) ``` **Using HuggingFace Transformers:** ```python from transformers import AutoModel, AutoTokenizer from sklearn.preprocessing import normalize model = AutoModel.from_pretrained('infgrad/stella-base-en-v2') tokenizer = AutoTokenizer.from_pretrained('infgrad/stella-base-en-v2') sentences = ["one car come", "one car go"] batch_data = tokenizer( batch_text_or_text_pairs=sentences, padding="longest", 
return_tensors="pt", max_length=512, truncation=True, ) attention_mask = batch_data["attention_mask"] model_output = model(**batch_data) last_hidden = model_output.last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0) vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] vectors = normalize(vectors, norm="l2", axis=1, ) print(vectors.shape) # 2,768 ``` ## Training Detail **硬件:** 单卡A100-80GB **环境:** torch1.13.*; transformers-trainer + deepspeed + gradient-checkpointing **学习率:** 1e-6 **batch_size:** base模型为1024,额外增加20%的难负例;large模型为768,额外增加20%的难负例 **数据量:** 第一版模型约100万,其中用LLM构造的数据约有200K. LLM模型大小为13b。v2系列模型到了2000万训练数据。 ## ToDoList **评测的稳定性:** 评测过程中发现Clustering任务会和官方的结果不一致,大约有±0.0x的小差距,原因是聚类代码没有设置random_seed,差距可以忽略不计,不影响评测结论。 **更高质量的长文本训练和测试数据:** 训练数据多是用13b模型构造的,肯定会存在噪声。 测试数据基本都是从mrc数据整理来的,所以问题都是factoid类型,不符合真实分布。 **OOD的性能:** 虽然近期出现了很多向量编码模型,但是对于不是那么通用的domain,这一众模型包括stella、openai和cohere, 它们的效果均比不上BM25。 ## Reference 1. https://www.scidb.cn/en/detail?dataSetId=c6a3fe684227415a9db8e21bac4a15ab 2. https://github.com/wangyuxinwhy/uniem 3. https://github.com/CLUEbenchmark/SimCLUE 4. https://arxiv.org/abs/1612.00796 5. https://kexue.fm/archives/8847 6. https://huggingface.co/sensenova/piccolo-base-zh 7. https://kexue.fm/archives/7947 8. https://github.com/FlagOpen/FlagEmbedding 9. https://github.com/THUDM/LongBench
austinmw/distilbert-base-uncased-finetuned-tweets-sentiment
austinmw
"2022-06-29T22:18:47Z"
9,571
2
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-06-29T21:23:06Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-tweets-sentiment results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: sentiment metrics: - name: Accuracy type: accuracy value: 0.7295 - name: F1 type: f1 value: 0.7303196028048928 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-tweets-sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.8192 - Accuracy: 0.7295 - F1: 0.7303 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7126 | 1.0 | 713 | 0.6578 | 0.7185 | 0.7181 | | 0.5514 | 2.0 | 1426 | 0.6249 | 0.7005 | 0.7046 | | 0.4406 | 3.0 | 2139 | 0.7053 | 0.731 | 0.7296 | | 0.3511 | 4.0 | 2852 | 0.7580 | 0.718 | 0.7180 | | 0.2809 | 5.0 | 3565 | 0.8192 | 0.7295 | 0.7303 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0 - Datasets 1.16.1 - Tokenizers 0.10.3
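For anyone who wants to approximate this run, a hedged sketch with the 🤗 `Trainer` and the hyperparameters listed above is shown below; the dataset and base model come from the card, while the preprocessing details are assumptions rather than the original training script.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

dataset = load_dataset("tweet_eval", "sentiment")                    # 3 sentiment labels
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

dataset = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-tweets-sentiment",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=10,
    seed=42,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,   # enables dynamic padding via the default data collator
)
trainer.train()
```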
mradermacher/TransLLaMA2-7B-GGUF
mradermacher
"2024-06-26T20:27:27Z"
9,569
0
transformers
[ "transformers", "gguf", "en", "base_model:TransLLaMA/TransLLaMA2-7B", "license:mit", "endpoints_compatible", "region:us" ]
null
"2024-06-25T01:40:33Z"
--- base_model: TransLLaMA/TransLLaMA2-7B language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/TransLLaMA/TransLLaMA2-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.IQ3_XS.gguf) | IQ3_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.Q3_K_S.gguf) | Q3_K_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.IQ3_M.gguf) | IQ3_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.IQ4_XS.gguf) | IQ4_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.Q5_K_M.gguf) | Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.Q6_K.gguf) | Q6_K | 5.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/TransLLaMA2-7B-GGUF/resolve/main/TransLLaMA2-7B.f16.gguf) | f16 | 13.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
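Since the card points to TheBloke's READMEs for usage, one common way to fetch and run a single-file quant locally is sketched below with `huggingface_hub` and `llama-cpp-python`. The file name matches the Q4_K_M row in the table above; the context size and generation settings are illustrative and not part of the original card.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the "fast, recommended" Q4_K_M quant from the table above.
model_path = hf_hub_download(
    repo_id="mradermacher/TransLLaMA2-7B-GGUF",
    filename="TransLLaMA2-7B.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Translate to English: Bonjour tout le monde.", max_tokens=64)
print(out["choices"][0]["text"])
```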
mistral-community/Mixtral-8x22B-v0.1
mistral-community
"2024-07-01T08:55:22Z"
9,567
672
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "fr", "it", "de", "es", "en", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-10T08:43:36Z"
--- language: - fr - it - de - es - en license: apache-2.0 tags: - moe model-index: - name: Mixtral-8x22B-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.48 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistral-community/Mixtral-8x22B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.73 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistral-community/Mixtral-8x22B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 77.81 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistral-community/Mixtral-8x22B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 51.08 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistral-community/Mixtral-8x22B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistral-community/Mixtral-8x22B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 74.15 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistral-community/Mixtral-8x22B-v0.1 name: Open LLM Leaderboard --- # Mixtral-8x22B > [!WARNING] > This model checkpoint is provided as-is and might not be up-to-date. Please use the corresponding version from https://huggingface.co/mistralai org > [!TIP] > MistralAI has uploaded weights to their organization at [mistralai/Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) and [mistralai/Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1) too. > [!TIP] > Kudos to [@v2ray](https://huggingface.co/v2ray) for converting the checkpoints and uploading them in `transformers` compatible format. Go give them a follow! Converted to HuggingFace Transformers format using the script [here](https://huggingface.co/v2ray/Mixtral-8x22B-v0.1/blob/main/convert.py). The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. 
## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistral-community/Mixtral-8x22B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) text = "Hello my name is" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` By default, transformers will load the model in full precision. Therefore you might be interested to further reduce down the memory requirements to run the model through the optimizations we offer in HF ecosystem: ### In half-precision Note `float16` precision only works on GPU devices <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistral-community/Mixtral-8x22B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Lower precision using (8-bit & 4-bit) using `bitsandbytes` <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistral-community/Mixtral-8x22B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Load the model with Flash Attention 2 <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistral-community/Mixtral-8x22B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ## Notice Mixtral-8x22B-v0.1 is a pretrained base model and therefore does not have any moderation mechanisms. # The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall. 
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mistral-community__Mixtral-8x22B-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |74.46| |AI2 Reasoning Challenge (25-Shot)|70.48| |HellaSwag (10-Shot) |88.73| |MMLU (5-Shot) |77.81| |TruthfulQA (0-shot) |51.08| |Winogrande (5-shot) |84.53| |GSM8k (5-shot) |74.15|