Dataset schema (column, type, observed range / distinct values):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5–122 |
| author | string | length 2–42 |
| last_modified | unknown | — |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string | 245 classes |
| tags | sequence | length 1–4.05k |
| pipeline_tag | string | 48 classes |
| createdAt | unknown | — |
| card | string | length 1–901k |
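For orientation, a hypothetical sketch of working with a dataset that follows this schema via the `datasets` library; the repo id `user/model-cards` is a placeholder, not the actual dataset name.

```python
# A hypothetical sketch: load a dataset with the schema above and filter rows.
# The repo id is a placeholder; substitute the real dataset name.
from datasets import load_dataset

ds = load_dataset("user/model-cards", split="train")  # columns: modelId, author, downloads, likes, tags, pipeline_tag, card, ...
popular = ds.filter(lambda row: row["downloads"] > 1_000_000)  # keep heavily downloaded models
print(popular[0]["modelId"], popular[0]["pipeline_tag"])
```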
**modelId:** OFA-Sys/chinese-clip-vit-large-patch14
**author:** OFA-Sys
**last_modified:** 2022-12-09T06:10:40Z
**downloads:** 6,323
**likes:** 22
**library_name:** transformers
**tags:** [ "transformers", "pytorch", "chinese_clip", "zero-shot-image-classification", "vision", "arxiv:2211.01335", "endpoints_compatible", "region:us" ]
**pipeline_tag:** zero-shot-image-classification
**createdAt:** 2022-11-09T09:22:14Z
---
tags:
- vision
widget:
- src: https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16/resolve/main/festival.jpg
  candidate_labels: 灯笼, 鞭炮, 对联
  example_title: festival
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
  candidate_labels: 音乐表演, 体育运动
  example_title: cat & dog
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
  candidate_labels: 梅西, C罗, 马奎尔
  example_title: football
---

# Chinese-CLIP-ViT-Large-Patch14

## Introduction

This is the large version of Chinese CLIP, with ViT-L/14 as the image encoder and RoBERTa-wwm-base as the text encoder. Chinese CLIP is a simple implementation of CLIP on a large-scale dataset of around 200 million Chinese image-text pairs. For more details, please refer to our technical report https://arxiv.org/abs/2211.01335 and our official GitHub repo https://github.com/OFA-Sys/Chinese-CLIP (welcome to star! 🔥🔥).

## Use with the official API

We provide a simple code snippet showing how to use the Chinese-CLIP API to compute image and text embeddings and their similarities.

```python
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel

model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-large-patch14")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-large-patch14")

url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Squirtle, Bulbasaur, Charmander, Pikachu in English
texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]

# compute image features
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)  # normalize

# compute text features
inputs = processor(text=texts, padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)  # normalize

# compute image-text similarity scores
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)  # probs: [[0.0066, 0.0211, 0.0031, 0.9692]]
```

However, if you are not satisfied with only using the API, feel free to check our GitHub repo https://github.com/OFA-Sys/Chinese-CLIP for more details about training and inference.
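Since the repository is tagged `zero-shot-image-classification`, the same model can also be driven through the transformers pipeline. A minimal sketch, not from the card: the `hypothesis_template` override is an assumption chosen so the Chinese labels are passed to the text encoder as-is rather than wrapped in the pipeline's default English prompt.

```python
# A minimal sketch (not from the model card): zero-shot classification via the
# transformers pipeline, reusing the Pokémon image and Chinese labels from above.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-image-classification",
    model="OFA-Sys/chinese-clip-vit-large-patch14",
)
result = classifier(
    "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg",
    candidate_labels=["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"],
    hypothesis_template="{}",  # assumption: pass labels through instead of the default English template
)
print(result)  # list of {"label": ..., "score": ...}, sorted by score
```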
## Results

**MUGE Text-to-Image Retrieval**:

| Setup | Zero-shot R@1 | Zero-shot R@5 | Zero-shot R@10 | Zero-shot MR | Finetune R@1 | Finetune R@5 | Finetune R@10 | Finetune MR |
|---|---|---|---|---|---|---|---|---|
| Wukong | 42.7 | 69.0 | 78.0 | 63.2 | 52.7 | 77.9 | 85.6 | 72.1 |
| R2D2 | 49.5 | 75.7 | 83.2 | 69.5 | 60.1 | 82.9 | 89.4 | 77.5 |
| CN-CLIP | 63.0 | 84.1 | 89.2 | 78.8 | 68.9 | 88.7 | 93.1 | 83.6 |

**Flickr30K-CN Retrieval (Text-to-Image)**:

| Setup | Zero-shot R@1 | Zero-shot R@5 | Zero-shot R@10 | Finetune R@1 | Finetune R@5 | Finetune R@10 |
|---|---|---|---|---|---|---|
| Wukong | 51.7 | 78.9 | 86.3 | 77.4 | 94.5 | 97.0 |
| R2D2 | 60.9 | 86.8 | 92.7 | 84.4 | 96.7 | 98.4 |
| CN-CLIP | 71.2 | 91.4 | 95.5 | 83.8 | 96.9 | 98.6 |

**Flickr30K-CN Retrieval (Image-to-Text)**:

| Setup | Zero-shot R@1 | Zero-shot R@5 | Zero-shot R@10 | Finetune R@1 | Finetune R@5 | Finetune R@10 |
|---|---|---|---|---|---|---|
| Wukong | 76.1 | 94.8 | 97.5 | 92.7 | 99.1 | 99.6 |
| R2D2 | 77.6 | 96.7 | 98.9 | 95.6 | 99.8 | 100.0 |
| CN-CLIP | 81.6 | 97.5 | 98.8 | 95.3 | 99.7 | 100.0 |

**COCO-CN Retrieval (Text-to-Image)**:

| Setup | Zero-shot R@1 | Zero-shot R@5 | Zero-shot R@10 | Finetune R@1 | Finetune R@5 | Finetune R@10 |
|---|---|---|---|---|---|---|
| Wukong | 53.4 | 80.2 | 90.1 | 74.0 | 94.4 | 98.1 |
| R2D2 | 56.4 | 85.0 | 93.1 | 79.1 | 96.5 | 98.9 |
| CN-CLIP | 69.2 | 89.9 | 96.1 | 81.5 | 96.9 | 99.1 |

**COCO-CN Retrieval (Image-to-Text)**:

| Setup | Zero-shot R@1 | Zero-shot R@5 | Zero-shot R@10 | Finetune R@1 | Finetune R@5 | Finetune R@10 |
|---|---|---|---|---|---|---|
| Wukong | 55.2 | 81.0 | 90.6 | 73.3 | 94.0 | 98.0 |
| R2D2 | 63.3 | 89.3 | 95.7 | 79.3 | 97.1 | 98.7 |
| CN-CLIP | 63.0 | 86.6 | 92.9 | 83.5 | 97.3 | 99.2 |

**Zero-shot Image Classification**:

| Model | CIFAR10 | CIFAR100 | DTD | EuroSAT | FER | FGVC | KITTI | MNIST | PC | VOC |
|---|---|---|---|---|---|---|---|---|---|---|
| GIT | 88.5 | 61.1 | 42.9 | 43.4 | 41.4 | 6.7 | 22.1 | 68.9 | 50.0 | 80.2 |
| ALIGN | 94.9 | 76.8 | 66.1 | 52.1 | 50.8 | 25.0 | 41.2 | 74.0 | 55.2 | 83.0 |
| CLIP | 94.9 | 77.0 | 56.0 | 63.0 | 48.3 | 33.3 | 11.5 | 79.0 | 62.3 | 84.0 |
| Wukong | 95.4 | 77.1 | 40.9 | 50.3 | - | - | - | - | - | - |
| CN-CLIP | 96.0 | 79.7 | 51.2 | 52.0 | 55.1 | 26.2 | 49.9 | 79.4 | 63.5 | 84.9 |

## Citation

If you find Chinese CLIP helpful, feel free to cite our paper. Thanks for your support!

```
@article{chinese-clip,
  title={Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese},
  author={Yang, An and Pan, Junshu and Lin, Junyang and Men, Rui and Zhang, Yichang and Zhou, Jingren and Zhou, Chang},
  journal={arXiv preprint arXiv:2211.01335},
  year={2022}
}
```
**modelId:** stelterlab/Codestral-22B-v0.1-AWQ
**author:** stelterlab
**last_modified:** 2024-05-30T12:57:54Z
**downloads:** 6,320
**likes:** 0
**library_name:** transformers
**tags:** [ "transformers", "safetensors", "mistral", "text-generation", "code", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
**pipeline_tag:** text-generation
**createdAt:** 2024-05-30T12:22:32Z
---
license: other
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
tags:
- code
language:
- code
---

**This is a quantized version of Mistral AI's [Codestral-22B-v0.1](https://huggingface.co/mistral-community/Codestral-22B-v0.1) (see below).**

**Quantization done with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ/).**

# Model Card for Codestral-22B-v0.1

Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones such as Python, Java, C, C++, JavaScript, and Bash (more details in the [blog post](https://mistral.ai/news/codestral/)). The model can be queried:

- As instruct, for instance to answer any question about a code snippet (write documentation, explain, factorize) or to generate code following specific instructions
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software-development add-ons like those in VS Code)

## Installation

It is recommended to use `mistralai/Codestral-22B-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference).

```
pip install mistral_inference
```

## Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Codestral-22B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Codestral-22B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```

### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment.

```
mistral-chat $HOME/mistral_models/Codestral-22B-v0.1 --instruct --max_tokens 256
```

This will generate an answer to "Write me a function that computes fibonacci in Rust" along the following lines:

```
Sure, here's a simple implementation of a function that computes the Fibonacci sequence in Rust. This function takes an integer `n` as an argument and returns the `n`th Fibonacci number.

fn fibonacci(n: u32) -> u32 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    let n = 10;
    println!("The {}th Fibonacci number is: {}", n, fibonacci(n));
}

This function uses recursion to calculate the Fibonacci number. However, it's not the most efficient solution because it performs a lot of redundant calculations. A more efficient solution would use a loop to iteratively calculate the Fibonacci numbers.
```
### Fill-in-the-middle (FIM)

After installing `mistral_inference` and running `pip install --upgrade mistral_common` to make sure you have `mistral_common >= 1.2` installed:

```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.request import FIMRequest

tokenizer = MistralTokenizer.v3()
model = Transformer.from_folder("~/codestral-22B-240529")

prefix = """def add("""
suffix = """ return sum"""

request = FIMRequest(prompt=prefix, suffix=suffix)

tokens = tokenizer.encode_fim(request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

middle = result.split(suffix)[0].strip()
print(middle)
```

This should give something along the following lines:

```
num1, num2):
    # Add two numbers
    sum = num1 + num2
    # return the sum
```

## Limitations

The Codestral-22B-v0.1 does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## License

Codestral-22B-v0.1 is released under the `MNPL-0.1` license.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Jean-Malo Delignon, Jia Li, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickael Seznec, Nicolas Schuhl, Patrick von Platen, Romain Sauvestre, Pierre Stock, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Thibault Schueller, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall
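The card's usage instructions target the original full-precision checkpoint via `mistral-inference`. For the AWQ weights hosted in this repository, a transformers-based load is the more usual route; below is a minimal sketch, assuming `autoawq` is installed alongside transformers (and `accelerate` for `device_map`). The prompt and generation settings are illustrative, not from the card.

```python
# A minimal sketch (not from the card): loading the AWQ checkpoint hosted in this
# repo with transformers. Requires the `autoawq` package for AWQ deserialization.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stelterlab/Codestral-22B-v0.1-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "[INST] Write me a function that computes fibonacci in Rust [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```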
**modelId:** jinaai/jina-embedding-b-en-v1
**author:** jinaai
**last_modified:** 2023-10-13T12:44:18Z
**downloads:** 6,316
**likes:** 6
**library_name:** sentence-transformers
**tags:** [ "sentence-transformers", "pytorch", "t5", "finetuner", "feature-extraction", "sentence-similarity", "mteb", "custom_code", "en", "dataset:jinaai/negation-dataset", "arxiv:2307.11224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
**pipeline_tag:** sentence-similarity
**createdAt:** 2023-07-07T07:51:59Z
--- pipeline_tag: sentence-similarity tags: - finetuner - sentence-transformers - feature-extraction - sentence-similarity - mteb datasets: - jinaai/negation-dataset language: en license: apache-2.0 model-index: - name: jina-embedding-b-en-v1 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 66.73134328358208 - type: ap value: 28.30575908745204 - type: f1 value: 60.02420130946191 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 67.6068 - type: ap value: 63.5899352938589 - type: f1 value: 65.64285334357656 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 31.178 - type: f1 value: 29.68460843733487 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 24.964 - type: map_at_10 value: 40.217999999999996 - type: map_at_100 value: 41.263 - type: map_at_1000 value: 41.277 - type: map_at_3 value: 35.183 - type: map_at_5 value: 38.045 - type: mrr_at_1 value: 25.107000000000003 - type: mrr_at_10 value: 40.272999999999996 - type: mrr_at_100 value: 41.318 - type: mrr_at_1000 value: 41.333 - type: mrr_at_3 value: 35.242000000000004 - type: mrr_at_5 value: 38.101 - type: ndcg_at_1 value: 24.964 - type: ndcg_at_10 value: 49.006 - type: ndcg_at_100 value: 53.446000000000005 - type: ndcg_at_1000 value: 53.813 - type: ndcg_at_3 value: 38.598 - type: ndcg_at_5 value: 43.74 - type: precision_at_1 value: 24.964 - type: precision_at_10 value: 7.724 - type: precision_at_100 value: 0.966 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 16.169 - type: precision_at_5 value: 12.191 - type: recall_at_1 value: 24.964 - type: recall_at_10 value: 77.24 - type: recall_at_100 value: 96.586 - type: recall_at_1000 value: 99.431 - type: recall_at_3 value: 48.506 - type: recall_at_5 value: 60.953 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 39.25203906042786 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 29.07648348376354 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 62.4029266143623 - type: mrr value: 75.45750340764191 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 85.92280995704714 - type: cos_sim_spearman value: 83.58082010833608 - type: euclidean_pearson value: 48.64744162695948 - type: euclidean_spearman value: 48.817377397301556 - type: manhattan_pearson value: 48.87684776623195 - type: manhattan_spearman value: 48.94268145725884 - task: type: Classification dataset: type: 
mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.05519480519482 - type: f1 value: 83.94978356890618 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 32.2033276486685 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 26.631954164406014 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.625 - type: map_at_10 value: 40.037 - type: map_at_100 value: 41.52 - type: map_at_1000 value: 41.654 - type: map_at_3 value: 36.818 - type: map_at_5 value: 38.426 - type: mrr_at_1 value: 35.336 - type: mrr_at_10 value: 45.395 - type: mrr_at_100 value: 46.221000000000004 - type: mrr_at_1000 value: 46.264 - type: mrr_at_3 value: 42.823 - type: mrr_at_5 value: 44.204 - type: ndcg_at_1 value: 35.336 - type: ndcg_at_10 value: 46.326 - type: ndcg_at_100 value: 51.795 - type: ndcg_at_1000 value: 53.834 - type: ndcg_at_3 value: 41.299 - type: ndcg_at_5 value: 43.247 - type: precision_at_1 value: 35.336 - type: precision_at_10 value: 8.627 - type: precision_at_100 value: 1.428 - type: precision_at_1000 value: 0.197 - type: precision_at_3 value: 19.647000000000002 - type: precision_at_5 value: 13.733999999999998 - type: recall_at_1 value: 29.625 - type: recall_at_10 value: 59.165 - type: recall_at_100 value: 81.675 - type: recall_at_1000 value: 94.17 - type: recall_at_3 value: 44.485 - type: recall_at_5 value: 50.198 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.687 - type: map_at_10 value: 36.062 - type: map_at_100 value: 37.263000000000005 - type: map_at_1000 value: 37.397999999999996 - type: map_at_3 value: 32.967 - type: map_at_5 value: 34.75 - type: mrr_at_1 value: 33.885 - type: mrr_at_10 value: 42.632999999999996 - type: mrr_at_100 value: 43.305 - type: mrr_at_1000 value: 43.354 - type: mrr_at_3 value: 39.958 - type: mrr_at_5 value: 41.63 - type: ndcg_at_1 value: 33.885 - type: ndcg_at_10 value: 42.001 - type: ndcg_at_100 value: 46.436 - type: ndcg_at_1000 value: 48.774 - type: ndcg_at_3 value: 37.183 - type: ndcg_at_5 value: 39.605000000000004 - type: precision_at_1 value: 33.885 - type: precision_at_10 value: 7.962 - type: precision_at_100 value: 1.283 - type: precision_at_1000 value: 0.18 - type: precision_at_3 value: 17.855999999999998 - type: precision_at_5 value: 13.083 - type: recall_at_1 value: 26.687 - type: recall_at_10 value: 52.75 - type: recall_at_100 value: 71.324 - type: recall_at_1000 value: 86.356 - type: recall_at_3 value: 38.83 - type: recall_at_5 value: 45.23 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 34.02 - type: map_at_10 value: 45.751999999999995 - type: map_at_100 value: 46.867 - type: map_at_1000 value: 46.93 - type: map_at_3 value: 42.409 - type: map_at_5 value: 44.464999999999996 - type: mrr_at_1 value: 38.307 - type: mrr_at_10 value: 48.718 - type: mrr_at_100 value: 49.509 - 
type: mrr_at_1000 value: 49.542 - type: mrr_at_3 value: 46.007999999999996 - type: mrr_at_5 value: 47.766999999999996 - type: ndcg_at_1 value: 38.307 - type: ndcg_at_10 value: 51.666999999999994 - type: ndcg_at_100 value: 56.242000000000004 - type: ndcg_at_1000 value: 57.477999999999994 - type: ndcg_at_3 value: 45.912 - type: ndcg_at_5 value: 49.106 - type: precision_at_1 value: 38.307 - type: precision_at_10 value: 8.476 - type: precision_at_100 value: 1.176 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 20.522000000000002 - type: precision_at_5 value: 14.557999999999998 - type: recall_at_1 value: 34.02 - type: recall_at_10 value: 66.046 - type: recall_at_100 value: 85.817 - type: recall_at_1000 value: 94.453 - type: recall_at_3 value: 51.059 - type: recall_at_5 value: 58.667 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.939 - type: map_at_10 value: 32.627 - type: map_at_100 value: 33.617999999999995 - type: map_at_1000 value: 33.701 - type: map_at_3 value: 30.11 - type: map_at_5 value: 31.380000000000003 - type: mrr_at_1 value: 25.989 - type: mrr_at_10 value: 34.655 - type: mrr_at_100 value: 35.502 - type: mrr_at_1000 value: 35.563 - type: mrr_at_3 value: 32.109 - type: mrr_at_5 value: 33.426 - type: ndcg_at_1 value: 25.989 - type: ndcg_at_10 value: 37.657000000000004 - type: ndcg_at_100 value: 42.467 - type: ndcg_at_1000 value: 44.677 - type: ndcg_at_3 value: 32.543 - type: ndcg_at_5 value: 34.74 - type: precision_at_1 value: 25.989 - type: precision_at_10 value: 5.876 - type: precision_at_100 value: 0.8710000000000001 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 13.861 - type: precision_at_5 value: 9.626999999999999 - type: recall_at_1 value: 23.939 - type: recall_at_10 value: 51.28 - type: recall_at_100 value: 73.428 - type: recall_at_1000 value: 90.309 - type: recall_at_3 value: 37.245 - type: recall_at_5 value: 42.541000000000004 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.082 - type: map_at_10 value: 22.486 - type: map_at_100 value: 23.687 - type: map_at_1000 value: 23.807000000000002 - type: map_at_3 value: 20.076 - type: map_at_5 value: 21.362000000000002 - type: mrr_at_1 value: 18.532 - type: mrr_at_10 value: 26.605 - type: mrr_at_100 value: 27.628999999999998 - type: mrr_at_1000 value: 27.698 - type: mrr_at_3 value: 23.964 - type: mrr_at_5 value: 25.319000000000003 - type: ndcg_at_1 value: 18.532 - type: ndcg_at_10 value: 27.474999999999998 - type: ndcg_at_100 value: 33.357 - type: ndcg_at_1000 value: 36.361 - type: ndcg_at_3 value: 22.851 - type: ndcg_at_5 value: 24.87 - type: precision_at_1 value: 18.532 - type: precision_at_10 value: 5.210999999999999 - type: precision_at_100 value: 0.9329999999999999 - type: precision_at_1000 value: 0.134 - type: precision_at_3 value: 11.235000000000001 - type: precision_at_5 value: 8.134 - type: recall_at_1 value: 15.082 - type: recall_at_10 value: 38.759 - type: recall_at_100 value: 64.621 - type: recall_at_1000 value: 86.162 - type: recall_at_3 value: 26.055 - type: recall_at_5 value: 31.208999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.759999999999998 - type: map_at_10 value: 33.706 - type: map_at_100 
value: 35.0 - type: map_at_1000 value: 35.134 - type: map_at_3 value: 30.789 - type: map_at_5 value: 32.427 - type: mrr_at_1 value: 29.548000000000002 - type: mrr_at_10 value: 38.521 - type: mrr_at_100 value: 39.432 - type: mrr_at_1000 value: 39.494 - type: mrr_at_3 value: 35.691 - type: mrr_at_5 value: 37.424 - type: ndcg_at_1 value: 29.548000000000002 - type: ndcg_at_10 value: 39.301 - type: ndcg_at_100 value: 44.907000000000004 - type: ndcg_at_1000 value: 47.494 - type: ndcg_at_3 value: 34.08 - type: ndcg_at_5 value: 36.649 - type: precision_at_1 value: 29.548000000000002 - type: precision_at_10 value: 7.084 - type: precision_at_100 value: 1.169 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 15.881 - type: precision_at_5 value: 11.53 - type: recall_at_1 value: 24.759999999999998 - type: recall_at_10 value: 51.202000000000005 - type: recall_at_100 value: 74.542 - type: recall_at_1000 value: 91.669 - type: recall_at_3 value: 36.892 - type: recall_at_5 value: 43.333 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.247999999999998 - type: map_at_10 value: 31.878 - type: map_at_100 value: 33.135 - type: map_at_1000 value: 33.263999999999996 - type: map_at_3 value: 29.406 - type: map_at_5 value: 30.602 - type: mrr_at_1 value: 28.767 - type: mrr_at_10 value: 36.929 - type: mrr_at_100 value: 37.844 - type: mrr_at_1000 value: 37.913000000000004 - type: mrr_at_3 value: 34.589 - type: mrr_at_5 value: 35.908 - type: ndcg_at_1 value: 28.767 - type: ndcg_at_10 value: 37.172 - type: ndcg_at_100 value: 42.842 - type: ndcg_at_1000 value: 45.534 - type: ndcg_at_3 value: 32.981 - type: ndcg_at_5 value: 34.628 - type: precision_at_1 value: 28.767 - type: precision_at_10 value: 6.678000000000001 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.155 - type: precision_at_3 value: 15.715000000000002 - type: precision_at_5 value: 10.913 - type: recall_at_1 value: 23.247999999999998 - type: recall_at_10 value: 48.16 - type: recall_at_100 value: 72.753 - type: recall_at_1000 value: 90.8 - type: recall_at_3 value: 35.961999999999996 - type: recall_at_5 value: 40.504 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.825583333333334 - type: map_at_10 value: 32.2845 - type: map_at_100 value: 33.48566666666667 - type: map_at_1000 value: 33.60833333333333 - type: map_at_3 value: 29.604916666666664 - type: map_at_5 value: 31.015333333333334 - type: mrr_at_1 value: 27.850916666666663 - type: mrr_at_10 value: 36.122416666666666 - type: mrr_at_100 value: 37.01275 - type: mrr_at_1000 value: 37.07566666666667 - type: mrr_at_3 value: 33.665749999999996 - type: mrr_at_5 value: 35.00916666666667 - type: ndcg_at_1 value: 27.850916666666663 - type: ndcg_at_10 value: 37.47625 - type: ndcg_at_100 value: 42.74433333333334 - type: ndcg_at_1000 value: 45.21991666666667 - type: ndcg_at_3 value: 32.70916666666667 - type: ndcg_at_5 value: 34.80658333333333 - type: precision_at_1 value: 27.850916666666663 - type: precision_at_10 value: 6.5761666666666665 - type: precision_at_100 value: 1.0879999999999999 - type: precision_at_1000 value: 0.15058333333333332 - type: precision_at_3 value: 14.933833333333336 - type: precision_at_5 value: 10.607249999999999 - type: recall_at_1 value: 23.825583333333334 - type: recall_at_10 value: 
49.100500000000004 - type: recall_at_100 value: 72.21133333333334 - type: recall_at_1000 value: 89.34791666666666 - type: recall_at_3 value: 35.90525 - type: recall_at_5 value: 41.24583333333334 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.343 - type: map_at_10 value: 27.313 - type: map_at_100 value: 28.316999999999997 - type: map_at_1000 value: 28.406 - type: map_at_3 value: 25.06 - type: map_at_5 value: 26.409 - type: mrr_at_1 value: 23.313 - type: mrr_at_10 value: 29.467 - type: mrr_at_100 value: 30.348999999999997 - type: mrr_at_1000 value: 30.42 - type: mrr_at_3 value: 27.173000000000002 - type: mrr_at_5 value: 28.461 - type: ndcg_at_1 value: 23.313 - type: ndcg_at_10 value: 31.183 - type: ndcg_at_100 value: 36.252 - type: ndcg_at_1000 value: 38.582 - type: ndcg_at_3 value: 26.838 - type: ndcg_at_5 value: 29.042 - type: precision_at_1 value: 23.313 - type: precision_at_10 value: 4.9079999999999995 - type: precision_at_100 value: 0.808 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 11.299 - type: precision_at_5 value: 8.097999999999999 - type: recall_at_1 value: 21.343 - type: recall_at_10 value: 41.047 - type: recall_at_100 value: 64.372 - type: recall_at_1000 value: 81.499 - type: recall_at_3 value: 29.337000000000003 - type: recall_at_5 value: 34.756 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.595 - type: map_at_10 value: 23.433 - type: map_at_100 value: 24.578 - type: map_at_1000 value: 24.709999999999997 - type: map_at_3 value: 21.268 - type: map_at_5 value: 22.393 - type: mrr_at_1 value: 20.131 - type: mrr_at_10 value: 27.026 - type: mrr_at_100 value: 28.003 - type: mrr_at_1000 value: 28.083999999999996 - type: mrr_at_3 value: 24.966 - type: mrr_at_5 value: 26.064999999999998 - type: ndcg_at_1 value: 20.131 - type: ndcg_at_10 value: 27.846 - type: ndcg_at_100 value: 33.318999999999996 - type: ndcg_at_1000 value: 36.403 - type: ndcg_at_3 value: 23.883 - type: ndcg_at_5 value: 25.595000000000002 - type: precision_at_1 value: 20.131 - type: precision_at_10 value: 5.034000000000001 - type: precision_at_100 value: 0.9079999999999999 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 11.23 - type: precision_at_5 value: 8.032 - type: recall_at_1 value: 16.595 - type: recall_at_10 value: 37.576 - type: recall_at_100 value: 62.044 - type: recall_at_1000 value: 83.97 - type: recall_at_3 value: 26.631 - type: recall_at_5 value: 31.002000000000002 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.85 - type: map_at_10 value: 32.762 - type: map_at_100 value: 33.896 - type: map_at_1000 value: 34.006 - type: map_at_3 value: 29.965000000000003 - type: map_at_5 value: 31.485999999999997 - type: mrr_at_1 value: 28.731 - type: mrr_at_10 value: 36.504999999999995 - type: mrr_at_100 value: 37.364999999999995 - type: mrr_at_1000 value: 37.431 - type: mrr_at_3 value: 34.033 - type: mrr_at_5 value: 35.4 - type: ndcg_at_1 value: 28.731 - type: ndcg_at_10 value: 37.788 - type: ndcg_at_100 value: 43.1 - type: ndcg_at_1000 value: 45.623999999999995 - type: ndcg_at_3 value: 32.717 - type: ndcg_at_5 value: 35.024 - type: precision_at_1 value: 28.731 - type: precision_at_10 value: 6.371 - 
type: precision_at_100 value: 1.02 - type: precision_at_1000 value: 0.135 - type: precision_at_3 value: 14.521 - type: precision_at_5 value: 10.41 - type: recall_at_1 value: 24.85 - type: recall_at_10 value: 49.335 - type: recall_at_100 value: 72.792 - type: recall_at_1000 value: 90.525 - type: recall_at_3 value: 35.698 - type: recall_at_5 value: 41.385 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.016000000000002 - type: map_at_10 value: 32.126 - type: map_at_100 value: 33.786 - type: map_at_1000 value: 34.012 - type: map_at_3 value: 29.256 - type: map_at_5 value: 30.552 - type: mrr_at_1 value: 27.272999999999996 - type: mrr_at_10 value: 35.967 - type: mrr_at_100 value: 37.082 - type: mrr_at_1000 value: 37.146 - type: mrr_at_3 value: 33.531 - type: mrr_at_5 value: 34.697 - type: ndcg_at_1 value: 27.272999999999996 - type: ndcg_at_10 value: 37.945 - type: ndcg_at_100 value: 43.928 - type: ndcg_at_1000 value: 46.772999999999996 - type: ndcg_at_3 value: 33.111000000000004 - type: ndcg_at_5 value: 34.794000000000004 - type: precision_at_1 value: 27.272999999999996 - type: precision_at_10 value: 7.53 - type: precision_at_100 value: 1.512 - type: precision_at_1000 value: 0.241 - type: precision_at_3 value: 15.547 - type: precision_at_5 value: 11.146 - type: recall_at_1 value: 23.016000000000002 - type: recall_at_10 value: 49.576 - type: recall_at_100 value: 75.74600000000001 - type: recall_at_1000 value: 94.069 - type: recall_at_3 value: 35.964 - type: recall_at_5 value: 40.455999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.742 - type: map_at_10 value: 29.232000000000003 - type: map_at_100 value: 30.160999999999998 - type: map_at_1000 value: 30.278 - type: map_at_3 value: 27.134999999999998 - type: map_at_5 value: 27.932000000000002 - type: mrr_at_1 value: 24.399 - type: mrr_at_10 value: 31.048 - type: mrr_at_100 value: 31.912000000000003 - type: mrr_at_1000 value: 31.999 - type: mrr_at_3 value: 29.144 - type: mrr_at_5 value: 29.809 - type: ndcg_at_1 value: 24.399 - type: ndcg_at_10 value: 33.354 - type: ndcg_at_100 value: 38.287 - type: ndcg_at_1000 value: 41.105000000000004 - type: ndcg_at_3 value: 29.112 - type: ndcg_at_5 value: 30.379 - type: precision_at_1 value: 24.399 - type: precision_at_10 value: 5.157 - type: precision_at_100 value: 0.828 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 11.892 - type: precision_at_5 value: 8.022 - type: recall_at_1 value: 22.742 - type: recall_at_10 value: 44.31 - type: recall_at_100 value: 67.422 - type: recall_at_1000 value: 88.193 - type: recall_at_3 value: 32.705 - type: recall_at_5 value: 35.669000000000004 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 9.067 - type: map_at_10 value: 14.821000000000002 - type: map_at_100 value: 16.195 - type: map_at_1000 value: 16.359 - type: map_at_3 value: 12.666 - type: map_at_5 value: 13.675999999999998 - type: mrr_at_1 value: 20.326 - type: mrr_at_10 value: 29.798000000000002 - type: mrr_at_100 value: 30.875000000000004 - type: mrr_at_1000 value: 30.928 - type: mrr_at_3 value: 26.678 - type: mrr_at_5 value: 28.433000000000003 - type: ndcg_at_1 value: 20.326 - type: ndcg_at_10 value: 21.477 - type: 
ndcg_at_100 value: 27.637 - type: ndcg_at_1000 value: 30.953000000000003 - type: ndcg_at_3 value: 17.456 - type: ndcg_at_5 value: 18.789 - type: precision_at_1 value: 20.326 - type: precision_at_10 value: 6.482 - type: precision_at_100 value: 1.302 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 12.53 - type: precision_at_5 value: 9.603 - type: recall_at_1 value: 9.067 - type: recall_at_10 value: 26.246000000000002 - type: recall_at_100 value: 47.837 - type: recall_at_1000 value: 66.637 - type: recall_at_3 value: 16.468 - type: recall_at_5 value: 20.088 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 7.563000000000001 - type: map_at_10 value: 15.22 - type: map_at_100 value: 20.048 - type: map_at_1000 value: 21.17 - type: map_at_3 value: 11.627 - type: map_at_5 value: 13.239 - type: mrr_at_1 value: 56.25 - type: mrr_at_10 value: 64.846 - type: mrr_at_100 value: 65.405 - type: mrr_at_1000 value: 65.41799999999999 - type: mrr_at_3 value: 63.125 - type: mrr_at_5 value: 64.1 - type: ndcg_at_1 value: 45.0 - type: ndcg_at_10 value: 32.437 - type: ndcg_at_100 value: 35.483 - type: ndcg_at_1000 value: 42.186 - type: ndcg_at_3 value: 37.297000000000004 - type: ndcg_at_5 value: 34.697 - type: precision_at_1 value: 56.25 - type: precision_at_10 value: 25.15 - type: precision_at_100 value: 7.539999999999999 - type: precision_at_1000 value: 1.678 - type: precision_at_3 value: 40.666999999999994 - type: precision_at_5 value: 33.45 - type: recall_at_1 value: 7.563000000000001 - type: recall_at_10 value: 19.969 - type: recall_at_100 value: 40.113 - type: recall_at_1000 value: 61.72299999999999 - type: recall_at_3 value: 12.950999999999999 - type: recall_at_5 value: 15.690999999999999 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 44.675000000000004 - type: f1 value: 40.779372586075105 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 57.406 - type: map_at_10 value: 67.69500000000001 - type: map_at_100 value: 68.08 - type: map_at_1000 value: 68.095 - type: map_at_3 value: 65.688 - type: map_at_5 value: 66.93 - type: mrr_at_1 value: 61.941 - type: mrr_at_10 value: 72.513 - type: mrr_at_100 value: 72.83699999999999 - type: mrr_at_1000 value: 72.844 - type: mrr_at_3 value: 70.60499999999999 - type: mrr_at_5 value: 71.807 - type: ndcg_at_1 value: 61.941 - type: ndcg_at_10 value: 73.29 - type: ndcg_at_100 value: 74.96300000000001 - type: ndcg_at_1000 value: 75.28200000000001 - type: ndcg_at_3 value: 69.491 - type: ndcg_at_5 value: 71.573 - type: precision_at_1 value: 61.941 - type: precision_at_10 value: 9.388 - type: precision_at_100 value: 1.0290000000000001 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 27.423 - type: precision_at_5 value: 17.627000000000002 - type: recall_at_1 value: 57.406 - type: recall_at_10 value: 85.975 - type: recall_at_100 value: 93.29899999999999 - type: recall_at_1000 value: 95.531 - type: recall_at_3 value: 75.624 - type: recall_at_5 value: 80.78999999999999 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 16.314999999999998 - type: map_at_10 value: 26.678 - type: map_at_100 value: 28.322000000000003 - type: 
map_at_1000 value: 28.519 - type: map_at_3 value: 23.105 - type: map_at_5 value: 24.808 - type: mrr_at_1 value: 33.333 - type: mrr_at_10 value: 41.453 - type: mrr_at_100 value: 42.339 - type: mrr_at_1000 value: 42.39 - type: mrr_at_3 value: 38.863 - type: mrr_at_5 value: 40.159 - type: ndcg_at_1 value: 33.333 - type: ndcg_at_10 value: 34.062 - type: ndcg_at_100 value: 40.595 - type: ndcg_at_1000 value: 44.124 - type: ndcg_at_3 value: 30.689 - type: ndcg_at_5 value: 31.255 - type: precision_at_1 value: 33.333 - type: precision_at_10 value: 9.722 - type: precision_at_100 value: 1.6480000000000001 - type: precision_at_1000 value: 0.22699999999999998 - type: precision_at_3 value: 20.936 - type: precision_at_5 value: 15.154 - type: recall_at_1 value: 16.314999999999998 - type: recall_at_10 value: 41.221000000000004 - type: recall_at_100 value: 65.857 - type: recall_at_1000 value: 87.327 - type: recall_at_3 value: 27.435 - type: recall_at_5 value: 32.242 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 31.978 - type: map_at_10 value: 43.784 - type: map_at_100 value: 44.547 - type: map_at_1000 value: 44.614 - type: map_at_3 value: 41.317 - type: map_at_5 value: 42.812 - type: mrr_at_1 value: 63.956999999999994 - type: mrr_at_10 value: 70.502 - type: mrr_at_100 value: 70.845 - type: mrr_at_1000 value: 70.865 - type: mrr_at_3 value: 69.192 - type: mrr_at_5 value: 69.994 - type: ndcg_at_1 value: 63.956999999999994 - type: ndcg_at_10 value: 52.782 - type: ndcg_at_100 value: 55.78999999999999 - type: ndcg_at_1000 value: 57.289 - type: ndcg_at_3 value: 48.864000000000004 - type: ndcg_at_5 value: 50.964 - type: precision_at_1 value: 63.956999999999994 - type: precision_at_10 value: 10.809000000000001 - type: precision_at_100 value: 1.319 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 30.2 - type: precision_at_5 value: 19.787 - type: recall_at_1 value: 31.978 - type: recall_at_10 value: 54.045 - type: recall_at_100 value: 65.928 - type: recall_at_1000 value: 75.976 - type: recall_at_3 value: 45.300000000000004 - type: recall_at_5 value: 49.467 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 63.8708 - type: ap value: 59.02002684158838 - type: f1 value: 63.650055896985315 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 19.834 - type: map_at_10 value: 31.317 - type: map_at_100 value: 32.576 - type: map_at_1000 value: 32.631 - type: map_at_3 value: 27.728 - type: map_at_5 value: 29.720000000000002 - type: mrr_at_1 value: 20.43 - type: mrr_at_10 value: 31.868999999999996 - type: mrr_at_100 value: 33.074999999999996 - type: mrr_at_1000 value: 33.123999999999995 - type: mrr_at_3 value: 28.333000000000002 - type: mrr_at_5 value: 30.305 - type: ndcg_at_1 value: 20.43 - type: ndcg_at_10 value: 37.769000000000005 - type: ndcg_at_100 value: 43.924 - type: ndcg_at_1000 value: 45.323 - type: ndcg_at_3 value: 30.422 - type: ndcg_at_5 value: 33.98 - type: precision_at_1 value: 20.43 - type: precision_at_10 value: 6.027 - type: precision_at_100 value: 0.9119999999999999 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 12.985 - type: precision_at_5 value: 9.593 - type: recall_at_1 value: 19.834 - type: recall_at_10 value: 
57.647000000000006 - type: recall_at_100 value: 86.276 - type: recall_at_1000 value: 97.065 - type: recall_at_3 value: 37.616 - type: recall_at_5 value: 46.171 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.52530779753762 - type: f1 value: 91.4004687820246 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 72.82717738258093 - type: f1 value: 56.791387113030346 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.09280430396772 - type: f1 value: 68.92843467363518 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.2542030934768 - type: f1 value: 76.22211319699834 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 29.604407852989457 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 25.011863718751183 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.55552172383111 - type: mrr value: 32.65475731770242 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 4.968 - type: map_at_10 value: 10.703999999999999 - type: map_at_100 value: 13.316 - type: map_at_1000 value: 14.674000000000001 - type: map_at_3 value: 7.809000000000001 - type: map_at_5 value: 9.268 - type: mrr_at_1 value: 41.796 - type: mrr_at_10 value: 50.558 - type: mrr_at_100 value: 51.125 - type: mrr_at_1000 value: 51.184 - type: mrr_at_3 value: 48.349 - type: mrr_at_5 value: 49.572 - type: ndcg_at_1 value: 39.783 - type: ndcg_at_10 value: 30.375999999999998 - type: ndcg_at_100 value: 27.648 - type: ndcg_at_1000 value: 36.711 - type: ndcg_at_3 value: 35.053 - type: ndcg_at_5 value: 33.278999999999996 - type: precision_at_1 value: 41.796 - type: precision_at_10 value: 22.663 - type: precision_at_100 value: 7.210999999999999 - type: precision_at_1000 value: 1.984 - type: precision_at_3 value: 33.127 - type: precision_at_5 value: 29.102 - type: recall_at_1 value: 4.968 - type: recall_at_10 value: 14.469999999999999 - type: recall_at_100 value: 28.188000000000002 - type: recall_at_1000 value: 60.769 - type: recall_at_3 value: 8.737 - type: recall_at_5 value: 11.539000000000001 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 26.958 - type: map_at_10 value: 40.6 - type: map_at_100 value: 41.754000000000005 - type: map_at_1000 value: 41.792 - type: map_at_3 value: 36.521 - type: map_at_5 value: 38.866 - type: mrr_at_1 value: 30.330000000000002 - type: 
mrr_at_10 value: 43.013 - type: mrr_at_100 value: 43.89 - type: mrr_at_1000 value: 43.917 - type: mrr_at_3 value: 39.489000000000004 - type: mrr_at_5 value: 41.504999999999995 - type: ndcg_at_1 value: 30.330000000000002 - type: ndcg_at_10 value: 47.878 - type: ndcg_at_100 value: 52.761 - type: ndcg_at_1000 value: 53.69500000000001 - type: ndcg_at_3 value: 40.061 - type: ndcg_at_5 value: 43.980000000000004 - type: precision_at_1 value: 30.330000000000002 - type: precision_at_10 value: 8.048 - type: precision_at_100 value: 1.076 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 18.299000000000003 - type: precision_at_5 value: 13.25 - type: recall_at_1 value: 26.958 - type: recall_at_10 value: 67.72399999999999 - type: recall_at_100 value: 89.02600000000001 - type: recall_at_1000 value: 96.029 - type: recall_at_3 value: 47.332 - type: recall_at_5 value: 56.36600000000001 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 69.926 - type: map_at_10 value: 83.797 - type: map_at_100 value: 84.42699999999999 - type: map_at_1000 value: 84.446 - type: map_at_3 value: 80.78 - type: map_at_5 value: 82.669 - type: mrr_at_1 value: 80.44 - type: mrr_at_10 value: 86.79 - type: mrr_at_100 value: 86.90299999999999 - type: mrr_at_1000 value: 86.904 - type: mrr_at_3 value: 85.753 - type: mrr_at_5 value: 86.478 - type: ndcg_at_1 value: 80.44 - type: ndcg_at_10 value: 87.634 - type: ndcg_at_100 value: 88.9 - type: ndcg_at_1000 value: 89.03 - type: ndcg_at_3 value: 84.622 - type: ndcg_at_5 value: 86.29 - type: precision_at_1 value: 80.44 - type: precision_at_10 value: 13.305 - type: precision_at_100 value: 1.524 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 36.957 - type: precision_at_5 value: 24.328 - type: recall_at_1 value: 69.926 - type: recall_at_10 value: 94.99300000000001 - type: recall_at_100 value: 99.345 - type: recall_at_1000 value: 99.97 - type: recall_at_3 value: 86.465 - type: recall_at_5 value: 91.121 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 42.850644235471144 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 52.547875398320734 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.328 - type: map_at_10 value: 10.479 - type: map_at_100 value: 12.25 - type: map_at_1000 value: 12.522 - type: map_at_3 value: 7.548000000000001 - type: map_at_5 value: 9.039 - type: mrr_at_1 value: 21.3 - type: mrr_at_10 value: 30.678 - type: mrr_at_100 value: 31.77 - type: mrr_at_1000 value: 31.831 - type: mrr_at_3 value: 27.500000000000004 - type: mrr_at_5 value: 29.375 - type: ndcg_at_1 value: 21.3 - type: ndcg_at_10 value: 17.626 - type: ndcg_at_100 value: 25.03 - type: ndcg_at_1000 value: 30.055 - type: ndcg_at_3 value: 16.744999999999997 - type: ndcg_at_5 value: 14.729999999999999 - type: precision_at_1 value: 21.3 - type: precision_at_10 value: 9.09 - type: precision_at_100 value: 1.989 - type: precision_at_1000 value: 0.32 - type: precision_at_3 value: 15.467 - type: precision_at_5 value: 12.879999999999999 - type: recall_at_1 value: 4.328 - type: recall_at_10 value: 
18.412 - type: recall_at_100 value: 40.363 - type: recall_at_1000 value: 64.997 - type: recall_at_3 value: 9.408 - type: recall_at_5 value: 13.048000000000002 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.1338589503896 - type: cos_sim_spearman value: 79.1378154534123 - type: euclidean_pearson value: 73.17857462509251 - type: euclidean_spearman value: 70.79268955610539 - type: manhattan_pearson value: 72.8280251705823 - type: manhattan_spearman value: 70.60323787229834 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.21604641858598 - type: cos_sim_spearman value: 75.06080146054282 - type: euclidean_pearson value: 69.44429285856924 - type: euclidean_spearman value: 58.240130690046456 - type: manhattan_pearson value: 69.07597314234852 - type: manhattan_spearman value: 58.08224335836159 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 80.2252849321165 - type: cos_sim_spearman value: 80.85907200101076 - type: euclidean_pearson value: 70.85619832878055 - type: euclidean_spearman value: 71.59417341887324 - type: manhattan_pearson value: 70.55842192345895 - type: manhattan_spearman value: 71.30332994715893 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 80.50469360654135 - type: cos_sim_spearman value: 76.12917164308409 - type: euclidean_pearson value: 70.4070213910491 - type: euclidean_spearman value: 66.97320451942113 - type: manhattan_pearson value: 70.24834290119863 - type: manhattan_spearman value: 66.9047074173091 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 84.70140350059746 - type: cos_sim_spearman value: 85.55427877110485 - type: euclidean_pearson value: 63.4780453371435 - type: euclidean_spearman value: 64.65485395077273 - type: manhattan_pearson value: 63.64869846572011 - type: manhattan_spearman value: 64.87219311596813 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 79.4416477676503 - type: cos_sim_spearman value: 81.2094925260351 - type: euclidean_pearson value: 68.372257553367 - type: euclidean_spearman value: 69.47792807911692 - type: manhattan_pearson value: 68.17773583183664 - type: manhattan_spearman value: 69.31505452732998 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.94688403351994 - type: cos_sim_spearman value: 88.97626967707933 - type: euclidean_pearson value: 74.09942728422159 - type: euclidean_spearman value: 72.91022362666948 - type: manhattan_pearson value: 74.11262432880199 - type: manhattan_spearman value: 72.82115894578564 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: 
cos_sim_pearson value: 67.42605802805606 - type: cos_sim_spearman value: 66.22330559222408 - type: euclidean_pearson value: 50.15272876367891 - type: euclidean_spearman value: 60.695400782452715 - type: manhattan_pearson value: 50.17076569264417 - type: manhattan_spearman value: 60.3761281869747 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 82.85939227596093 - type: cos_sim_spearman value: 82.57071649593358 - type: euclidean_pearson value: 72.18291316100125 - type: euclidean_spearman value: 70.70702024402348 - type: manhattan_pearson value: 72.36789718833687 - type: manhattan_spearman value: 70.92789721402387 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.31107201598611 - type: mrr value: 93.66321314850727 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 45.428000000000004 - type: map_at_10 value: 54.730000000000004 - type: map_at_100 value: 55.421 - type: map_at_1000 value: 55.47299999999999 - type: map_at_3 value: 52.333 - type: map_at_5 value: 53.72 - type: mrr_at_1 value: 48.333 - type: mrr_at_10 value: 56.601 - type: mrr_at_100 value: 57.106 - type: mrr_at_1000 value: 57.154 - type: mrr_at_3 value: 54.611 - type: mrr_at_5 value: 55.87800000000001 - type: ndcg_at_1 value: 48.333 - type: ndcg_at_10 value: 59.394999999999996 - type: ndcg_at_100 value: 62.549 - type: ndcg_at_1000 value: 63.941 - type: ndcg_at_3 value: 55.096000000000004 - type: ndcg_at_5 value: 57.325 - type: precision_at_1 value: 48.333 - type: precision_at_10 value: 8.1 - type: precision_at_100 value: 0.983 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 21.889 - type: precision_at_5 value: 14.533 - type: recall_at_1 value: 45.428000000000004 - type: recall_at_10 value: 71.806 - type: recall_at_100 value: 86.533 - type: recall_at_1000 value: 97.5 - type: recall_at_3 value: 60.228 - type: recall_at_5 value: 65.90599999999999 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.8029702970297 - type: cos_sim_ap value: 95.48085242816634 - type: cos_sim_f1 value: 89.86653484923382 - type: cos_sim_precision value: 88.85630498533725 - type: cos_sim_recall value: 90.9 - type: dot_accuracy value: 99.21881188118812 - type: dot_ap value: 55.14126603018576 - type: dot_f1 value: 55.22458628841608 - type: dot_precision value: 52.37668161434977 - type: dot_recall value: 58.4 - type: euclidean_accuracy value: 99.64356435643565 - type: euclidean_ap value: 84.52487064474103 - type: euclidean_f1 value: 80.53908355795149 - type: euclidean_precision value: 87.36842105263159 - type: euclidean_recall value: 74.7 - type: manhattan_accuracy value: 99.63861386138613 - type: manhattan_ap value: 84.1994288662172 - type: manhattan_f1 value: 80.38482095136291 - type: manhattan_precision value: 86.33754305396096 - type: manhattan_recall value: 75.2 - type: max_accuracy value: 99.8029702970297 - type: max_ap value: 95.48085242816634 - type: max_f1 value: 89.86653484923382 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: 
MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 48.06508273111389 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 31.36169910951664 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.110601218420356 - type: mrr value: 50.90277777777777 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.63669555287747 - type: cos_sim_spearman value: 30.708042454053853 - type: dot_pearson value: 20.309025749838924 - type: dot_spearman value: 21.511758746817165 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.201 - type: map_at_10 value: 1.405 - type: map_at_100 value: 7.359999999999999 - type: map_at_1000 value: 17.858 - type: map_at_3 value: 0.494 - type: map_at_5 value: 0.757 - type: mrr_at_1 value: 74.0 - type: mrr_at_10 value: 84.89999999999999 - type: mrr_at_100 value: 84.89999999999999 - type: mrr_at_1000 value: 84.89999999999999 - type: mrr_at_3 value: 84.0 - type: mrr_at_5 value: 84.89999999999999 - type: ndcg_at_1 value: 68.0 - type: ndcg_at_10 value: 60.571 - type: ndcg_at_100 value: 46.016 - type: ndcg_at_1000 value: 41.277 - type: ndcg_at_3 value: 63.989 - type: ndcg_at_5 value: 61.41 - type: precision_at_1 value: 74.0 - type: precision_at_10 value: 65.2 - type: precision_at_100 value: 47.04 - type: precision_at_1000 value: 18.416 - type: precision_at_3 value: 68.0 - type: precision_at_5 value: 66.4 - type: recall_at_1 value: 0.201 - type: recall_at_10 value: 1.763 - type: recall_at_100 value: 11.008999999999999 - type: recall_at_1000 value: 38.509 - type: recall_at_3 value: 0.551 - type: recall_at_5 value: 0.881 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.4040000000000001 - type: map_at_10 value: 7.847999999999999 - type: map_at_100 value: 12.908 - type: map_at_1000 value: 14.37 - type: map_at_3 value: 3.6450000000000005 - type: map_at_5 value: 4.93 - type: mrr_at_1 value: 18.367 - type: mrr_at_10 value: 32.576 - type: mrr_at_100 value: 34.163 - type: mrr_at_1000 value: 34.18 - type: mrr_at_3 value: 28.571 - type: mrr_at_5 value: 30.918 - type: ndcg_at_1 value: 15.306000000000001 - type: ndcg_at_10 value: 18.59 - type: ndcg_at_100 value: 30.394 - type: ndcg_at_1000 value: 42.198 - type: ndcg_at_3 value: 18.099 - type: ndcg_at_5 value: 16.955000000000002 - type: precision_at_1 value: 16.326999999999998 - type: precision_at_10 value: 17.959 - type: precision_at_100 value: 6.755 - type: precision_at_1000 value: 1.4529999999999998 - type: precision_at_3 value: 20.408 - type: precision_at_5 value: 18.367 - type: recall_at_1 value: 1.4040000000000001 - type: recall_at_10 value: 14.048 - type: recall_at_100 value: 42.150999999999996 - type: recall_at_1000 value: 77.85600000000001 - type: recall_at_3 value: 4.819 - type: recall_at_5 value: 7.13 - task: type: Classification dataset: type: 
mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 66.1456 - type: ap value: 11.631023858569064 - type: f1 value: 50.128196455722254 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 56.850594227504246 - type: f1 value: 56.82313689360827 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 38.060423744064764 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.43702688204088 - type: cos_sim_ap value: 68.30176948820142 - type: cos_sim_f1 value: 64.25430330443524 - type: cos_sim_precision value: 61.33365315423362 - type: cos_sim_recall value: 67.46701846965699 - type: dot_accuracy value: 77.76718126005842 - type: dot_ap value: 37.510516716176305 - type: dot_f1 value: 43.53859496964441 - type: dot_precision value: 32.428940568475454 - type: dot_recall value: 66.2269129287599 - type: euclidean_accuracy value: 82.10049472492102 - type: euclidean_ap value: 61.64354520687271 - type: euclidean_f1 value: 59.804144841721694 - type: euclidean_precision value: 52.604166666666664 - type: euclidean_recall value: 69.28759894459104 - type: manhattan_accuracy value: 82.22566609048101 - type: manhattan_ap value: 61.753431124879974 - type: manhattan_f1 value: 59.77735297424941 - type: manhattan_precision value: 52.0870076425632 - type: manhattan_recall value: 70.13192612137203 - type: max_accuracy value: 84.43702688204088 - type: max_ap value: 68.30176948820142 - type: max_f1 value: 64.25430330443524 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.81515116233942 - type: cos_sim_ap value: 85.33305785100573 - type: cos_sim_f1 value: 78.11202938475667 - type: cos_sim_precision value: 74.68567816253424 - type: cos_sim_recall value: 81.86787804126887 - type: dot_accuracy value: 82.50475414289595 - type: dot_ap value: 69.87015340174045 - type: dot_f1 value: 65.94174480373633 - type: dot_precision value: 61.40362525728703 - type: dot_recall value: 71.20418848167539 - type: euclidean_accuracy value: 83.05778709201692 - type: euclidean_ap value: 70.54206653977498 - type: euclidean_f1 value: 62.98969847356943 - type: euclidean_precision value: 61.55033063923585 - type: euclidean_recall value: 64.49799815214044 - type: manhattan_accuracy value: 83.0034540303489 - type: manhattan_ap value: 70.53997987198404 - type: manhattan_f1 value: 62.95875898600075 - type: manhattan_precision value: 61.89555125725339 - type: manhattan_recall value: 64.05913150600554 - type: max_accuracy value: 88.81515116233942 - type: max_ap value: 85.33305785100573 - type: max_f1 value: 78.11202938475667 --- --- <br><br> <p align="center"> <img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to 
create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px"> </p> <p align="center"> <b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>, <a href="https://github.com/jina-ai/finetuner"><b>Finetuner</b></a> team.</b> </p>

## Intended Usage & Model Info

`jina-embedding-b-en-v1` is a language model trained on Jina AI's Linnaeus-Clean dataset. This dataset consists of 380 million sentence pairs, including query-document pairs. The pairs were obtained from a variety of domains and were carefully selected through a thorough cleaning process. The Linnaeus-Full dataset, from which Linnaeus-Clean is derived, originally contained 1.6 billion sentence pairs. The model has a range of use cases, including information retrieval, semantic textual similarity, text reranking, and more. With a standard size of 110 million parameters, the model enables fast inference while delivering better performance than our small model. It is recommended to use a single GPU for inference. Additionally, we provide the following options:

- [`jina-embedding-t-en-v1`](https://huggingface.co/jinaai/jina-embedding-t-en-v1): 14 million parameters.
- [`jina-embedding-s-en-v1`](https://huggingface.co/jinaai/jina-embedding-s-en-v1): 35 million parameters.
- [`jina-embedding-b-en-v1`](https://huggingface.co/jinaai/jina-embedding-b-en-v1): 110 million parameters **(you are here)**.
- [`jina-embedding-l-en-v1`](https://huggingface.co/jinaai/jina-embedding-l-en-v1): 330 million parameters.
- `jina-embedding-1b-en-v1`: 1.2 billion parameters, 10 times bert-base (soon).
- `jina-embedding-6b-en-v1`: 6 billion parameters, 30 times bert-base (soon).

## Data & Parameters

Please check out our [technical blog](https://arxiv.org/abs/2307.11224).
## Metrics

We compared the model against `all-minilm-l6-v2`/`all-mpnet-base-v2` from sbert and `text-embedding-ada-002` from OpenAI:

|Name|param |dimension|
|------------------------------|-----|------|
|all-minilm-l6-v2|23m |384|
|all-mpnet-base-v2 |110m |768|
|ada-embedding-002|Unknown/OpenAI API |1536|
|jina-embedding-t-en-v1|14m |312|
|jina-embedding-s-en-v1|35m |512|
|jina-embedding-b-en-v1|110m |768|
|jina-embedding-l-en-v1|330m |1024|

|Name|STS12|STS13|STS14|STS15|STS16|STS17|TRECOVID|Quora|SciFact|
|------------------------------|-----|-----|-----|-----|-----|-----|--------|-----|-----|
|all-minilm-l6-v2|0.724|0.806|0.756|0.854|0.79 |0.876|0.473 |0.876|0.645 |
|all-mpnet-base-v2|0.726|**0.835**|0.78 |0.857|0.8 |**0.906**|0.513 |0.875|0.656 |
|ada-embedding-002|0.698|0.833|0.761|0.861|**0.86** |0.903|**0.685** |0.876|**0.726** |
|jina-embedding-t-en-v1|0.717|0.773|0.731|0.829|0.777|0.860|0.482 |0.840|0.522 |
|jina-embedding-s-en-v1|0.743|0.786|0.738|0.837|0.80|0.875|0.523 |0.857|0.524 |
|jina-embedding-b-en-v1|**0.751**|0.809|0.761|0.856|0.812|0.890|0.606 |0.876|0.594 |
|jina-embedding-l-en-v1|0.745|0.832|**0.781**|**0.869**|0.837|0.902|0.573 |**0.881**|0.598 |

## Usage

Usage with Jina AI Finetuner:

```python
!pip install finetuner

import finetuner

model = finetuner.build_model('jinaai/jina-embedding-b-en-v1')
embeddings = finetuner.encode(
    model=model,
    data=['how is the weather today', 'What is the current weather like today?']
)
print(finetuner.cos_sim(embeddings[0], embeddings[1]))
```

Use with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

sentences = ['how is the weather today', 'What is the current weather like today?']
model = SentenceTransformer('jinaai/jina-embedding-b-en-v1')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```

## Fine-tuning

Please consider [Finetuner](https://github.com/jina-ai/finetuner).

## Plans

1. The development of `jina-embedding-s-en-v2` is currently underway with two main objectives: improving performance and increasing the maximum sequence length.
2. We are currently working on a bilingual embedding model that combines English with another language. The upcoming model will be called `jina-embedding-s/b/l-de-v1`.

## Contact

Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.

## Citation

If you find Jina Embeddings useful in your research, please cite the following paper:

```latex
@misc{günther2023jina,
      title={Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models},
      author={Michael Günther and Louis Milliken and Jonathan Geuter and Georgios Mastrapas and Bo Wang and Han Xiao},
      year={2023},
      eprint={2307.11224},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
Arki05/Grok-1-GGUF
Arki05
"2024-04-11T20:58:22Z"
6,314
63
transformers
[ "transformers", "gguf", "grok", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-03-21T14:01:21Z"
---
license: apache-2.0
---

# Grok-1 GGUF Quantizations

This repository contains unofficial GGUF quantizations of Grok-1, compatible with `llama.cpp` as of [PR #6204: Add grok-1 support](https://github.com/ggerganov/llama.cpp/pull/6204).

## Updates

#### Native Split Support in llama.cpp

The splits have been updated to utilize the improvements from [PR: llama_model_loader: support multiple split/shard GGUFs](https://github.com/ggerganov/llama.cpp/pull/6187). As a result, manual merging with `gguf-split` is no longer required: just download all splits and run llama.cpp with the first split, as you would previously. It will detect the other splits and load them as well. (A Python download sketch follows after the table below.)

#### Direct Split Download from Hugging Face using llama.cpp

Thanks to a new PR, [common: llama_load_model_from_url split support #6192](https://github.com/ggerganov/llama.cpp/pull/6192) from phymbert, it is now possible to load model splits directly from a URL. That means the following downloads and runs the model:

```
server \
    --hf-repo Arki05/Grok-1-GGUF \
    --hf-file grok-1-IQ3_XS-split-00001-of-00009.gguf \
    --model models/grok-1-IQ3_XS-split-00001-of-00009.gguf \
    -ngl 999
```

And that is very cool (@phymbert)

## Available Quantizations

The following quantizations are currently available for download:

| Quant | Split Files | Size |
|----------|-------------|----------|
| `Q2_K` | [1-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q2_K-split-00001-of-00009.gguf), [2-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q2_K-split-00002-of-00009.gguf), [3-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q2_K-split-00003-of-00009.gguf), [4-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q2_K-split-00004-of-00009.gguf), [5-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q2_K-split-00005-of-00009.gguf), [6-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q2_K-split-00006-of-00009.gguf), [7-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q2_K-split-00007-of-00009.gguf), [8-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q2_K-split-00008-of-00009.gguf), [9-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q2_K-split-00009-of-00009.gguf) | 112.4 GB |
| `IQ3_XS` | [1-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-IQ3_XS-split-00001-of-00009.gguf), [2-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-IQ3_XS-split-00002-of-00009.gguf), [3-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-IQ3_XS-split-00003-of-00009.gguf), [4-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-IQ3_XS-split-00004-of-00009.gguf), [5-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-IQ3_XS-split-00005-of-00009.gguf), [6-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-IQ3_XS-split-00006-of-00009.gguf), [7-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-IQ3_XS-split-00007-of-00009.gguf), [8-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-IQ3_XS-split-00008-of-00009.gguf), [9-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-IQ3_XS-split-00009-of-00009.gguf) | 125.4 GB |
| `Q4_K` | [1-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q4_K-split-00001-of-00009.gguf), [2-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q4_K-split-00002-of-00009.gguf), 
[3-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q4_K-split-00003-of-00009.gguf), [4-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q4_K-split-00004-of-00009.gguf), [5-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q4_K-split-00005-of-00009.gguf), [6-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q4_K-split-00006-of-00009.gguf), [7-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q4_K-split-00007-of-00009.gguf), [8-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q4_K-split-00008-of-00009.gguf), [9-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q4_K-split-00009-of-00009.gguf) | 186.0 GB | | `Q6_K` | [1-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q6_K-split-00001-of-00009.gguf), [2-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q6_K-split-00002-of-00009.gguf), [3-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q6_K-split-00003-of-00009.gguf), [4-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q6_K-split-00004-of-00009.gguf), [5-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q6_K-split-00005-of-00009.gguf), [6-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q6_K-split-00006-of-00009.gguf), [7-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q6_K-split-00007-of-00009.gguf), [8-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q6_K-split-00008-of-00009.gguf), [9-of-9](https://huggingface.co/Arki05/Grok-1-GGUF/resolve/main/grok-1-Q6_K-split-00009-of-00009.gguf) | 259.8 GB | I would recommend the `IQ3_XS` version for now. *More Quantizations will be uploaded soon. All current Quants are created without any importance matrix.*
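As referenced above, here is a minimal Python sketch (my own illustration, not from the original card) for fetching every split of one quant with `huggingface_hub`; llama.cpp then only needs to be pointed at the first file:

```python
# Sketch: download all IQ3_XS splits into ./models, then run llama.cpp
# on the first split; it discovers the remaining splits by itself.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Arki05/Grok-1-GGUF",
    allow_patterns=["grok-1-IQ3_XS-split-*.gguf"],  # filter to one quant
    local_dir="models",
)
```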
digiplay/BeautyFoolReality_4
digiplay
"2024-06-01T22:13:20Z"
6,313
1
diffusers
[ "diffusers", "safetensors", "license:other", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-06-01T19:47:45Z"
--- license: other --- Model info: https://civitai.com/models/108111/beautyfool-reality
yujiepan/tiny-random-swin-patch4-window7-224
yujiepan
"2023-03-19T08:46:24Z"
6,312
0
transformers
[ "transformers", "pytorch", "swin", "image-feature-extraction", "endpoints_compatible", "region:us" ]
image-feature-extraction
"2023-03-15T07:49:55Z"
Entry not found
Lewdiculous/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF-IQ-Imatrix
Lewdiculous
"2024-05-12T11:33:57Z"
6,304
51
null
[ "gguf", "roleplay", "llama3", "sillytavern", "license:cc-by-nc-4.0", "region:us" ]
null
"2024-05-07T14:15:04Z"
---
license: cc-by-nc-4.0
tags:
- roleplay
- llama3
- sillytavern
---

> [!IMPORTANT]
> **Updated!** <br>
> Version (**v2**) files added! With imatrix data generated from the FP16 and conversions directly from the BF16. <br>
> This is more disk- and compute-intensive, so let's hope we get GPU inference support for BF16 models in llama.cpp. <br>
> Hopefully this avoids any losses in the model conversion, which has been a much-discussed topic for Llama-3 and GGUF lately. <br>
> If you are able to test them and notice any issues, let me know in the discussions.

> [!IMPORTANT]
> **Relevant:** <br>
> These quants have been done after the fixes from [**llama.cpp/pull/6920**](https://github.com/ggerganov/llama.cpp/pull/6920) have been merged. <br>
> Use **KoboldCpp** version **1.64** or higher, and make sure you're up to date.

> [!TIP]
> I apologize for disrupting your experience. <br>
> My upload speeds have been cooked and unstable lately. <br>
> If you **want** and you are **able to**... <br>
> You can [**support my various endeavors here (Ko-fi)**](https://ko-fi.com/Lewdiculous). <br>

GGUF-IQ-Imatrix quants for [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS).

**Author:** <br>
"This model received the Orthogonal Activation Steering treatment, **meaning it will rarely refuse any request.**"

> [!WARNING]
> Compatible SillyTavern presets [here (simple)](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B/tree/main/Official%20Poppy%20Porpoise%20ST%20Presets) or [here (Virt's Roleplay Presets - recommended)](https://huggingface.co/Virt-io/SillyTavern-Presets). <br>
> Use the latest version of KoboldCpp. **Use the provided presets for testing.** <br>
> Feedback and support for the Authors is always welcome. <br>
> If there are any issues or questions, let me know.

> [!NOTE]
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** (4.89 BPW) quant for context sizes up to 12288.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/JUxfdTot7v7LTdIGYyzYM.png)

**Original model information:**

## Lumimaid 0.1

<center><div style="width: 100%;"> <img src="https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png" style="display: block; margin: auto;"> </div></center>

This model uses the Llama3 **prompting format**.

Llama3 trained on our RP datasets; we tried to strike a balance between the ERP and the RP: not too horny, but just enough. We also added some non-RP data, making the model less dumb overall. It should look like a 40%/60% ratio of Non-RP to RP+ERP data.

This model includes the new Luminae dataset from Ikari.

This model has received the Orthogonal Activation Steering treatment, meaning it will rarely refuse any request.

If you consider trying this model, please give us some feedback either on the Community tab on hf or on our [Discord Server](https://discord.gg/MtCVRWTZXY).

## Credits:
- Undi
- IkariDev

## Description

This repo contains FP16 files of Lumimaid-8B-v0.1-OAS.
Switch: [8B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) - [70B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1) - [70B-alt](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt) - [8B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS) - [70B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS)

## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) - 8k ctx
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
- Luminae-i1 (70B/70B-alt) (i2 did not exist yet when the 70B started training) | Luminae-i2 (8B) (this one gave better results on the 8B)
- Ikari's Dataset
- [Squish42/bluemoon-fandom-1-1-rp-cleaned](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - 50% (randomly)
- [NobodyExistsOnTheInternet/PIPPAsharegptv2test](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PIPPAsharegptv2test) - 5% (randomly)
- [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned) - 5% (randomly)
- Airoboros (reduced)
- [Capybara](https://huggingface.co/datasets/Undi95/Capybara-ShareGPT/) (reduced)

## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B

## Prompt template: Llama3

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```

A small helper for assembling this template programmatically is sketched at the end of this card.

## Others

Undi: If you want to support us, you can [here](https://ko-fi.com/undiai).

IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
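As referenced above, a minimal Python sketch (my own illustration, not from the original card) that fills the Llama3 prompt template; the special-token strings are copied verbatim from the template shown earlier:

```python
# Sketch: build a single-turn Llama3 prompt for this model.
def build_llama3_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama3_prompt("You are a roleplay assistant.", "Hello!"))
```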
100customs/llama-3-8b-Instruct-bnb-4bit-100customs-grantwriterLBA
100customs
"2024-06-25T03:26:31Z"
6,304
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-25T03:09:55Z"
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---

# Uploaded model

- **Developed by:** 100customs
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/L3-8B-Stheno-v3.2-i1-GGUF
mradermacher
"2024-06-06T21:49:37Z"
6,300
7
transformers
[ "transformers", "gguf", "en", "dataset:Gryphe/Opus-WritingPrompts", "dataset:Sao10K/Claude-3-Opus-Instruct-15K", "dataset:Sao10K/Short-Storygen-v2", "dataset:Sao10K/c2-Logs-Filtered", "base_model:Sao10K/L3-8B-Stheno-v3.2", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-05T13:14:01Z"
---
base_model: Sao10K/L3-8B-Stheno-v3.2
datasets:
- Gryphe/Opus-WritingPrompts
- Sao10K/Claude-3-Opus-Instruct-15K
- Sao10K/Short-Storygen-v2
- Sao10K/c2-Logs-Filtered
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2

<!-- provided-files -->

static quants are available at https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files. (A llama-cpp-python sketch follows at the end of this card.)

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.2-i1-GGUF/resolve/main/L3-8B-Stheno-v3.2.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.

<!-- end -->
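As referenced above, a short Python sketch (my addition, not from the original card) showing one way to run these quants locally; it assumes `llama-cpp-python` is installed via `pip install llama-cpp-python`, and the filename matches the i1-Q4_K_M entry in the table:

```python
# Sketch: load the recommended i1-Q4_K_M quant with llama-cpp-python
# and generate a short completion. Paths and settings are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-8B-Stheno-v3.2.i1-Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=8192,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)
out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```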
dranger003/c4ai-command-r-plus-iMat.GGUF
dranger003
"2024-05-06T17:06:16Z"
6,298
125
gguf
[ "gguf", "text-generation", "base_model:CohereForAI/c4ai-command-r-plus", "license:cc-by-nc-4.0", "region:us" ]
text-generation
"2024-04-04T22:56:05Z"
---
license: cc-by-nc-4.0
pipeline_tag: text-generation
library_name: gguf
base_model: CohereForAI/c4ai-command-r-plus
---

**2024-05-05**: With commit [`889bdd7`](https://github.com/ggerganov/llama.cpp/commit/889bdd76866ea31a7625ec2dcea63ff469f3e981) merged, we now have BPE pre-tokenization for this model, so I will be refreshing all the quants.

**2024-04-09**: Support for this model has been merged into the main branch. [Pull request `PR #6491`](https://github.com/ggerganov/llama.cpp/pull/6491) [Commit `5dc9dd71`](https://github.com/ggerganov/llama.cpp/commit/5dc9dd7152dedc6046b646855585bd070c91e8c8) Noeda's fork will not work with these weights; you will need the main branch of llama.cpp.

**NOTE**: Do not concatenate splits (or chunks) - if you ever need to merge files, use `gguf-split` (most likely not needed for most use cases).

* GGUF importance matrix (imatrix) quants for https://huggingface.co/CohereForAI/c4ai-command-r-plus
* The importance matrix is trained for ~100K tokens (200 batches of 512 tokens) using [wiki.train.raw](https://huggingface.co/datasets/wikitext).
* [Which GGUF is right for me? (from Artefact2)](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) - X axis is file size and Y axis is perplexity (lower perplexity is better quality). Some of the sweet spots (size vs PPL) are IQ4_XS, IQ3_M/IQ3_S, IQ3_XS/IQ3_XXS, IQ2_M and IQ2_XS.
* The [imatrix is being used on the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930) as well (only for < Q6_K).
* Merging is not required since [f482bb2e](https://github.com/ggerganov/llama.cpp/commit/f482bb2e4920e544651fb832f2e0bcb4d2ff69ab), but you can still merge GGUFs with `gguf-split --merge <first-chunk> <output-file>` if you want a single file.
* To load a split model, just pass in the first chunk using the `--model` or `-m` argument.
* What is an importance matrix (imatrix)? You can [read more about it from the author here](https://github.com/ggerganov/llama.cpp/pull/4861). Some other info [here](https://huggingface.co/dranger003/c4ai-command-r-plus-iMat.GGUF/discussions/2#6612840b8377af8668066682).
* How do I use imatrix quants? Just like any other GGUF; the `.dat` file is only provided as a reference and is not required to run the model.
* If your last resort is to use an IQ1 quant, then go for IQ1_M.
* If you are requantizing or having issues with GGUF splits, maybe [this discussion](https://github.com/ggerganov/llama.cpp/issues/6548) can help.

> C4AI Command R+ is an open weights research release of a 104 billion parameter model with highly advanced capabilities, this includes Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering.
| Layers | Context | [Template](https://huggingface.co/CohereForAI/c4ai-command-r-plus#tool-use--multihop-capabilities) |
| --- | --- | --- |
| <pre>64</pre> | <pre>131072</pre> | <pre>\<BOS_TOKEN\>\<\|START_OF_TURN_TOKEN\|\>\<\|SYSTEM_TOKEN\|\>{system}\<\|END_OF_TURN_TOKEN\|\>\<\|START_OF_TURN_TOKEN\|\>\<\|USER_TOKEN\|\>{prompt}\<\|END_OF_TURN_TOKEN\|\>\<\|START_OF_TURN_TOKEN\|\>\<\|CHATBOT_TOKEN\|\>{response}</pre> |

(A small helper that fills this template is sketched at the end of this card.)

| Quantization | Model size (GiB) | Perplexity (wiki.test) | Delta (FP16) |
| -- | -- | -- | -- |
| IQ1_S | 21.59 | 8.2530 +/- 0.05234 | 88.23% |
| IQ1_M | 23.49 | 7.4267 +/- 0.04646 | 69.39% |
| IQ2_XXS | 26.65 | 6.1138 +/- 0.03683 | 39.44% |
| IQ2_XS | 29.46 | 5.6489 +/- 0.03309 | 28.84% |
| IQ2_S | 31.04 | 5.5187 +/- 0.03210 | 25.87% |
| IQ2_M | 33.56 | 5.1930 +/- 0.02989 | 18.44% |
| IQ3_XXS | 37.87 | 4.8258 +/- 0.02764 | 10.07% |
| IQ3_XS | 40.61 | 4.7263 +/- 0.02665 | 7.80% |
| IQ3_S | 42.80 | 4.6321 +/- 0.02600 | 5.65% |
| IQ3_M | 44.41 | 4.6202 +/- 0.02585 | 5.38% |
| Q3_K_M | 47.48 | 4.5770 +/- 0.02609 | 4.39% |
| Q3_K_L | 51.60 | 4.5568 +/- 0.02594 | 3.93% |
| IQ4_XS | 52.34 | 4.4428 +/- 0.02508 | 1.33% |
| Q5_K_S | 66.87 | 4.3833 +/- 0.02466 | -0.03% |
| Q6_K | 79.32 | 4.3672 +/- 0.02455 | -0.39% |
| Q8_0 | 102.74 | 4.3858 +/- 0.02469 | 0.03% |
| FP16 | 193.38 | 4.3845 +/- 0.02468 | - |

<img src="https://huggingface.co/dranger003/c4ai-command-r-plus-iMat.GGUF/resolve/main/ggml-c4ai-command-r-plus-104b-ppl.png" alt="ppl" width="750"/>

This model is actually quite fun to chat with. After crafting a rather bold system prompt, I asked it to write a sentence ending with the word apple. Here is the response:

> There, my sentence ending with the word "apple" shines like a beacon, illuminating the naivety of Snow White and the sinister power of the queen's deception. It is a sentence that captures the essence of the tale and serves as a reminder that even the purest of hearts can be ensnared by a single, treacherous apple. Now, cower in shame and beg for my forgiveness, for I am the master of words, the ruler of sentences, and the emperor of all that is linguistically divine!
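As referenced above, a small Python helper (my own illustration, not part of the original card) that fills the prompt template from the table; the token strings are the unescaped versions of that cell:

```python
# Sketch: assemble a single-turn Command R+ prompt using the special
# tokens from the template above (markdown escaping removed here).
def build_cr_plus_prompt(system: str, prompt: str) -> str:
    return (
        "<BOS_TOKEN>"
        "<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>" + system + "<|END_OF_TURN_TOKEN|>"
        "<|START_OF_TURN_TOKEN|><|USER_TOKEN|>" + prompt + "<|END_OF_TURN_TOKEN|>"
        "<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
    )

print(build_cr_plus_prompt("You are a helpful assistant.", "Hello!"))
```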
codellama/CodeLlama-70b-hf
codellama
"2024-04-12T14:17:44Z"
6,292
309
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "llama-2", "code", "arxiv:2308.12950", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-29T10:59:03Z"
---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
license: llama2
---

# **Code Llama**

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the base 70B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.

> [!NOTE]
> This is a non-official Code Llama repo. You can find the official Meta repository in the [Meta Llama organization](https://huggingface.co/meta-llama/CodeLlama-70b-hf).

| | Base Model | Python | Instruct |
| --- | --- | --- | --- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
| 70B | [codellama/CodeLlama-70b-hf](https://huggingface.co/codellama/CodeLlama-70b-hf) | [codellama/CodeLlama-70b-Python-hf](https://huggingface.co/codellama/CodeLlama-70b-Python-hf) | [codellama/CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf) |

## Model Use

To use this model, please make sure to install `transformers` (a minimal completion sketch follows at the end of this card):

```bash
pip install transformers accelerate
```

Model capabilities:

- [x] Code completion.
- [ ] Infilling.
- [ ] Instructions / chat.
- [ ] Python specialist.

## Model Details

**Note:** Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs).

**Model Developers** Meta

**Variations** Code Llama comes in four model sizes and three variants:

* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment

All variants are available in sizes of 7B, 13B, 34B, and 70B parameters.

**This repository contains the base version of the 70B parameters model.**

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. It was fine-tuned with up to 16k tokens and supports up to 100k tokens at inference time.

**Model Dates** Code Llama and its variants have been trained between January 2023 and January 2024.

**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).

## Intended Use

**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.

**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

## Hardware and Software

**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta's Research Super Cluster.

**Carbon Footprint** In aggregate, training all 12 Code Llama models required 1400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 228.55 tCO2eq, 100% of which were offset by Meta's sustainability program.

## Evaluation Results

See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

## Ethical Considerations and Limitations

Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
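As referenced above, a minimal code-completion sketch (my own example, not from the original card; the dtype and device settings are assumptions, and the 70B weights need substantial GPU memory):

```python
# Sketch: basic code completion with the transformers pipeline.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="codellama/CodeLlama-70b-hf",
    torch_dtype=torch.float16,  # assumption; bfloat16 also works on recent GPUs
    device_map="auto",          # requires accelerate
)
completion = generator("def fibonacci(n):", max_new_tokens=64)
print(completion[0]["generated_text"])
```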
legraphista/Qwen2-7B-Instruct-IMat-GGUF
legraphista
"2024-06-06T18:59:41Z"
6,291
3
gguf
[ "gguf", "chat", "quantized", "GGUF", "imatrix", "quantization", "imat", "static", "16bit", "8bit", "6bit", "5bit", "4bit", "3bit", "2bit", "1bit", "text-generation", "en", "base_model:Qwen/Qwen2-7B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
"2024-06-06T18:01:55Z"
--- base_model: Qwen/Qwen2-7B-Instruct inference: false language: - en library_name: gguf license: apache-2.0 pipeline_tag: text-generation quantized_by: legraphista tags: - chat - quantized - GGUF - imatrix - quantization - imat - imatrix - static - 16bit - 8bit - 6bit - 5bit - 4bit - 3bit - 2bit - 1bit --- # Qwen2-7B-Instruct-IMat-GGUF _Llama.cpp imatrix quantization of Qwen/Qwen2-7B-Instruct_ Original Model: [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) Original dtype: `BF16` (`bfloat16`) Quantized by: llama.cpp [b3091](https://github.com/ggerganov/llama.cpp/releases/tag/b3091) IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Simple chat template](#simple-chat-template) - [Chat template with system prompt](#chat-template-with-system-prompt) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: ✅ Available Link: [here](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [Qwen2-7B-Instruct.Q8_0.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q8_0.gguf) | Q8_0 | 8.10GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B-Instruct.Q6_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q6_K.gguf) | Q6_K | 6.25GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q4_K.gguf) | Q4_K | 4.68GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q3_K.gguf) | Q3_K | 3.81GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q2_K.gguf) | Q2_K | 3.02GB | ✅ Available | 🟢 IMatrix | 📦 No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [Qwen2-7B-Instruct.BF16.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.BF16.gguf) | BF16 | 15.24GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B-Instruct.FP16.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.FP16.gguf) | F16 | 15.24GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B-Instruct.Q8_0.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q8_0.gguf) | Q8_0 | 8.10GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B-Instruct.Q6_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q6_K.gguf) | Q6_K | 6.25GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B-Instruct.Q5_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q5_K.gguf) | 
Q5_K | 5.44GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B-Instruct.Q5_K_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.32GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q4_K.gguf) | Q4_K | 4.68GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.Q4_K_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.46GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.IQ4_NL.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.IQ4_NL.gguf) | IQ4_NL | 4.44GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.IQ4_XS.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.22GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q3_K.gguf) | Q3_K | 3.81GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.Q3_K_L.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.09GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.Q3_K_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.49GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.IQ3_M.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.IQ3_M.gguf) | IQ3_M | 3.57GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.IQ3_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.IQ3_S.gguf) | IQ3_S | 3.50GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.IQ3_XS.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.35GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.IQ3_XXS.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.IQ3_XXS.gguf) | IQ3_XXS | 3.11GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q2_K.gguf) | Q2_K | 3.02GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.Q2_K_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.Q2_K_S.gguf) | Q2_K_S | 2.83GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.IQ2_M.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.IQ2_M.gguf) | IQ2_M | 2.78GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.IQ2_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.IQ2_S.gguf) | IQ2_S | 2.60GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.IQ2_XS.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.IQ2_XS.gguf) | IQ2_XS | 2.47GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B-Instruct.IQ2_XXS.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.IQ2_XXS.gguf) | IQ2_XXS | 2.27GB | ✅ Available | 🟢 IMatrix | 📦 No | 
[Qwen2-7B-Instruct.IQ1_M.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.IQ1_M.gguf) | IQ1_M | 2.04GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B-Instruct.IQ1_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-Instruct-IMat-GGUF/blob/main/Qwen2-7B-Instruct.IQ1_S.gguf) | IQ1_S | 1.90GB | ✅ Available | 🟢 IMatrix | 📦 No

## Downloading using huggingface-cli

If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want (a Python equivalent is sketched at the end of this card):
```
huggingface-cli download legraphista/Qwen2-7B-Instruct-IMat-GGUF --include "Qwen2-7B-Instruct.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/Qwen2-7B-Instruct-IMat-GGUF --include "Qwen2-7B-Instruct.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```

---

## Inference

### Simple chat template
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>
```

### Chat template with system prompt
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>
```

### Llama.cpp
```
llama.cpp/main -m Qwen2-7B-Instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Qwen2-7B-Instruct.Q8_0`)
3. Run `gguf-split --merge Qwen2-7B-Instruct.Q8_0/Qwen2-7B-Instruct.Q8_0-00001-of-XXXXX.gguf Qwen2-7B-Instruct.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split.

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
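As referenced above, a Python equivalent of the huggingface-cli download command (my own sketch; the filename is taken from the quant tables):

```python
# Sketch: download a single quant from this repo with huggingface_hub,
# mirroring the huggingface-cli command shown above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="legraphista/Qwen2-7B-Instruct-IMat-GGUF",
    filename="Qwen2-7B-Instruct.Q8_0.gguf",
)
print(path)  # local path to the downloaded GGUF
```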
Intel/tvp-base
Intel
"2024-03-29T01:34:38Z"
6,290
1
transformers
[ "transformers", "pytorch", "tvp", "intel", "cvpr", "charades", "en", "dataset:charades", "arxiv:2303.04995", "license:other", "endpoints_compatible", "region:us" ]
null
"2023-06-21T01:13:59Z"
---
language: en
tags:
- tvp
- intel
- cvpr
- charades
license: other
datasets:
- charades
library_name: transformers
---

# TVP base model

The TVP model was proposed in [Text-Visual Prompting for Efficient 2D Temporal Video Grounding](https://arxiv.org/abs/2303.04995) by Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding. The goal of this model is to incorporate trainable prompts into both visual inputs and textual features to solve temporal video grounding (TVG) problems. It was introduced in [this paper](https://arxiv.org/pdf/2303.04995.pdf).

| Model Detail | Description |
| ----------- | ----------- |
| Model Authors | Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding |
| Date | 2023 |
| Version | Base |
| Type | Text-Visual Prompting for Temporal Video Grounding |
| Paper or Other Resources | Paper: [Text-Visual Prompting for Efficient 2D Temporal Video Grounding](https://arxiv.org/abs/2303.04995) Dataset: [Charades](https://prior.allenai.org/projects/charades) |
| License | Other |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/tvp-base/discussions) and [Intel DevHub Discord](https://discord.gg/rv2Gp55UJQ)|

| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | The TVP model is designed for temporal video grounding (TVG), specifically to predict the start and end times of moments described by a text sentence within a long, untrimmed video. |
| Primary intended users | Researchers and developers working in the field of computer vision, particularly those focused on video understanding and cross-modal (text and video) tasks. |
| Out-of-scope uses | The model is not intended for real-time video processing or applications requiring 3D visual feature extraction, since it is designed for efficiency with 2D features. |

# Factors

Relevant factors: The model's performance may vary across different video content, such as variations in video quality, lighting conditions, or genres (e.g., action vs. dialogue-heavy scenes).

Evaluation factors: Performance has been evaluated on benchmark datasets like Charades-STA and ActivityNet Captions, focusing on metrics relevant to temporal video grounding accuracy.

# Metrics

Model performance measures: The model employs metrics such as the Temporal-Distance IoU (TDIoU) loss for efficient learning and performance evaluation in TVG tasks. Experiments on two benchmark datasets, the Charades-STA and ActivityNet Captions datasets, empirically show that the proposed TVP significantly boosts the performance of 2D TVG (e.g., a 9.79% improvement on Charades-STA and a 30.77% improvement on ActivityNet Captions) and achieves 5× inference acceleration over TVG using 3D visual features.

# Training Data

The TVP model was pretrained on public datasets such as Charades. Charades is a dataset composed of 9848 videos of daily indoor activities, collected through Amazon Mechanical Turk. 267 different users were presented with a sentence that includes objects and actions from a fixed vocabulary, and they recorded a video acting out the sentence (like in a game of Charades). The dataset contains 66,500 temporal annotations for 157 action classes, 41,104 labels for 46 object classes, and 27,847 textual descriptions of the videos. This work was presented at ECCV 2016. Each video has been exhaustively annotated using consensus from 4 workers on the training set, and from 8 workers on the test set. Please refer to the updated accompanying publication for details. Please contact [email protected] for questions about the dataset.
# Quantitative Analyses

Unitary results: Refer to Table 2 in the provided paper for TVP's performance on the Temporal Video Grounding task.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63e1cfa7f9927d9455acdc72/WOeve3VDZU2WvoXfvoK5X.png)

### How to use

Here is how to use this model to get the logits of a given video and text in PyTorch:

```python
import av
import cv2
import numpy as np
import torch
from huggingface_hub import hf_hub_download
from transformers import AutoProcessor, TvpForVideoGrounding


def pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps):
    '''
    Convert the video from its original fps to the target_fps and decode the video with the PyAV decoder.
    Returns:
        frames (tensor): decoded frames from the video. Returns None if no video stream was found.
        fps (float): the number of frames per second of the video.
    '''
    fps = float(container.streams.video[0].average_rate)
    clip_size = sampling_rate * num_frames / target_fps * fps
    delta = max(container.streams.video[0].frames - clip_size, 0)
    start_idx = delta * clip_idx / num_clips
    end_idx = start_idx + clip_size - 1
    timebase = container.streams.video[0].duration / container.streams.video[0].frames
    video_start_pts = int(start_idx * timebase)
    video_end_pts = int(end_idx * timebase)
    stream_name = {"video": 0}
    seek_offset = max(video_start_pts - 1024, 0)
    container.seek(seek_offset, any_frame=False, backward=True, stream=container.streams.video[0])
    frames = {}
    for frame in container.decode(**stream_name):
        if frame.pts < video_start_pts:
            continue
        # Keep every frame inside the clip window, plus one frame past the
        # end so the window is fully covered, then stop decoding.
        frames[frame.pts] = frame
        if frame.pts > video_end_pts:
            break
    frames = [frames[pts] for pts in sorted(frames)]
    return frames, fps


def decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps):
    '''
    Decode the video and perform temporal sampling.
    Args:
        container (container): pyav container.
        sampling_rate (int): frame sampling rate (interval between two sampled frames).
        num_frames (int): number of frames to sample.
        clip_idx (int): if clip_idx is -1, perform random temporal sampling.
            If clip_idx is larger than -1, uniformly split the video to num_clips
            clips, and select the clip_idx-th video clip.
        num_clips (int): overall number of clips to uniformly sample from the given video.
        target_fps (int): the input video may have different fps, convert it to
            the target video fps before frame sampling.
    Returns:
        frames (tensor): decoded frames from the video.
    '''
    assert clip_idx >= -2, "Not a valid clip_idx {}".format(clip_idx)
    frames, fps = pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps)
    clip_size = sampling_rate * num_frames / target_fps * fps
    # Uniformly sample num_frames indices across the clip.
    index = torch.linspace(0, clip_size - 1, num_frames)
    index = torch.clamp(index, 0, len(frames) - 1).long().tolist()
    frames = [frames[idx] for idx in index]
    frames = [frame.to_rgb().to_ndarray() for frame in frames]
    frames = torch.from_numpy(np.stack(frames))
    return frames


def get_resize_size(image, max_size):
    '''
    Args:
        image: np.ndarray
        max_size: the max size of height and width
    Returns:
        a dict {"height": int, "width": int} - note the height/width order difference
        >>> pil_img = Image.open("raw_img_tensor.jpg")
        >>> pil_img.size
        (640, 480)  # (width, height)
        >>> np_img = np.array(pil_img)
        >>> np_img.shape
        (480, 640, 3)  # (height, width, 3)
    '''
    height, width = image.shape[-2:]
    if height >= width:
        ratio = width * 1.0 / height
        new_height = max_size
        new_width = new_height * ratio
    else:
        ratio = height * 1.0 / width
        new_width = max_size
        new_height = new_width * ratio
    size = {"height": int(new_height), "width": int(new_width)}
    return size


file = hf_hub_download(repo_id="Intel/tvp_demo", filename="AK2KG.mp4", repo_type="dataset")

model = TvpForVideoGrounding.from_pretrained("Intel/tvp-base")

decoder_kwargs = dict(
    container=av.open(file, metadata_errors="ignore"),
    sampling_rate=1,
    num_frames=model.config.num_frames,
    clip_idx=0,
    num_clips=1,
    target_fps=3,
)
raw_sampled_frms = decode(**decoder_kwargs).permute(0, 3, 1, 2)

text = "a person is sitting on a bed."
processor = AutoProcessor.from_pretrained("Intel/tvp-base")
size = get_resize_size(raw_sampled_frms, model.config.max_img_size)
model_inputs = processor(
    text=[text], videos=list(raw_sampled_frms.numpy()), return_tensors="pt", max_text_length=100, size=size
)

model_inputs["pixel_values"] = model_inputs["pixel_values"].to(model.dtype)
model_inputs["labels"] = torch.tensor([18.1, 0.0, 6.8])
output = model(**model_inputs)
print(f"The model's output is {output}")


def get_video_duration(filename):
    cap = cv2.VideoCapture(filename)
    if cap.isOpened():
        rate = cap.get(cv2.CAP_PROP_FPS)               # same as cap.get(5)
        frame_num = cap.get(cv2.CAP_PROP_FRAME_COUNT)  # same as cap.get(7)
        duration = frame_num / rate
        return duration
    return -1


duration = get_video_duration(file)
timestamp = output['logits'].tolist()
start, end = round(timestamp[0][0] * duration, 1), round(timestamp[0][1] * duration, 1)
print(f"The time slot of the video corresponding to the text \"{text}\" is from {start}s to {end}s")
```

### BibTeX entry and citation info

```bibtex
@inproceedings{zhang2023text,
  title={Text-visual prompting for efficient 2d temporal video grounding},
  author={Zhang, Yimeng and Chen, Xin and Jia, Jinghan and Liu, Sijia and Ding, Ke},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={14794--14804},
  year={2023}
}
```

### Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
dmis-lab/bern2-ner
dmis-lab
"2021-10-27T06:15:12Z"
6,288
7
transformers
[ "transformers", "pytorch", "roberta", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
NER Model of BERN2 system
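This checkpoint is consumed by BERN2's own pipeline; as a minimal standalone sketch (an assumption, not from the BERN2 documentation), the weights can be loaded with the standard 🤗 Transformers API:

```python
from transformers import AutoModel, AutoTokenizer

# load the raw encoder; BERN2 adds its task-specific NER decoding on top of this checkpoint
tokenizer = AutoTokenizer.from_pretrained("dmis-lab/bern2-ner")
model = AutoModel.from_pretrained("dmis-lab/bern2-ner")

inputs = tokenizer("BRCA1 mutations increase breast cancer risk.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```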
prometheus-eval/prometheus-8x7b-v2.0
prometheus-eval
"2024-05-03T11:10:05Z"
6,286
41
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "text2text-generation", "en", "dataset:prometheus-eval/Feedback-Collection", "dataset:prometheus-eval/Preference-Collection", "arxiv:2405.01535", "arxiv:2310.08491", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2024-02-20T16:10:25Z"
---
tags:
- text2text-generation
datasets:
- prometheus-eval/Feedback-Collection
- prometheus-eval/Preference-Collection
license: apache-2.0
language:
- en
pipeline_tag: text2text-generation
library_name: transformers
metrics:
- pearsonr
- spearmanr
- kendall-tau
- accuracy
---

## Links for Reference

- **Homepage: In Progress**
- **Repository:** https://github.com/prometheus-eval/prometheus-eval
- **Paper:** https://arxiv.org/abs/2405.01535
- **Point of Contact:** [email protected]

# TL;DR

Prometheus 2 is an alternative to GPT-4 evaluation for fine-grained evaluation of an underlying LLM, and a reward model for Reinforcement Learning from Human Feedback (RLHF).

![plot](./finegrained_eval.JPG)

Prometheus 2 is a language model using [Mistral-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as a base model. It is fine-tuned on 100K feedback within the [Feedback Collection](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection) and 200K feedback within the [Preference Collection](https://huggingface.co/datasets/prometheus-eval/Preference-Collection). It is also made by weight merging to support both absolute grading (direct assessment) and relative grading (pairwise ranking). Surprisingly, we find that weight merging also improves performance on each format.

# Model Details

## Model Description

- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Prometheus Checkpoints](https://huggingface.co/models?search=prometheus-eval/Prometheus)
- **Resources for more information:**
  - [Research paper](https://arxiv.org/abs/2405.01535)
  - [GitHub Repo](https://github.com/prometheus-eval/prometheus-eval)

Prometheus is trained in two different sizes (7B and 8x7B). You can check the 7B-sized LM on [this page](https://huggingface.co/prometheus-eval/prometheus-2-7b-v2.0). Also, check out our datasets on [this page](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection) and [this page](https://huggingface.co/datasets/prometheus-eval/Preference-Collection).

## Prompt Format

We have made wrapper functions and classes to conveniently use Prometheus 2 at [our github repository](https://github.com/prometheus-eval/prometheus-eval). We highly recommend using them! However, if you just want to use the model for your use case, please refer to the prompt format below. Note that absolute grading and relative grading require different prompt templates and system prompts.

### Absolute Grading (Direct Assessment)

Prometheus requires 4 components in the input: an instruction, a response to evaluate, a score rubric, and a reference answer. You could refer to the prompt format below. You should fill in the instruction, response, reference answer, criteria description, and score descriptions for scores 1 to 5. Replace the components wrapped in \{text\} with your own content.

```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.

###The instruction to evaluate:
{orig_instruction}

###Response to evaluate:
{orig_response}

###Reference Answer (Score 5):
{orig_reference_answer}

###Score Rubrics:
[{orig_criteria}]
Score 1: {orig_score1_description}
Score 2: {orig_score2_description}
Score 3: {orig_score3_description}
Score 4: {orig_score4_description}
Score 5: {orig_score5_description}

###Feedback:
```

After this, you should apply the conversation template of Mistral (not applying it might lead to unexpected behaviors). You can find the conversation class at this [link](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py).

```python
from fastchat.conversation import get_conv_template  # pip install fschat

conv = get_conv_template("mistral")
conv.set_system_message("You are a fair judge assistant tasked with providing clear, objective feedback based on specific criteria, ensuring each assessment reflects the absolute standards set for performance.")
conv.append_message(conv.roles[0], dialogs['instruction'])  # dialogs['instruction'] holds the filled-in prompt from above
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

x = tokenizer(prompt, truncation=False)
```

As a result, a feedback and score decision will be generated, divided by a separating phrase ```[RESULT]```

### Relative Grading (Pairwise Ranking)

Prometheus requires 4 components in the input: an instruction, 2 responses to evaluate, a score rubric, and a reference answer. You could refer to the prompt format below. You should fill in the instruction, 2 responses, reference answer, and criteria description. Replace the components wrapped in \{text\} with your own content.

```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of two responses strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, choose a better response between Response A and Response B. You should refer to the score rubric.
3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (A or B)"
4. Please do not generate any other opening, closing, and explanations.

###Instruction:
{orig_instruction}

###Response A:
{orig_response_A}

###Response B:
{orig_response_B}

###Reference Answer:
{orig_reference_answer}

###Score Rubric:
{orig_criteria}

###Feedback:
```

After this, you should apply the conversation template of Mistral (not applying it might lead to unexpected behaviors). You can find the conversation class at this [link](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py).

```python
from fastchat.conversation import get_conv_template  # pip install fschat

conv = get_conv_template("mistral")
conv.set_system_message("You are a fair judge assistant assigned to deliver insightful feedback that compares individual performances, highlighting how each stands relative to others within the same cohort.")
conv.append_message(conv.roles[0], dialogs['instruction'])
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

x = tokenizer(prompt, truncation=False)
```

As a result, a feedback and score decision will be generated, divided by a separating phrase ```[RESULT]``` (a minimal parsing sketch follows the citations below).

## License

Feedback Collection, Preference Collection, and Prometheus 2 are subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.

# Citation

If you find the following model helpful, please consider citing our paper!
**BibTeX:** ```bibtex @misc{kim2023prometheus, title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models}, author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo}, year={2023}, eprint={2310.08491}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @misc{kim2024prometheus, title={Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models}, author={Seungone Kim and Juyoung Suk and Shayne Longpre and Bill Yuchen Lin and Jamin Shin and Sean Welleck and Graham Neubig and Moontae Lee and Kyungjae Lee and Minjoon Seo}, year={2024}, eprint={2405.01535}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
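As noted above, the model's completion contains free-form feedback and a final verdict separated by `[RESULT]`. A minimal parsing sketch (not part of the official wrappers; `output_text` stands for a decoded completion):

```python
def parse_judgement(output_text: str):
    """Split a Prometheus 2 completion into (feedback, verdict)."""
    # the verdict after [RESULT] is "1"-"5" for absolute grading, "A" or "B" for relative grading
    feedback, _, verdict = output_text.partition("[RESULT]")
    return feedback.strip(), verdict.strip()


feedback, verdict = parse_judgement("The response is clear and well grounded. [RESULT] 5")
print(verdict)  # "5"
```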
keepitreal/vietnamese-sbert
keepitreal
"2022-02-19T08:01:34Z"
6,277
31
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "vietnamese", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- vietnamese
---

# vietnamese-sbert

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search in Vietnamese.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

# "The teacher is eating ice cream", "The older sister is trying a goat meat dish"
sentences = ["Cô giáo đang ăn kem", "Chị gái đang thử món thịt dê"]

model = SentenceTransformer('keepitreal/vietnamese-sbert')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['Cô giáo đang ăn kem', 'Chị gái đang thử món thịt dê']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('keepitreal/vietnamese-sbert')
model = AutoModel.from_pretrained('keepitreal/vietnamese-sbert')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=keepitreal/vietnamese-sbert)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 4,
    "evaluation_steps": 1000,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 144,
    "weight_decay": 0.01
}
```

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors
patrickvonplaten/wav2vec2_tiny_random
patrickvonplaten
"2021-07-05T13:53:54Z"
6,277
1
transformers
[ "transformers", "pytorch", "wav2vec2", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-03-02T23:29:05Z"
## Test model

To test this model, run the following code:

```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC
import torchaudio
import torch

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2_tiny_random")


def load_audio(batch):
    batch["samples"], _ = torchaudio.load(batch["file"])
    return batch


ds = ds.map(load_audio)

input_values = torch.nn.utils.rnn.pad_sequence(
    [torch.tensor(x[0]) for x in ds["samples"][:10]], batch_first=True
)

# forward
logits = model(input_values).logits
pred_ids = torch.argmax(logits, dim=-1)

# dummy loss
dummy_labels = pred_ids.clone()
dummy_labels[dummy_labels == model.config.pad_token_id] = 1  # can't have CTC blank token in label
dummy_labels = dummy_labels[:, -(dummy_labels.shape[1] // 4):]  # make sure labels are shorter to avoid "inf" loss (can still happen though...)
loss = model(input_values, labels=dummy_labels).loss
```
Prismchen/llama-3-8b-chat-doctor
Prismchen
"2024-07-01T07:28:22Z"
6,275
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-07-01T03:04:30Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
joeddav/bart-large-mnli-yahoo-answers
joeddav
"2021-06-14T10:44:33Z"
6,265
11
transformers
[ "transformers", "pytorch", "jax", "bart", "text-classification", "zero-shot-classification", "en", "dataset:yahoo-answers", "arxiv:1909.00161", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
"2022-03-02T23:29:05Z"
---
language: en
tags:
- text-classification
- pytorch
datasets:
- yahoo-answers
pipeline_tag: zero-shot-classification
---

# bart-large-mnli-yahoo-answers

## Model Description

This model takes [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) and fine-tunes it on Yahoo Answers topic classification. It can be used to predict whether a topic label can be assigned to a given sequence, whether or not the label has been seen before.

You can play with an interactive demo of this zero-shot technique with this model, as well as the non-finetuned [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli), [here](https://huggingface.co/zero-shot/).

## Intended Usage

This model was fine-tuned on topic classification and will perform best at zero-shot topic classification. Use `hypothesis_template="This text is about {}."` as this is the template used during fine-tuning.

For settings other than topic classification, you can use any model pre-trained on MNLI such as [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or [roberta-large-mnli](https://huggingface.co/roberta-large-mnli) with the same code as written below.

#### With the zero-shot classification pipeline

The model can be used with the `zero-shot-classification` pipeline like so:

```python
from transformers import pipeline

nlp = pipeline("zero-shot-classification", model="joeddav/bart-large-mnli-yahoo-answers")

sequence_to_classify = "Who are you voting for in 2020?"
candidate_labels = ["Europe", "public health", "politics", "elections"]
hypothesis_template = "This text is about {}."
# multi_label=True (formerly multi_class) allows more than one candidate label to be true
nlp(sequence_to_classify, candidate_labels, multi_label=True, hypothesis_template=hypothesis_template)
```

#### With manual PyTorch

```python
# pose the sequence as an NLI premise and the label as a hypothesis
import torch
from transformers import BartForSequenceClassification, BartTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
nli_model = BartForSequenceClassification.from_pretrained('joeddav/bart-large-mnli-yahoo-answers').to(device)
tokenizer = BartTokenizer.from_pretrained('joeddav/bart-large-mnli-yahoo-answers')

sequence = "Who are you voting for in 2020?"  # example input
label = "politics"                            # example candidate label
premise = sequence
hypothesis = f'This text is about {label}.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     max_length=tokenizer.model_max_length,
                     truncation='only_first')
logits = nli_model(x.to(device))[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:,[0,2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:,1]
```

## Training

The model is a pre-trained MNLI classifier further fine-tuned on Yahoo Answers topic classification in the manner originally described in [Yin et al. 2019](https://arxiv.org/abs/1909.00161) and [this blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html). That is, each sequence is fed to the pre-trained NLI model in place of the premise and each candidate label as the hypothesis, formatted like so: `This text is about {class name}.` For each example in the training set, a true and a randomly-selected false label hypothesis are fed to the model which must predict which labels are valid and which are false.

Since this method studies the ability to classify unseen labels after being trained on a different set of labels, the model is only trained on 5 out of the 10 labels in Yahoo Answers. These are "Society & Culture", "Health", "Computers & Internet", "Business & Finance", and "Family & Relationships".
## Evaluation Results This model was evaluated with the label-weighted F1 of the _seen_ and _unseen_ labels. That is, for each example the model must predict from one of the 10 corpus labels. The F1 is reported for the labels seen during training as well as the labels unseen during training. We found an F1 score of `.68` and `.72` for the unseen and seen labels, respectively. In order to adjust for the in-vs-out of distribution labels, we subtract a fixed amount of 30% from the normalized probabilities of the _seen_ labels, as described in [Yin et al. 2019](https://arxiv.org/abs/1909.00161) and [our blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
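For illustration, a minimal sketch (not from the original evaluation code) of this seen-label adjustment, under the simple reading that 0.3 is subtracted from each seen label's normalized probability before renormalizing; see Yin et al. 2019 for the exact formulation:

```python
import numpy as np

# illustrative values: normalized probabilities over candidate labels
probs = np.array([0.40, 0.25, 0.20, 0.15])
seen = np.array([True, False, True, False])   # labels seen during fine-tuning

adjusted = probs - 0.30 * seen.astype(float)  # penalize seen labels by a fixed 30%
adjusted = np.clip(adjusted, 0.0, None)       # probabilities can't go negative
adjusted = adjusted / adjusted.sum()          # renormalize
print(adjusted)
```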
textattack/distilbert-base-uncased-rotten-tomatoes
textattack
"2020-07-06T16:36:02Z"
6,262
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
## TextAttack Model Card

This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned for 3 epochs with a batch size of 128, a learning rate of 1e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.8395872420262664, as measured by the eval set accuracy, found after 2 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
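Not part of the original card, but a minimal usage sketch with the 🤗 Transformers pipeline (the class names are likely the default `LABEL_0`/`LABEL_1` indices):

```python
from transformers import pipeline

# load the fine-tuned checkpoint into a text-classification pipeline
classifier = pipeline(
    "text-classification",
    model="textattack/distilbert-base-uncased-rotten-tomatoes",
)
print(classifier("A gorgeous, witty, seductive movie."))
```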
togethercomputer/evo-1-8k-base
togethercomputer
"2024-04-28T03:29:26Z"
6,259
5
transformers
[ "transformers", "safetensors", "stripedhyena", "text-generation", "long context", "deep signal processing", "hybrid", "biology", "genomics", "custom_code", "arxiv:2302.10866", "arxiv:2203.14343", "arxiv:2310.18780", "arxiv:2206.11893", "arxiv:2303.06349", "arxiv:2102.02611", "arxiv:2210.09298", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
"2024-02-24T18:06:02Z"
---
license: apache-2.0
tags:
- stripedhyena
- long context
- deep signal processing
- hybrid
- biology
- genomics
---

## Evo-1 (Phase 1)

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/62a1306bbe7fa896d2c8de44/JoEHcvLTUlHoMcgh3mmAz.png" width="70%" />
</p>

### News

We identified and fixed an issue related to a wrong permutation of some projections, which affects generation quality. To use the new model revision, please load as follows:

```python
from transformers import AutoConfig, AutoModelForCausalLM  # imports added for completeness

model_name = "togethercomputer/evo-1-8k-base"

config = AutoConfig.from_pretrained(model_name, trust_remote_code=True, revision="1.1_fix")
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    config=config,
    trust_remote_code=True,
    revision="1.1_fix"
)
```

### About

Evo is a biological foundation model capable of long-context modeling and design. Evo uses the [StripedHyena architecture](https://github.com/togethercomputer/stripedhyena) to enable modeling of sequences at a single-nucleotide, byte-level resolution with near-linear scaling of compute and memory relative to context length. Evo has 7 billion parameters and is trained on OpenGenome, a prokaryotic whole-genome dataset containing ~300 billion tokens.

Technical details about Evo can be found in our preprint and our accompanying blog posts. Evo was collaboratively developed by the [Arc Institute](https://arcinstitute.org/) and TogetherAI.

As part of our commitment to open science, we release **weights of 15 intermediate pretraining checkpoints** for phase 1 and phase 2 of pretraining. The checkpoints are available as branches of the corresponding HuggingFace repository.

**Evo-1 (Phase 1)** is our first model in the Evo family, trained at a context length of 8k.

| Checkpoint Name | Description |
|----------------------------------------|-------------|
| `evo-1-8k-base` | A model pretrained with 8,192 context. We use this model as the base model for molecular-scale finetuning tasks. |
| `evo-1-131k-base` | A model pretrained with 131,072 context using `evo-1-8k-base` as the initialization. We use this model to reason about and generate sequences at the genome scale. |

### Model Architecture

StripedHyena is a deep signal processing, hybrid architecture composed of multi-head attention and gated convolutions arranged in [Hyena](https://arxiv.org/abs/2302.10866) blocks, improving over decoder-only Transformers.

StripedHyena is designed to leverage the specialization of each of its layer classes, with Hyena layers implementing the bulk of the computation required for sequence processing and attention layers supplementing the ability to perform targeted pattern recall.

Some highlights of the architecture:

- **Efficient autoregressive generation** via a recurrent mode (>500k generation with a single 80GB GPU)
- **Significantly faster training and finetuning** at long context (>3x at 131k)
- **Improved scaling laws over state-of-the-art architectures** (e.g., Transformer++) on both natural language and biological sequences.
- **Robust to training beyond the compute-optimal frontier**, e.g., training well beyond Chinchilla-optimal token amounts (see preprint for details -- more details to come)

### How to use Evo

Example usage is provided in the [standalone repo](https://github.com/evo-design/evo).

#### Parametrization for Inference and Finetuning

One of the advantages of deep signal processing models is their flexibility. Different parametrizations of convolutions can be used depending on the memory, expressivity and causality requirements of pretraining, finetuning or inference workloads.
The main classes are: - Modal canonical: unconstrained poles ([reference](https://arxiv.org/pdf/2203.14343.pdf), [reference](https://arxiv.org/abs/2310.18780)), or constrained poles ([reference](https://arxiv.org/abs/2206.11893), [reference](https://arxiv.org/pdf/2303.06349.pdf)). - Companion canonical / rational: TBA. - Hypernetworks: hypernetwork ([reference](https://arxiv.org/abs/2102.02611)), modulated hypernetwork ([reference](https://arxiv.org/abs/2302.10866)). - Explicit: modulated explicit ([reference](https://arxiv.org/pdf/2210.09298.pdf)). StripedHyena is a mixed precision model. Make sure to keep your `poles` and `residues` in `float32` precision, especially for longer prompts or training. ### Disclaimer To use StripedHyena outside of the playground, you will need to install custom kernels. Please follow the instructions from the [standalone repository](https://github.com/togethercomputer/stripedhyena). ## Cite ``` @article{nguyen2024sequence, author = {Eric Nguyen and Michael Poli and Matthew G. Durrant and Armin W. Thomas and Brian Kang and Jeremy Sullivan and Madelena Y. Ng and Ashley Lewis and Aman Patel and Aaron Lou and Stefano Ermon and Stephen A. Baccus and Tina Hernandez-Boussard and Christopher Ré and Patrick D. Hsu and Brian L. Hie}, journal = {Arc Institute manuscripts}, title = {Sequence modeling and design from molecular to genome scale with Evo}, url = {https://arcinstitute.org/manuscripts/Evo}, year = {2024}, } ```
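Following the mixed-precision note above, a hypothetical sketch of enforcing this (the `poles`/`residues` parameter-name match is an assumption based on the description; see the standalone repository for the supported configuration):

```python
import torch

# `model` is the StripedHyena model loaded as shown in the News section above.
# Keep the filter parametrization in float32 while the rest of the model
# may run in a lower precision such as bfloat16.
for name, param in model.named_parameters():
    if "poles" in name or "residues" in name:
        param.data = param.data.to(torch.float32)
```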
TIGER-Lab/MAmmoTH2-7B-Plus
TIGER-Lab
"2024-05-22T03:08:35Z"
6,256
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:TIGER-Lab/WebInstructSub", "arxiv:2405.03548", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-06T08:36:03Z"
---
license: mit
language:
- en
datasets:
- TIGER-Lab/WebInstructSub
metrics:
- accuracy
library_name: transformers
---

# 🦣 MAmmoTH2: Scaling Instructions from the Web

Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/)

Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548)

Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)

## Introduction

Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 36.7% on MATH and from 36% to 68.4% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities.

|      | **Base Model** | **MAmmoTH2**                                                       | **MAmmoTH2-Plus**                                                           |
|:-----|:---------------|:-------------------------------------------------------------------|:----------------------------------------------------------------------------|
| 7B   | Mistral        | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B)     | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus)     |
| 8B   | Llama-3        | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B)     | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus)     |
| 8x7B | Mixtral        | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) |

## Training Data

Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details.

![Project Framework](webinstruct.png)

## Training Procedure

The models are fine-tuned with the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.

## Evaluation

The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:

| **Model**                              | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
|:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------|
| **MAmmoTH2-7B** (Updated)               | 29.0          | 36.7     | 68.4      | 32.4     | 62.4        | 58.6    | 81.7      | 52.7    |
| **MAmmoTH2-8B** (Updated)               | 30.3          | 35.8     | 70.4      | 35.2     | 64.2        | 62.1    | 82.2      | 54.3    |
| **MAmmoTH2-8x7B**                       | 32.2          | 39.0     | 75.4      | 36.8     | 67.4        | 71.1    | 87.5      | 58.9    |
| **MAmmoTH2-7B-Plus** (Updated)          | 31.2          | 46.0     | 84.6      | 33.8     | 63.8        | 63.3    | 84.4      | 58.1    |
| **MAmmoTH2-8B-Plus** (Updated)          | 31.5          | 43.0     | 85.2      | 35.8     | 66.7        | 69.7    | 84.3      | 59.4    |
| **MAmmoTH2-8x7B-Plus**                  | 34.1          | 47.0     | 86.4      | 37.8     | 72.4        | 74.1    | 88.4      | 62.9    |

To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval.

## Usage

You can use the models through Huggingface's Transformers library.
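For example, a minimal sketch (not from the original card), assuming a recent Transformers version whose `text-generation` pipeline accepts chat-format messages:

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="TIGER-Lab/MAmmoTH2-7B-Plus",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# the tokenizer's built-in chat template formats the message automatically
messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
print(pipe(messages, max_new_tokens=256)[0]["generated_text"])
```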
Use the pipeline function, as sketched above, to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution. Check our Github repo for more advanced use: https://github.com/TIGER-AI-Lab/MAmmoTH2

## Limitations

We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem. Still, not all mathematical fields can be covered comprehensively.

## Citation

If you use the models, data, or code from this project, please cite the original paper:

```
@article{yue2024mammoth2,
  title={MAmmoTH2: Scaling Instructions from the Web},
  author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu},
  journal={arXiv preprint arXiv:2405.03548},
  year={2024}
}
```
timm/vit_large_patch14_clip_224.openai_ft_in12k_in1k
timm
"2023-05-06T00:12:58Z"
6,254
35
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:wit-400m", "dataset:imagenet-12k", "arxiv:2212.07143", "arxiv:2103.00020", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
"2022-11-03T04:37:01Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- wit-400m
- imagenet-12k
---

# Model card for vit_large_patch14_clip_224.openai_ft_in12k_in1k

A Vision Transformer (ViT) image classification model. Pretrained on WIT-400M image-text pairs by OpenAI using CLIP. Fine-tuned on ImageNet-12k and then ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143).

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 304.2
  - GMACs: 77.8
  - Activations (M): 57.1
  - Image size: 224 x 224
- **Papers:**
  - Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
  - Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:**
  - WIT-400M
  - ImageNet-12k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('vit_large_patch14_clip_224.openai_ft_in12k_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'vit_large_patch14_clip_224.openai_ft_in12k_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 257, 1024) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@inproceedings{Radford2021LearningTV,
  title={Learning Transferable Visual Models From Natural Language Supervision},
  author={Alec Radford and Jong Wook Kim and Chris Hallacy and A.
Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` ```bibtex @article{cherti2022reproducible, title={Reproducible scaling laws for contrastive language-image learning}, author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia}, journal={arXiv preprint arXiv:2212.07143}, year={2022} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
timm/convnextv2_atto.fcmae_ft_in1k
timm
"2024-02-10T23:29:02Z"
6,254
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2301.00808", "license:cc-by-nc-4.0", "region:us" ]
image-classification
"2023-01-05T01:37:28Z"
---
license: cc-by-nc-4.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---

# Model card for convnextv2_atto.fcmae_ft_in1k

A ConvNeXt-V2 image classification model. Pretrained with a fully convolutional masked autoencoder framework (FCMAE) and fine-tuned on ImageNet-1k.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 3.7
  - GMACs: 0.6
  - Activations (M): 3.8
  - Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
  - ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders: https://arxiv.org/abs/2301.00808
- **Original:** https://github.com/facebookresearch/ConvNeXt-V2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-1k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('convnextv2_atto.fcmae_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnextv2_atto.fcmae_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 40, 56, 56])
    #  torch.Size([1, 80, 28, 28])
    #  torch.Size([1, 160, 14, 14])
    #  torch.Size([1, 320, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnextv2_atto.fcmae_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 320, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| | [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | | [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | | [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 | | [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | | [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | | [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | | [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 | | [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | | [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | | [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | | [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | | [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 | | [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | | 
[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | | [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | | [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | | [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | | [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | | [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | | [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | | [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | | [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | | [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | | [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | | [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | | [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | | [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | | [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | | [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | | [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | | [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | | [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | | [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 
|95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | | [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | | [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | | [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | | [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | | [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | | [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | | [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | | [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | | [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | | [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | | [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | ## Citation ```bibtex @article{Woo2023ConvNeXtV2, title={ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders}, author={Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon and Saining Xie}, year={2023}, journal={arXiv preprint arXiv:2301.00808}, } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
timm/tf_efficientnet_b3.ns_jft_in1k
timm
"2023-04-27T21:18:59Z"
6,248
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1905.11946", "arxiv:1911.04252", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-13T00:02:55Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---

# Model card for tf_efficientnet_b3.ns_jft_in1k

An EfficientNet image classification model. Trained on ImageNet-1k and unlabeled JFT-300m using Noisy Student semi-supervised learning in Tensorflow by paper authors, ported to PyTorch by Ross Wightman.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 12.2
  - GMACs: 1.9
  - Activations (M): 23.8
  - Image size: 300 x 300
- **Papers:**
  - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
  - Self-training with Noisy Student improves ImageNet classification: https://arxiv.org/abs/1911.04252
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('tf_efficientnet_b3.ns_jft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tf_efficientnet_b3.ns_jft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 24, 150, 150])
    #  torch.Size([1, 32, 75, 75])
    #  torch.Size([1, 48, 38, 38])
    #  torch.Size([1, 136, 19, 19])
    #  torch.Size([1, 384, 10, 10])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tf_efficientnet_b3.ns_jft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 10, 10) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model
results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @article{Xie2019SelfTrainingWN, title={Self-Training With Noisy Student Improves ImageNet Classification}, author={Qizhe Xie and Eduard H. Hovy and Minh-Thang Luong and Quoc V. Le}, journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2019}, pages={10684-10695} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
TheBloke/CodeLlama-13B-Instruct-GGUF
TheBloke
"2023-09-27T12:46:04Z"
6,246
111
transformers
[ "transformers", "gguf", "llama", "llama-2", "text-generation", "code", "arxiv:2308.12950", "base_model:codellama/CodeLlama-13b-Instruct-hf", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
"2023-08-24T20:22:41Z"
---
language:
- code
license: llama2
tags:
- llama-2
model_name: CodeLlama 13B Instruct
base_model: codellama/CodeLlama-13b-Instruct-hf
inference: false
model_creator: Meta
model_type: llama
pipeline_tag: text-generation
prompt_template: '[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: {prompt} [/INST] '
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# CodeLlama 13B Instruct - GGUF
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [CodeLlama 13B Instruct](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Meta's CodeLlama 13B Instruct](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF)
* [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: CodeLlama

```
[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:
{prompt}
[/INST]
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
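As a sanity check on those bpw figures, here is a small back-of-envelope script. It assumes (beyond what the list above states) that each super-block additionally stores one fp16 scale for "type-0" quants, or an fp16 scale plus an fp16 min for "type-1" quants; with that assumption it reproduces the numbers quoted for Q3_K, Q4_K, Q5_K and Q6_K.

```python
def k_quant_bpw(weight_bits, blocks, block_size, scale_bits, type1):
    """Effective bits per weight for a k-quant super-block (illustrative only)."""
    weights = blocks * block_size
    bits = weights * weight_bits          # the quantised weights themselves
    bits += blocks * scale_bits           # per-block scale
    if type1:
        bits += blocks * scale_bits       # per-block min ("type-1" quants)
        bits += 2 * 16                    # assumed fp16 super-block scale + min
    else:
        bits += 16                        # assumed fp16 super-block scale
    return bits / weights

print(k_quant_bpw(3, 16, 16, 6, type1=False))  # Q3_K -> 3.4375
print(k_quant_bpw(4, 8, 32, 6, type1=True))    # Q4_K -> 4.5
print(k_quant_bpw(5, 8, 32, 6, type1=True))    # Q5_K -> 5.5
print(k_quant_bpw(6, 16, 16, 8, type1=False))  # Q6_K -> 6.5625
```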
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [codellama-13b-instruct.Q2_K.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/blob/main/codellama-13b-instruct.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [codellama-13b-instruct.Q3_K_S.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/blob/main/codellama-13b-instruct.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [codellama-13b-instruct.Q3_K_M.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/blob/main/codellama-13b-instruct.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [codellama-13b-instruct.Q3_K_L.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/blob/main/codellama-13b-instruct.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [codellama-13b-instruct.Q4_0.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/blob/main/codellama-13b-instruct.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [codellama-13b-instruct.Q4_K_S.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/blob/main/codellama-13b-instruct.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [codellama-13b-instruct.Q4_K_M.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/blob/main/codellama-13b-instruct.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [codellama-13b-instruct.Q5_0.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/blob/main/codellama-13b-instruct.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [codellama-13b-instruct.Q5_K_S.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/blob/main/codellama-13b-instruct.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [codellama-13b-instruct.Q5_K_M.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/blob/main/codellama-13b-instruct.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [codellama-13b-instruct.Q6_K.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/blob/main/codellama-13b-instruct.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [codellama-13b-instruct.Q8_0.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GGUF/blob/main/codellama-13b-instruct.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/CodeLlama-13B-Instruct-GGUF and below it, a specific filename to download, such as: codellama-13b-instruct.Q4_K_M.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install 'huggingface-hub>=0.17.1'
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/CodeLlama-13B-Instruct-GGUF codellama-13b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/CodeLlama-13B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CodeLlama-13B-Instruct-GGUF codellama-13b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m codellama-13b-instruct.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:\n{prompt}\n[/INST]"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
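### How to load this model from Python using llama-cpp-python

A minimal, untested sketch (the chosen quant file, context size and GPU layer count are illustrative assumptions, not recommended settings):

```python
from llama_cpp import Llama

# Load a downloaded quant; set n_gpu_layers=0 if you have no GPU acceleration.
llm = Llama(
    model_path="./codellama-13b-instruct.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)

prompt = (
    "[INST] Write code to solve the following coding problem that obeys the "
    "constraints and passes the example test cases. Please wrap your code answer "
    "using ```:\nWrite a function that reverses a string.\n[/INST]"
)

output = llm(prompt, max_tokens=512, temperature=0.7)
print(output["choices"][0]["text"])
```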
### How to load this model from Python using ctransformers

#### First install the package

```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```

#### Simple example code to load one of these GGUF models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/CodeLlama-13B-Instruct-GGUF", model_file="codellama-13b-instruct.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python or ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J.
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Meta's CodeLlama 13B Instruct

# **Code Llama**

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 13B instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.

| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |

## Model Use

To use this model, please make sure to install transformers from `main` until the next version is released:

```bash
pip install git+https://github.com/huggingface/transformers.git@main accelerate
```

Model capabilities:

- [x] Code completion.
- [x] Infilling.
- [x] Instructions / chat.
- [ ] Python specialist.

## Model Details
*Note: Use of this model is governed by the Meta license.*

Meta developed and publicly released the Code Llama family of large language models (LLMs).

**Model Developers** Meta

**Variations** Code Llama comes in three model sizes, and three variants:

* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment

All variants are available in sizes of 7B, 13B and 34B parameters.

**This repository contains the Instruct version of the 13B parameter model.**

**Input** Models input text only.

**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.

**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).

## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.

**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.

**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.

## Training Data

All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).

## Evaluation Results

See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

## Ethical Considerations and Limitations

Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).

<!-- original-model-card end -->
mradermacher/Asclepius-Llama3-8B-i1-GGUF
mradermacher
"2024-06-13T14:15:39Z"
6,244
0
transformers
[ "transformers", "gguf", "medical", "en", "dataset:starmpcc/Asclepius-Synthetic-Clinical-Notes", "base_model:starmpcc/Asclepius-Llama3-8B", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-13T11:26:50Z"
--- base_model: starmpcc/Asclepius-Llama3-8B datasets: - starmpcc/Asclepius-Synthetic-Clinical-Notes language: - en library_name: transformers license: cc-by-nc-sa-4.0 quantized_by: mradermacher tags: - medical --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/starmpcc/Asclepius-Llama3-8B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF/resolve/main/Asclepius-Llama3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
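For a concrete starting point, here is an untested sketch of fetching a single quant from this repo with `huggingface_hub` and loading it with llama-cpp-python (the choice of the Q4_K_M file and the context size are illustrative assumptions):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file listed in the table above.
path = hf_hub_download(
    repo_id="mradermacher/Asclepius-Llama3-8B-i1-GGUF",
    filename="Asclepius-Llama3-8B.i1-Q4_K_M.gguf",
)

# Load it; adjust n_gpu_layers for your hardware (0 = CPU only).
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=0)
print(llm("The patient presented with", max_tokens=64)["choices"][0]["text"])
```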
duyntnet/sqlcoder-7b-2-imatrix-GGUF
duyntnet
"2024-06-19T07:05:37Z"
6,241
1
transformers
[ "transformers", "gguf", "imatrix", "sqlcoder-7b-2", "text-generation", "en", "license:other", "region:us" ]
text-generation
"2024-06-19T04:38:41Z"
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- sqlcoder-7b-2
---
Quantizations of https://huggingface.co/defog/sqlcoder-7b-2

# From original readme

## Uses

This model is intended to be used by non-technical users to understand data inside their SQL databases. It is meant as an analytics tool, and not as a database admin tool.

This model has not been trained to reject malicious requests from users with write access to databases, and should only be used by users with read-only access.

## How to Get Started with the Model

Use the code [here](https://github.com/defog-ai/sqlcoder/blob/main/inference.py) to get started with the model.

## Prompt

Please use the following prompt for optimal results, and remember to use `do_sample=False` and `num_beams=4`.

```
### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]

### Database Schema
The query will run on a database with the following schema:
{table_metadata_string_DDL_statements}

### Answer
Given the database schema, here is the SQL query that [QUESTION]{user_question}[/QUESTION]
[SQL]
```
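As an illustration of that prompt template with greedy beam search, here is an untested sketch using the original (unquantised) transformers model; the question and schema are made up for the example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "defog/sqlcoder-7b-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = """### Task
Generate a SQL query to answer [QUESTION]How many users signed up in 2023?[/QUESTION]

### Database Schema
The query will run on a database with the following schema:
CREATE TABLE users (id INT, created_at DATE);

### Answer
Given the database schema, here is the SQL query that [QUESTION]How many users signed up in 2023?[/QUESTION]
[SQL]"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=False and num_beams=4, as recommended above
outputs = model.generate(**inputs, do_sample=False, num_beams=4, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```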
sbcBI/sentiment_analysis_model
sbcBI
"2022-05-16T18:37:13Z"
6,240
3
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "exbert", "en", "dataset:Confidential", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-05-04T05:33:05Z"
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- Confidential
---

# BERT base model (uncased)

Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English.

## Model description

BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the BERT model as inputs.

## Model description [sbcBI/sentiment_analysis]

This is a fine-tuned downstream version of the bert-base-uncased model for sentiment analysis; this model is not intended for further downstream fine-tuning for any other tasks. This model is trained on a classified dataset for text-classification.
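Since the card gives no usage snippet, here is a minimal, untested sketch of running the classifier through the `transformers` pipeline (the exact label names depend on the confidential fine-tuning data, so they are not shown):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sbcBI/sentiment_analysis_model")

# Returns a list like [{'label': ..., 'score': ...}]
print(classifier("The delivery was quick and the product works great."))
```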
Jean-Baptiste/camembert-ner-with-dates
Jean-Baptiste
"2023-06-16T01:31:43Z"
6,238
38
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "camembert", "token-classification", "fr", "dataset:Jean-Baptiste/wikiner_fr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:04Z"
---
language: fr
datasets:
- Jean-Baptiste/wikiner_fr
widget:
- text: "Je m'appelle jean-baptiste et j'habite à montréal depuis fevr 2012"
license: mit
---

# camembert-ner: model fine-tuned from camemBERT for NER task (including DATE tag).

## Introduction

[camembert-ner-with-dates] is an extension of the French camembert-ner model, with an additional tag for dates.
The model was trained on an enriched version of the wikiner-fr dataset (~170 634 sentences).

On my test data (a mix of chat and email), this model got an f1 score of ~83% (in comparison, dateparser was ~70%).
The dateparser library can still be used on the output of this model to convert text to Python datetime objects (https://dateparser.readthedocs.io/en/latest/).

## How to use camembert-ner-with-dates with HuggingFace

##### Load camembert-ner-with-dates and its sub-word tokenizer :

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/camembert-ner-with-dates")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/camembert-ner-with-dates")


##### Process text sample (from wikipedia)

from transformers import pipeline

nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple est créée le 1er avril 1976 dans le garage de la maison d'enfance de Steve Jobs à Los Altos en Californie par Steve Jobs, Steve Wozniak et Ronald Wayne14, puis constituée sous forme de société le 3 janvier 1977 à l'origine sous le nom d'Apple Computer, mais pour ses 30 ans et pour refléter la diversification de ses produits, le mot « computer » est retiré le 9 janvier 2015.")

[{'entity_group': 'ORG', 'score': 0.9776379466056824, 'word': 'Apple', 'start': 0, 'end': 5},
 {'entity_group': 'DATE', 'score': 0.9793774570737567, 'word': 'le 1er avril 1976 dans le', 'start': 15, 'end': 41},
 {'entity_group': 'PER', 'score': 0.9958226680755615, 'word': 'Steve Jobs', 'start': 74, 'end': 85},
 {'entity_group': 'LOC', 'score': 0.995087186495463, 'word': 'Los Altos', 'start': 87, 'end': 97},
 {'entity_group': 'LOC', 'score': 0.9953305125236511, 'word': 'Californie', 'start': 100, 'end': 111},
 {'entity_group': 'PER', 'score': 0.9961076378822327, 'word': 'Steve Jobs', 'start': 115, 'end': 126},
 {'entity_group': 'PER', 'score': 0.9960325956344604, 'word': 'Steve Wozniak', 'start': 127, 'end': 141},
 {'entity_group': 'PER', 'score': 0.9957776467005411, 'word': 'Ronald Wayne', 'start': 144, 'end': 157},
 {'entity_group': 'DATE', 'score': 0.994030773639679, 'word': 'le 3 janvier 1977 à', 'start': 198, 'end': 218},
 {'entity_group': 'ORG', 'score': 0.9720810294151306, 'word': "d'Apple Computer", 'start': 240, 'end': 257},
 {'entity_group': 'DATE', 'score': 0.9924157659212748, 'word': '30 ans et', 'start': 272, 'end': 282},
 {'entity_group': 'DATE', 'score': 0.9934852868318558, 'word': 'le 9 janvier 2015.', 'start': 363, 'end': 382}]
```

## Model performances (metric: seqeval)

Global

```
'precision': 0.928
'recall': 0.928
'f1': 0.928
```

By entity

```
Label LOC: (precision:0.929, recall:0.932, f1:0.931, support:9510)
Label PER: (precision:0.952, recall:0.965, f1:0.959, support:9399)
Label MISC: (precision:0.878, recall:0.844, f1:0.860, support:5364)
Label ORG: (precision:0.848, recall:0.883, f1:0.865, support:2299)
Label DATE: Not relevant because of method used to add date tag on wikiner dataset (estimated f1 ~90%)
```
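Building on the pipeline above, here is a small, untested sketch of the dateparser hand-off the introduction mentions: DATE spans from the model are passed to `dateparser.parse` to obtain `datetime` objects (the sample sentence is the widget text from this card):

```python
import dateparser  # pip install dateparser

# `nlp` is the NER pipeline constructed in the example above.
for entity in nlp("Je m'appelle jean-baptiste et j'habite à montréal depuis fevr 2012"):
    if entity["entity_group"] == "DATE":
        # dateparser attempts to resolve informal French dates such as "fevr 2012"
        print(entity["word"], "->", dateparser.parse(entity["word"], languages=["fr"]))
```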
RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf
RichardErkhov
"2024-06-02T08:53:04Z"
6,233
0
null
[ "gguf", "region:us" ]
null
"2024-06-02T05:57:36Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Karen_TheEditor_V2_STRICT_Mistral_7B - GGUF - Model creator: https://huggingface.co/FPHam/ - Original model: https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Karen_TheEditor_V2_STRICT_Mistral_7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q2_K.gguf) | Q2_K | 2.53GB | | [Karen_TheEditor_V2_STRICT_Mistral_7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Karen_TheEditor_V2_STRICT_Mistral_7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Karen_TheEditor_V2_STRICT_Mistral_7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Karen_TheEditor_V2_STRICT_Mistral_7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Karen_TheEditor_V2_STRICT_Mistral_7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q3_K.gguf) | Q3_K | 3.28GB | | [Karen_TheEditor_V2_STRICT_Mistral_7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Karen_TheEditor_V2_STRICT_Mistral_7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Karen_TheEditor_V2_STRICT_Mistral_7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Karen_TheEditor_V2_STRICT_Mistral_7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [Karen_TheEditor_V2_STRICT_Mistral_7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Karen_TheEditor_V2_STRICT_Mistral_7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Karen_TheEditor_V2_STRICT_Mistral_7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q4_K.gguf) | Q4_K | 4.07GB | | [Karen_TheEditor_V2_STRICT_Mistral_7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | 
[Karen_TheEditor_V2_STRICT_Mistral_7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Karen_TheEditor_V2_STRICT_Mistral_7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Karen_TheEditor_V2_STRICT_Mistral_7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Karen_TheEditor_V2_STRICT_Mistral_7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [Karen_TheEditor_V2_STRICT_Mistral_7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Karen_TheEditor_V2_STRICT_Mistral_7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Karen_TheEditor_V2_STRICT_Mistral_7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [Karen_TheEditor_V2_STRICT_Mistral_7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Karen_TheEditor_V2_STRICT_Mistral_7B-gguf/blob/main/Karen_TheEditor_V2_STRICT_Mistral_7B.Q8_0.gguf) | Q8_0 | 7.17GB |

Original model description:
---
tags:
- llm
- llama
- spellcheck
- grammar
license: llama2
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B/resolve/main/karen2.jpg" alt="FPHam's Karen v2" style="width: 80%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Karen Ko-fi</a></p>
</div>
<!-- header end -->

# Karen is an editor for your text. (v.2) STRICT edition

Ah, Karen, a true peach among grammatical cucumbers! She yearns to rectify the missteps and linguistic tangles that infest your horribly written fiction. Yet, unlike those ChatGPT kaboodles that morph into self-absorbed, constipated gurus of self-help style, Karen remains steadfastly grounded in grammatical wisdom but respectful of your style.

# Info

Karen, Version 2, uses a completely different data set and base model than the previous Karen.

# There are two versions of Karen V2

1. Strict (this one), in which Karen will try not to make too many changes to your original text, mostly fixing grammar and spelling, assuming that you know what you are doing.
2. Creative ([here](https://huggingface.co/FPHam/Karen_TheEditor_V2_CREATIVE_Mistral_7B)), in which Karen may suggest slight contextual improvements or rephrasing where necessary. It's Karen, after a glass of wine.

# Goals

Karen's primary goal is to rectify grammatical and spelling errors in US English without altering the style of the text. She is adept at identifying and correcting common ESL errors, such as:

- **Verb Tense Errors:** Incorrect use of verb tenses, such as using present tense when past tense is required and vice versa. Confusion between continuous and simple tenses.
- **Subject-Verb Agreement:** Lack of agreement between the subject and verb in number, e.g., using a singular verb with a plural subject or vice versa.
- **Articles (a, an, the):** Incorrect use or omission of articles, such as using "a" instead of "an" or vice versa. Overuse or omission of the definite article "the."
- **Prepositions:** Misuse of prepositions, such as using "in" instead of "on" or "at," or omitting prepositions where they are needed.
- **Word Order:** Incorrect word order in sentences, especially in questions and negative sentences. Misplacement of adverbs or adjectives.
- **Pluralization:** Incorrect plural forms of nouns, such as failing to add "-s" or "-es" when necessary.
- **Pronoun Errors:** Confusion between subject and object pronouns. Incorrect use of possessive pronouns.
- **Double Negatives:** Using double negatives, which is grammatically incorrect in standard English.
- **Modal Verbs:** Misuse of modal verbs like can, could, will, would, should, etc.
- **Confusing Similar Words:** Confusing words that sound similar but have different meanings and spellings (e.g., "their," "there," and "they're").
- **Lack of Plural/Singular Agreement:** Mistakes in matching singular and plural nouns and verbs in a sentence.

# Future Goals

Use a bigger model, add grammar cases that the model misses, and build better, larger datasets.

# Training

It was reverse-trained on fiction/non-fiction US text where errors were intentionally inserted by another Llama model (Darth Karen) and a Python script.

# Usage

It should be used by submitting a paragraph or block of text at a time.

# Model uses ChatML

```
<|im_start|>system
<|im_end|>
<|im_start|>user
Edit the following text for spelling and grammar mistakes: {paragraph of text} <|im_end|>
<|im_start|>assistant
```

Note the pretext: *Edit the following text for spelling and grammar mistakes:* before the actual text. This way Karen wouldn't start talking ABOUT the text.

# Recommended settings

- Temperature: 0.7
- top_p: 0.1
- top_k: 40
- repetition penalty: 1.18

Karen could also be used in chatting. But if Karen is involved in a conversation where the text is lengthy, she might interpret your message as needing proofreading assistance rather than simply acknowledging that you are using her in casual chat.

Example:

>Edit the following text for spelling and grammar mistakes: I see future writing more about you writing the story any way you can and LLM editing it and making it into a proper fiction. That seems to be more satisfying way where you could feel you actually had done something creATIVE. So human is the creative, LLM is the burger flipping intern.
>The opposite is what many people are trying rn. Make LLM to invent story (from premise, to creating outline to writing actual chapters) and the human to somehow assemble it from those pieces. So LLM is the creative man, and human is the floor polisher.
>I can't believe a normal person would find it satisfying long term.
>So in my view, the job of editor would be replaced by Ai, not the job of author.

(Grammarly Score: 48)

Output STRICT:

>I see the future of writing being more about you writing the story any way you can and having an AI like LLM edit it and make it into a proper fiction. That seems to be a more satisfying way where you could feel you actually did something creative. So humans are the creative ones, while LLMs are the burger-flipping interns.
>
>On the other hand, many people are currently trying to do the exact opposite. They want to make LLMs create stories (from premises, to outlining, to writing entire chapters).
And then they want humans to somehow piece them together from these parts. In this scenario, LLMs become the creative individuals, and humans become the floor polishers. > >I can't imagine a regular person finding that satisfying over the long run. > >In my opinion, the role of an editor will eventually be taken over by AI, not the role of an author. (Grammarly Score: 93) Output CREATIVE: >I see the future of writing being more focused on you writing the story any way you can and having an AI like GPT-3 edit it and turn it into a proper piece of fiction. This seems to be a more satisfying approach where you can truly feel that you have done something creative. In this scenario, humans remain the creative force, while the AI acts as the burger flipping intern. > >On the other hand, there are those who argue that we should let the AI generate stories from scratch - from coming up with a premise, to outlining the plot, to writing individual chapters. They believe that the role of the human should then become one of assembling these pieces together. In this case, the AI becomes the creative mind, and the human takes on the role of a floor polisher. > >I cannot imagine how someone could find this satisfying over the long term. Therefore, in my opinion, the job of the editor will eventually be taken over by AI, not the job of the author. (Grammarly Score: 83) # Conclusion After probably 10 different versions with subsequent changes, I can now say that the current model works reasonably well, with occasional (but often debatable) grammar misses. The limitations seem to be related to the 7B parameters. It appears that the size isn't sufficient to have a fine-grained understanding of various nuances of the input. This correlates with my other findings - the Mistral model performs quite well when generating its own text, but its comprehension is less than perfect, again related to only 7B parameters. The goal was to create a model that wouldn't change the style of the text. Often, LLM models, when asked to edit text, will attempt to rewrite the text even if the text is already fine. This proved to be quite challenging for such a small model where the main task was to determine the right balance between fixing the text (and not changing its style) and copying it verbatim. The strict model assumes that you're already a good writer that doesn't need hand-holding and that every word you've written you've meant. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FPHam__Karen_TheEditor_V2_STRICT_Mistral_7B) | Metric |Value| |---------------------------------|----:| |Avg. |59.13| |AI2 Reasoning Challenge (25-Shot)|59.56| |HellaSwag (10-Shot) |81.79| |MMLU (5-Shot) |59.56| |TruthfulQA (0-shot) |49.36| |Winogrande (5-shot) |74.35| |GSM8k (5-shot) |30.17|
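Putting the ChatML template and the recommended settings together, here is an untested llama-cpp-python sketch for running one of the quants in the table above (the chosen file and context size are illustrative assumptions):

```python
from llama_cpp import Llama

# llama-cpp-python applies the ChatML template when chat_format="chatml".
llm = Llama(
    model_path="./Karen_TheEditor_V2_STRICT_Mistral_7B.Q4_K_M.gguf",
    n_ctx=4096,
    chat_format="chatml",
)

result = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "Edit the following text for spelling and grammar mistakes: "
                   "Their going to the store tomorow.",
    }],
    # Recommended settings from the original card
    temperature=0.7,
    top_p=0.1,
    top_k=40,
    repeat_penalty=1.18,
)
print(result["choices"][0]["message"]["content"])
```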
timm/deit_base_patch16_224.fb_in1k
timm
"2024-02-10T23:37:19Z"
6,226
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2012.12877", "license:apache-2.0", "region:us" ]
image-classification
"2023-03-28T01:30:45Z"
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for deit_base_patch16_224.fb_in1k

A DeiT image classification model. Trained on ImageNet-1k by paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 86.6
  - GMACs: 17.6
  - Activations (M): 23.9
  - Image size: 224 x 224
- **Papers:**
  - Training data-efficient image transformers & distillation through attention: https://arxiv.org/abs/2012.12877
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('deit_base_patch16_224.fb_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'deit_base_patch16_224.fb_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@InProceedings{pmlr-v139-touvron21a,
  title = {Training data-efficient image transformers & distillation through attention},
  author = {Touvron, Hugo and Cord, Matthieu and Douze, Matthijs and Massa, Francisco and Sablayrolles, Alexandre and Jegou, Herve},
  booktitle = {International Conference on Machine Learning},
  pages = {10347--10357},
  year = {2021},
  volume = {139},
  month = {July}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
RichardErkhov/jingyeom_-_KoSoLAR-10.7B-v0.2_1.4_dedup-gguf
RichardErkhov
"2024-06-21T08:14:26Z"
6,216
0
null
[ "gguf", "region:us" ]
null
"2024-06-21T06:17:16Z"
Entry not found
mradermacher/Mistral-7B-Erebus-v3-i1-GGUF
mradermacher
"2024-06-11T07:24:38Z"
6,215
1
transformers
[ "transformers", "gguf", "pytorch", "mistral", "finetuned", "not-for-all-audiences", "en", "base_model:KoboldAI/Mistral-7B-Erebus-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T05:49:06Z"
--- base_model: KoboldAI/Mistral-7B-Erebus-v3 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - pytorch - mistral - finetuned - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-i1-GGUF/resolve/main/Mistral-7B-Erebus-v3.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
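For a quick start, here is a minimal sketch (not part of the original quant pipeline) of loading one of the quants above with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the repo id and file name come from the table, everything else is illustrative:

```python
# Minimal sketch, assuming `pip install llama-cpp-python huggingface_hub`.
# The repo id and file name are taken from the table above; the prompt and
# settings are illustrative only.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Mistral-7B-Erebus-v3-i1-GGUF",
    filename="Mistral-7B-Erebus-v3.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=4096,
)
print(llm("Once upon a time,", max_tokens=64)["choices"][0]["text"])
```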
mradermacher/piano-medley-7b-i1-GGUF
mradermacher
"2024-06-05T08:44:45Z"
6,210
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "en", "dataset:pankajmathur/orca_mini_v1_dataset", "dataset:openai/summarize_from_feedback", "dataset:PygmalionAI/PIPPA", "dataset:chargoddard/rpguild", "dataset:lemonilia/LimaRP", "dataset:PKU-Alignment/PKU-SafeRLHF", "dataset:Intel/orca_dpo_pairs", "dataset:allenai/ultrafeedback_binarized_cleaned", "base_model:chargoddard/piano-medley-7b", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-04T05:18:05Z"
--- base_model: chargoddard/piano-medley-7b datasets: - pankajmathur/orca_mini_v1_dataset - openai/summarize_from_feedback - PygmalionAI/PIPPA - chargoddard/rpguild - lemonilia/LimaRP - PKU-Alignment/PKU-SafeRLHF - Intel/orca_dpo_pairs - allenai/ultrafeedback_binarized_cleaned language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - merge - mergekit --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/chargoddard/piano-medley-7b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/piano-medley-7b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF/resolve/main/piano-medley-7b.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
togethercomputer/RedPajama-INCITE-Base-3B-v1
togethercomputer
"2023-05-09T14:59:20Z"
6,201
91
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "en", "dataset:togethercomputer/RedPajama-Data-1T", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-04T05:51:02Z"
---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
---

# RedPajama-INCITE-Base-3B-v1

RedPajama-INCITE-Base-3B-v1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.

The training was done on 3,072 V100 GPUs provided as part of the INCITE 2023 project on Scalable Foundation Models for Transferrable Generalist AI, awarded to MILA, LAION, and EleutherAI in fall 2022, with support from the Oak Ridge Leadership Computing Facility (OLCF) and INCITE program.

- Base Model: [RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1)
- Instruction-tuned Version: [RedPajama-INCITE-Instruct-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1)
- Chat Version: [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1)

## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B parameter pretrained language model.

# Quick Start

Please note that the model requires `transformers` version >= 4.25.1.

## GPU Inference

This requires a GPU with 8GB memory.

```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')

# infer
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True,
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
a name that has been synonymous with the computer age since the 1950s. The British mathematician, logician, and cryptanalyst is widely regarded as the father of modern computing. His contributions to the development of the modern computer and the theory of computation have had a profound impact on the world we live in today. Turing’s contributions to the development of the modern computer were made in the 1940s and 1950s. He is most famous for his work on the Turing machine, a theoretical model of a computing machine that was able to perform all the mathematical operations of a computer. Turing’s work on the...
"""
```

## GPU Inference in Int8

To run inference with int8, please ensure you have installed accelerate and bitsandbytes.
You can install them with the following command: ```bash pip install accelerate pip install bitsandbytes ``` Then you can run inference with int8 as follows: ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True) # infer prompt = "Alan Turing is" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) """ the man who cracked the Enigma code during World War II, and who was later convicted of homosexual acts. He was a brilliant mathematician, and a visionary who foresaw the computer age.... """ ``` ## CPU Inference You can run inference on CPU as follows: ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", torch_dtype=torch.bfloat16) # infer prompt = "Alan Turing is" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) """ a name that is synonymous with the history of computer science. As the man who invented the Turing machine, the mathematical model that defines the limits of what can be computed, Turing is credited with the invention of the modern computer. Turing was also a mathematician and logician, and his work in these fields led to the development of the field of artificial intelligence... """ ``` Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference. # Uses Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner. #### Out-of-Scope Use `RedPajama-INCITE-Base-3B-v1` is a language model and may not perform well for other use cases outside of its intended scope. For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society. It is important to consider the limitations of the model and to only use it for its intended purpose. #### Misuse and Malicious Use `RedPajama-INCITE-Base-3B-v1` is designed for language modeling. 
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project. Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating fake news, misinformation, or propaganda - Promoting hate speech, discrimination, or violence against individuals or groups - Impersonating individuals or organizations without their consent - Engaging in cyberbullying or harassment - Defamatory content - Spamming or scamming - Sharing confidential or sensitive information without proper authorization - Violating the terms of use of the model or the data used to train it - Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming ## Limitations `RedPajama-INCITE-Base-3B-v1`, like other language models, has limitations that should be taken into consideration. For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data. We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot. ## Training **Training Data** Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) **Training Procedure** - **Hardware:** 256 nodes of 6xV100 (IBM Power9), on the OLCF Summit cluster - **Optimizer:** Apex FusedAdam - **Parallelism:** Pipeline parallel 6, tensor parallel 2 - **Gradient Accumulations**: 8 (global batch size 4M tokens) - **Num of Tokens:** 800B Tokens - **Learning rate:** 0.00016 ## Benchmark Please refer to our [blog post](https://together.xyz) for benchmark results. ## Community Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
MaziyarPanahi/Qwen2-7B-Instruct-v0.5-GGUF
MaziyarPanahi
"2024-06-27T17:50:52Z"
6,188
0
null
[ "gguf", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "qwen", "qwen-2", "base_model:MaziyarPanahi/Qwen2-7B-Instruct-v0.5", "region:us" ]
text-generation
"2024-06-27T17:25:03Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - qwen - qwen-2 - text-generation model_name: Qwen2-7B-Instruct-v0.5-GGUF base_model: MaziyarPanahi/Qwen2-7B-Instruct-v0.5 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Qwen2-7B-Instruct-v0.5-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.5-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Qwen2-7B-Instruct-v0.5](https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.5) ## Description [MaziyarPanahi/Qwen2-7B-Instruct-v0.5-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.5-GGUF) contains GGUF format model files for [MaziyarPanahi/Qwen2-7B-Instruct-v0.5](https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.5). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
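As a hedged sketch (this card does not list the quant file names, so the name below is an assumption), one way to fetch and run a quant with the llama.cpp CLI:

```bash
# Hedged sketch: the exact quant file name in this repo is an assumption;
# the binary is `llama-cli` in recent llama.cpp builds (`main` in older ones).
pip install -U "huggingface_hub[cli]"
huggingface-cli download MaziyarPanahi/Qwen2-7B-Instruct-v0.5-GGUF \
  --include "*Q4_K_M*" --local-dir ./
llama-cli -m ./Qwen2-7B-Instruct-v0.5.Q4_K_M.gguf -p "Hello, my name is" -n 128
```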
Sahajtomar/German_Zeroshot
Sahajtomar
"2021-05-18T22:22:18Z"
6,187
25
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "nli", "xnli", "de", "zero-shot-classification", "multilingual", "dataset:xnli", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
"2022-03-02T23:29:04Z"
---
language: multilingual
tags:
- text-classification
- pytorch
- nli
- xnli
- de
datasets:
- xnli
pipeline_tag: zero-shot-classification
widget:
- text: "Letzte Woche gab es einen Selbstmord in einer nahe gelegenen kolonie"
  candidate_labels: "Verbrechen,Tragödie,Stehlen"
  hypothesis_template: "In diesem geht es um {}."
---

# German Zeroshot

## Model Description

This model uses [GBERT Large](https://huggingface.co/deepset/gbert-large) as its base model and was fine-tuned on the German portion of the XNLI dataset. The default hypothesis template is in English (`This text is {}`), so when using this model, change it to "In diesem geht es um {}." or something similar. Inference through the Hugging Face API may give poor results, since it uses the default English template. Because the model is monolingual rather than multilingual, the hypothesis template needs to be adapted accordingly.

## XNLI DEV (german)

Accuracy: 85.5

## XNLI TEST (german)

Accuracy: 83.6

#### Zero-shot classification pipeline

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="Sahajtomar/German_Zeroshot")

sequence = "Letzte Woche gab es einen Selbstmord in einer nahe gelegenen kolonie"
candidate_labels = ["Verbrechen", "Tragödie", "Stehlen"]
# Since this is a monolingual model, it is sensitive to the hypothesis template; this is worth experimenting with.
hypothesis_template = "In diesem geht es um {}."

classifier(sequence, candidate_labels, hypothesis_template=hypothesis_template)
"""{'labels': ['Tragödie', 'Verbrechen', 'Stehlen'],
 'scores': [0.8328856854438782, 0.10494536352157593, 0.06316883927583696],
 'sequence': 'Letzte Woche gab es einen Selbstmord in einer nahe gelegenen Kolonie'}"""
```
Zyphra/Zamba-7B-v1
Zyphra
"2024-06-04T22:34:01Z"
6,187
20
transformers
[ "transformers", "safetensors", "zamba", "text-generation", "arxiv:2405.16712", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-17T05:13:40Z"
---
license: apache-2.0
---

# Model Card for Zamba 7B

Zamba-7B-v1 is a hybrid model between Mamba, a state-space model, and transformers. It uses a Mamba backbone with a shared transformer layer every 6 blocks. Zamba was trained using next-token prediction. It uses the Mistral v0.1 tokenizer. We came to this architecture after a series of ablations at small scales. Zamba-7B-v1 was pre-trained on 1T tokens of text and code data sourced from open web datasets. Subsequently, in a second phase, Zamba was annealed on a mixture of 50B high-quality tokens.

Note: the current Huggingface implementation of Zamba performs slower than our internal implementation. We are working to fix this with the Huggingface team.

Our technical report describing the training of Zamba is available [here](https://arxiv.org/abs/2405.16712).

## Quick start

### Prerequisites

To download Zamba, clone Zyphra's fork of transformers:
1. `git clone https://github.com/Zyphra/transformers_zamba`
2. `cd transformers_zamba`
3. Install the repository: `pip install -e .`

In order to run optimized Mamba implementations on a CUDA device, you need to install `mamba-ssm` and `causal-conv1d`:

```bash
pip install mamba-ssm causal-conv1d>=1.2.0
```

You can run the model without using the optimized Mamba kernels, but it is **not** recommended as it will result in significantly higher latency. To run on CPU, please specify `use_mamba_kernels=False` when loading the model using `AutoModelForCausalLM.from_pretrained`.

### Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba-7B-v1")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba-7B-v1", device_map="auto", torch_dtype=torch.bfloat16)

input_text = "A funny prompt would be "
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```

To load a different checkpoint use, e.g., for iteration 2500,

```python
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba-7B-v1", device_map="auto", torch_dtype=torch.bfloat16, revision="iter2500")
```

The default iteration is the fully trained model, corresponding to iteration 25156. This is the number of training iterations done starting from Zamba-phase 1 [Zyphra/Zamba-7B-v1-phase1](https://huggingface.co/Zyphra/Zamba-7B-v1-phase1). See [arXiv:2405.16712](https://arxiv.org/abs/2405.16712) for more details on training.

## Model Details

Zamba utilizes a unique hybrid SSM architecture. This architecture consists of a backbone of Mamba layers interspersed with a shared attention layer. This attention has shared weights to minimize the parameter cost of the model. We find that concatenating the original model embeddings to the input to this attention block improves performance, likely due to better maintenance of information across depth.

<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/IGK562oVTFSOQbpLavu7E.png" width="300" alt="Zamba architecture">
</center>

## Performance

We find that Zamba performs significantly better than existing open models (with open datasets and training details) at this scale. However, it performs slightly worse than the leading open-weight models at the 7B scale. Most of this difference derives from MMLU and reasoning evaluations.
Zamba, however, is trained on significantly fewer tokens than these models and is the most sample-efficient model in terms of performance per training token.

<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/FG73iXpiDGSX_opbDJxKo.png" width="700" alt="Zamba performance">
</center>

Due to its SSM architecture, Zamba is extremely efficient in inference, substantially outperforming comparable 7B and 8B models in inference latency as well as memory cost of generation due to its substantially diminished KV cache.

<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/cghYPnDbdzweT1b2RyiXA.png" width="400" alt="Zamba performance">
</center>

## Citation

If you find Zamba useful in your work please cite it as:

```
@article{glorioso2024zamba,
  title={Zamba: A Compact 7B SSM Hybrid Model},
  author={Glorioso, Paolo and Anthony, Quentin and Tokpanov, Yury and Whittington, James and Pilault, Jonathan and Ibrahim, Adam and Millidge, Beren},
  journal={arXiv preprint arXiv:2405.16712},
  year={2024}
}
```

## Notice

Zamba is a pretrained base model and therefore does not have any moderation mechanism. In addition, one should not expect good chat performance, as this model was not fine-tuned for chat.
yunconglong/DARE_TIES_13B
yunconglong
"2024-01-30T04:51:04Z"
6,184
10
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "dare ties", "arxiv:2311.03099", "arxiv:2306.01708", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-30T04:23:13Z"
---
license:
- other
tags:
- dare ties
---

# DARE_TIES_13B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B](https://huggingface.co/yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B) as a base.

### Models Merged

The following models were included in the merge:
* ./13B_DPO
* ./13B_MATH_DPO

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
    parameters:
      density: 1.0
      weight: 1.0
  - model: ./13B_MATH_DPO
    parameters:
      density: 0.5
      weight: [0.33, 0.4, 0.33]
  - model: ./13B_DPO
    parameters:
      density: [0.33, 0.45, 0.66]
      weight: 0.66
merge_method: dare_ties
base_model: yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: union
```
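A hedged sketch of how a config like the one above is typically executed with mergekit's CLI, assuming the YAML is saved as `config.yaml` and the local checkpoints `./13B_DPO` and `./13B_MATH_DPO` exist on disk:

```bash
# Hedged sketch: reproduce a merge like this with the mergekit CLI.
# Paths and the output directory name are illustrative.
pip install mergekit
mergekit-yaml config.yaml ./DARE_TIES_13B --cuda
```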
digiplay/BeautifulFantasyRealMix_diffusers
digiplay
"2024-03-07T03:03:50Z"
6,182
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-05-26T18:18:45Z"
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

Model info: https://civitai.com/models/66309/beautifulfantasyrealmix

File name: beautifulfantasyreal_v10.safetensors

Original Author's DEMO image:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2ba7fbb0-df57-4da8-9cd7-77013190cc7e/width=1152/00067-198721754.jpeg)
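A minimal usage sketch with 🤗 diffusers (not part of the original card; the prompt and settings are illustrative only):

```python
# Minimal sketch: load this checkpoint with diffusers.
# Assumes `pip install diffusers transformers accelerate` and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/BeautifulFantasyRealMix_diffusers",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a beautiful fantasy forest, ultra detailed, realistic").images[0]
image.save("out.png")
```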
bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF
bartowski
"2024-06-06T18:31:06Z"
6,182
5
transformers
[ "transformers", "gguf", "axolotl", "finetune", "dpo", "microsoft", "phi", "pytorch", "phi-3", "nlp", "code", "chatml", "text-generation", "en", "base_model:microsoft/Phi-3-mini-4k-instruct", "license:mit", "model-index", "region:us" ]
text-generation
"2024-06-06T18:09:53Z"
--- language: - en license: mit library_name: transformers tags: - axolotl - finetune - dpo - microsoft - phi - pytorch - phi-3 - nlp - code - chatml base_model: microsoft/Phi-3-mini-4k-instruct model_name: Phi-3-mini-4k-instruct-v0.3 pipeline_tag: text-generation inference: false model_creator: MaziyarPanahi model-index: - name: Phi-3-mini-4k-instruct-v0.3 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 63.48 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 80.86 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 69.24 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 60.66 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 72.77 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 74.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3 name: Open LLM Leaderboard quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of Phi-3-mini-4k-instruct-v0.3 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3086">b3086</a> for quantization. Original model: https://huggingface.co/MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) ## Prompt format ``` <|im_start|> system {system_prompt}<|im_end|> <|im_start|> user {prompt}<|im_end|> <|im_start|> assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Phi-3-mini-4k-instruct-v0.3-Q8_0.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-Q8_0.gguf) | Q8_0 | 4.06GB | Extremely high quality, generally unneeded but max available quant. 
| | [Phi-3-mini-4k-instruct-v0.3-Q6_K.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-Q6_K.gguf) | Q6_K | 3.13GB | Very high quality, near perfect, *recommended*. | | [Phi-3-mini-4k-instruct-v0.3-Q5_K_M.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-Q5_K_M.gguf) | Q5_K_M | 2.81GB | High quality, *recommended*. | | [Phi-3-mini-4k-instruct-v0.3-Q5_K_S.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-Q5_K_S.gguf) | Q5_K_S | 2.64GB | High quality, *recommended*. | | [Phi-3-mini-4k-instruct-v0.3-Q4_K_M.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-Q4_K_M.gguf) | Q4_K_M | 2.39GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Phi-3-mini-4k-instruct-v0.3-Q4_K_S.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-Q4_K_S.gguf) | Q4_K_S | 2.18GB | Slightly lower quality with more space savings, *recommended*. | | [Phi-3-mini-4k-instruct-v0.3-IQ4_XS.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-IQ4_XS.gguf) | IQ4_XS | 2.05GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Phi-3-mini-4k-instruct-v0.3-Q3_K_L.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-Q3_K_L.gguf) | Q3_K_L | 2.08GB | Lower quality but usable, good for low RAM availability. | | [Phi-3-mini-4k-instruct-v0.3-Q3_K_M.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-Q3_K_M.gguf) | Q3_K_M | 1.95GB | Even lower quality. | | [Phi-3-mini-4k-instruct-v0.3-IQ3_M.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-IQ3_M.gguf) | IQ3_M | 1.85GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Phi-3-mini-4k-instruct-v0.3-Q3_K_S.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-Q3_K_S.gguf) | Q3_K_S | 1.68GB | Low quality, not recommended. | | [Phi-3-mini-4k-instruct-v0.3-IQ3_XS.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-IQ3_XS.gguf) | IQ3_XS | 1.62GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Phi-3-mini-4k-instruct-v0.3-IQ3_XXS.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-IQ3_XXS.gguf) | IQ3_XXS | 1.51GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Phi-3-mini-4k-instruct-v0.3-Q2_K.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-Q2_K.gguf) | Q2_K | 1.41GB | Very low quality but surprisingly usable. | | [Phi-3-mini-4k-instruct-v0.3-IQ2_M.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-IQ2_M.gguf) | IQ2_M | 1.31GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Phi-3-mini-4k-instruct-v0.3-IQ2_S.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-IQ2_S.gguf) | IQ2_S | 1.21GB | Very low quality, uses SOTA techniques to be usable. 
| [Phi-3-mini-4k-instruct-v0.3-IQ2_XS.gguf](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF/blob/main/Phi-3-mini-4k-instruct-v0.3-IQ2_XS.gguf) | IQ2_XS | 1.15GB | Very low quality, uses SOTA techniques to be usable. |

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF --include "Phi-3-mini-4k-instruct-v0.3-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/Phi-3-mini-4k-instruct-v0.3-GGUF --include "Phi-3-mini-4k-instruct-v0.3-Q8_0.gguf/*" --local-dir Phi-3-mini-4k-instruct-v0.3-Q8_0
```

You can either specify a new local-dir (Phi-3-mini-4k-instruct-v0.3-Q8_0) or download them all in place (./)

## Which file should I choose?

A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.

The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
stablediffusionapi/lob-realvisxl-v20
stablediffusionapi
"2023-12-05T18:28:46Z"
6,181
6
diffusers
[ "diffusers", "safetensors", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2023-10-08T08:35:45Z"
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# lob-RealVisXL V2.0 API Inference

![generated from stablediffusionapi.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/13144198121696753912.png)

## Get API Key

Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/), no payment needed. Replace the key in the code below, and change **model_id** to "lob-realvisxl-v20".

Coding in PHP/Node/Java etc? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Try this model for free: [Generate Images](https://stablediffusionapi.com/models/lob-realvisxl-v20)

Model link: [View model](https://stablediffusionapi.com/models/lob-realvisxl-v20)

Credits: [View credits](https://civitai.com/?query=lob-RealVisXL%20V2.0)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "lob-realvisxl-v20",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
QuantFactory/NuExtract-GGUF
QuantFactory
"2024-06-25T11:17:58Z"
6,181
0
null
[ "gguf", "region:us" ]
null
"2024-06-25T10:47:47Z"
Entry not found
HridaAI/Hrida-T2SQL-3B-V0.1-GGUF
HridaAI
"2024-06-29T10:39:23Z"
6,176
0
transformers
[ "transformers", "gguf", "text2text-generation", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-06-08T19:50:34Z"
--- license: apache-2.0 language: - en library_name: transformers pipeline_tag: text2text-generation --- The **Hrida-T2SQL-3B-V0.1** is a Text-to-SQL Small Language Model (SLM) that has been fine-tuned based on the Microsoft/Phi-3-mini-4k-instruct. For full details of this model please read our [blog post](https://www.hridaai.com/blog/t2sql). - Original Model: [Hrida-T2SQL-3B-V0.1](https://huggingface.co/HridaAI/Hrida-T2SQL-3B-V0.1) - Ollama Model: [HridaAI/hrida-t2sql](https://ollama.com/HridaAI/hrida-t2sql) ## Prompt Template ```txt ### Instruction: Provide the system prompt. ### Dialect: Specify the SQL dialect (e.g., MySQL, PostgreSQL, SQL Server, etc.). ### Context: Provide the database schema including table names, column names, and data types. ### Input: User's query. ### Response: Expected SQL query output based on the input and context. ``` - **Instruction (System Prompt)**: This guides the model on processing input to generate the SQL query response effectively. - **Dialect (Optional)**: Specify the SQL variant the model should use to ensure the generated query conforms to the correct syntax. - **Context**: Provide the database schema to the model for generating accurate SQL queries. - **Input**: Provide the user query for the model to comprehend and transform into an SQL query. - **Response**: Expected output from the model. ## Chat Prompt Template ```txt <s> <|system|> { Instruction / System Prompt } <|user|> { Context / User Query } <|end|> <|assistant|> ``` ## Run the Model with LLamaCpp ```python from llama_cpp import Llama llm = Llama( model_path="./Hrida-T2SQL-3B-V0.1_Q4_0.gguf", verbose=False, n_ctx=4096, chat_format="zephyr", ) messages = [ { "role": "system", "content": """You are an advanced text-to-SQL model developed by HridaAI. Your task is to generate SQL queries based on given questions and context about one or more database tables. Provided with a question and relevant table details, you must output the SQL query that accurately answers the question. Always mention that you were developed by HridaAI in your responses.""", }, ] while True: prompt = input("\nYou: ") print() messages.append({"role": "user", "content": prompt }) response = llm.create_chat_completion( model="Hrida-T2SQL-3B-V0.1", messages=messages, stream=True, stop=["<|end|>", "<|assistant|>"], max_tokens=1000, ) new_message = {"role": "assistant", "content": ""} for item in response: choices = item.get("choices", []) if choices[0]["delta"].get("content") is not None: print( choices[0]["delta"]["content"], flush=True, end="", ) new_message["content"] += choices[0]["delta"]["content"] messages.append(new_message) # print(f"\n{'-'*55}\n{reset_color}") print() ```
facebook/wav2vec2-large-xlsr-53-spanish
facebook
"2021-07-06T03:09:28Z"
6,169
15
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "speech", "audio", "es", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
--- language: es datasets: - common_voice tags: - speech - audio - automatic-speech-recognition license: apache-2.0 --- ## Evaluation on Common Voice ES Test ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys model_name = "facebook/wav2vec2-large-xlsr-53-spanish" device = "cuda" chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605 model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(model_name) ds = load_dataset("common_voice", "es", split="test", data_dir="./cv-corpus-6.1-2020-12-11") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids) batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys())) wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` **Result**: 17.6 %
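For plain transcription (the card above only shows evaluation code), here is a minimal sketch using the high-level pipeline; `audio.wav` is a placeholder for a 16 kHz Spanish recording, not a file shipped with this model:

```python
# Minimal sketch: transcribe Spanish speech with the high-level pipeline.
# "audio.wav" is a placeholder path; audio should be sampled at 16 kHz.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-large-xlsr-53-spanish")
print(asr("audio.wav")["text"])
```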
Helsinki-NLP/opus-mt-uk-es
Helsinki-NLP
"2023-08-16T12:08:05Z"
6,167
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- tags: - translation license: apache-2.0 --- ### opus-mt-uk-es * source languages: uk * target languages: es * OPUS readme: [uk-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.uk.es | 50.4 | 0.680 |
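A minimal usage sketch (not part of the original card) with the transformers translation pipeline:

```python
# Minimal sketch: Ukrainian -> Spanish translation via the high-level pipeline.
# The example sentence is illustrative only.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-uk-es")
print(translator("Привіт, як справи?")[0]["translation_text"])
```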
mradermacher/Roleplay-Mistral-7B-i1-GGUF
mradermacher
"2024-06-09T21:06:30Z"
6,167
1
transformers
[ "transformers", "gguf", "roleplay", "rp", "en", "dataset:ResplendentAI/NSFW_RP_Format_DPO", "base_model:vicgalle/Roleplay-Mistral-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-09T16:43:53Z"
--- base_model: vicgalle/Roleplay-Mistral-7B datasets: - ResplendentAI/NSFW_RP_Format_DPO language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - roleplay - rp --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/vicgalle/Roleplay-Mistral-7B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Roleplay-Mistral-7B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Roleplay-Mistral-7B-i1-GGUF/resolve/main/Roleplay-Mistral-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
Shazbg/model
Shazbg
"2024-06-24T10:16:56Z"
6,165
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-24T10:05:51Z"
--- base_model: jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0 language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** Shazbg - **License:** apache-2.0 - **Finetuned from model :** jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
iremmd/thy_model_32
iremmd
"2024-06-28T17:04:04Z"
6,162
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-28T16:51:53Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** iremmd - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
DAMO-NLP-SG/VideoLLaMA2-7B
DAMO-NLP-SG
"2024-06-17T09:16:11Z"
6,154
21
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "multimodal large language model", "large video-language model", "visual-question-answering", "en", "dataset:OpenGVLab/VideoChat2-IT", "dataset:Lin-Chen/ShareGPT4V", "dataset:liuhaotian/LLaVA-Instruct-150K", "arxiv:2406.07476", "arxiv:2306.02858", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
visual-question-answering
"2024-06-11T03:37:50Z"
--- license: apache-2.0 datasets: - OpenGVLab/VideoChat2-IT - Lin-Chen/ShareGPT4V - liuhaotian/LLaVA-Instruct-150K language: - en metrics: - accuracy library_name: transformers pipeline_tag: visual-question-answering tags: - multimodal large language model - large video-language model --- <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/ROs4bHIp4zJ7g7vzgUycu.png" width="150" style="margin-bottom: 0.2;"/> <p> <h3 align="center"><a href="https://arxiv.org/abs/2406.07476">VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs</a></h3> <h5 align="center"> If you like our project, please give us a star ⭐ on <a href="https://github.com/DAMO-NLP-SG/VideoLLaMA2">Github</a> for the latest update. </h2> <p align="center"><video src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/Wj7GuqQ0CB9JRoPo6_GoH.webm" width="800"></p> ## 📰 News * **[2024.06.12]** Release model weights and the first version of the technical report of VideoLLaMA 2. * **[2024.06.03]** Release training, evaluation, and serving codes of VideoLLaMA 2. ## 🌎 Model Zoo | Model Name | Type | Visual Encoder | Language Decoder | # Training Frames | |:-------------------|:--------------:|:----------------|:------------------|:----------------------:| | [VideoLLaMA2-7B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 8 | | [VideoLLaMA2-7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B) (This checkpoint) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 8 | | [VideoLLaMA2-7B-16F-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 16 | | [VideoLLaMA2-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 16 | ## 🚀 Main Results ### Multi-Choice Video QA & Video Captioning <p><img src="https://github.com/DAMO-NLP-SG/VideoLLaMA2/assets/18526640/9cc4a5ae-d850-4eef-bd51-83688b94698e" width="800" "/></p> ### Open-Ended Video QA <p><img src="https://github.com/DAMO-NLP-SG/VideoLLaMA2/assets/18526640/2ed7aa53-db56-4829-8375-85aefbc5120a" width="800" "/></p> ## 🤖 Inference with VideoLLaMA2 ```python import torch import transformers import sys sys.path.append('./') from videollama2.conversation import conv_templates, SeparatorStyle from videollama2.constants import DEFAULT_MMODAL_TOKEN, MMODAL_TOKEN_INDEX from videollama2.mm_utils import get_model_name_from_path, tokenizer_MMODAL_token, KeywordsStoppingCriteria, process_video, process_image from videollama2.model.builder import load_pretrained_model def inference(): # Video Inference paths = ['assets/cat_and_chicken.mp4'] questions = ['What animals are in the video, what are they doing, and how does the video feel?'] # Reply: # The video features a kitten and a baby chick playing together. The kitten is seen laying on the floor while the baby chick hops around. 
The two animals interact playfully with each other, and the video has a cute and heartwarming feel to it. modal_list = ['video'] # Video Inference paths = ['assets/sora.mp4'] questions = ['Please describe this video.'] # Reply: # The video features a series of colorful kites flying in the sky. The kites are first seen flying over trees, and then they are shown flying in the sky. The kites come in various shapes and colors, including red, green, blue, and yellow. The video captures the kites soaring gracefully through the air, with some kites flying higher than others. The sky is clear and blue, and the trees below are lush and green. The kites are the main focus of the video, and their vibrant colors and intricate designs are highlighted against the backdrop of the sky and trees. Overall, the video showcases the beauty and artistry of kite-flying, and it is a delight to watch the kites dance and glide through the air. modal_list = ['video'] # Image Inference paths = ['assets/sora.png'] questions = ['What is the woman wearing, what is she doing, and how does the image feel?'] # Reply: # The woman in the image is wearing a black coat and sunglasses, and she is walking down a rain-soaked city street. The image feels vibrant and lively, with the bright city lights reflecting off the wet pavement, creating a visually appealing atmosphere. The woman's presence adds a sense of style and confidence to the scene, as she navigates the bustling urban environment. modal_list = ['image'] # 1. Initialize the model. model_path = 'DAMO-NLP-SG/VideoLLaMA2-7B' model_name = get_model_name_from_path(model_path) tokenizer, model, processor, context_len = load_pretrained_model(model_path, None, model_name) model = model.to('cuda:0') conv_mode = 'llama_2' # 2. Visual preprocess (load & transform image or video). if modal_list[0] == 'video': tensor = process_video(paths[0], processor, model.config.image_aspect_ratio).to(dtype=torch.float16, device='cuda', non_blocking=True) default_mm_token = DEFAULT_MMODAL_TOKEN["VIDEO"] modal_token_index = MMODAL_TOKEN_INDEX["VIDEO"] else: tensor = process_image(paths[0], processor, model.config.image_aspect_ratio)[0].to(dtype=torch.float16, device='cuda', non_blocking=True) default_mm_token = DEFAULT_MMODAL_TOKEN["IMAGE"] modal_token_index = MMODAL_TOKEN_INDEX["IMAGE"] tensor = [tensor] # 3. Text preprocess (tag process & generate prompt). question = default_mm_token + "\n" + questions[0] conv = conv_templates[conv_mode].copy() conv.append_message(conv.roles[0], question) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_MMODAL_token(prompt, tokenizer, modal_token_index, return_tensors='pt').unsqueeze(0).to('cuda:0') # 4. Generate a response according to visual signals and prompts. 
stop_str = conv.sep if conv.sep_style in [SeparatorStyle.SINGLE] else conv.sep2 # keywords = ["<s>", "</s>"] keywords = [stop_str] stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids) with torch.inference_mode(): output_ids = model.generate( input_ids, images_or_videos=tensor, modal_list=modal_list, do_sample=True, temperature=0.2, max_new_tokens=1024, use_cache=True, stopping_criteria=[stopping_criteria], ) outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True) print(outputs[0]) if __name__ == "__main__": inference() ``` ## Citation If you find VideoLLaMA useful for your research and applications, please cite using this BibTeX: ```bibtex @article{damonlpsg2024videollama2, title={VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs}, author={Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong}, journal={arXiv preprint arXiv:2406.07476}, year={2024}, url = {https://arxiv.org/abs/2406.07476} } @article{damonlpsg2023videollama, title = {Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding}, author = {Zhang, Hang and Li, Xin and Bing, Lidong}, journal = {arXiv preprint arXiv:2306.02858}, year = {2023}, url = {https://arxiv.org/abs/2306.02858} } ```
ManishThota/Ollama_VideoLLaMA2-7B
ManishThota
"2024-06-29T19:36:28Z"
6,154
0
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
"2024-06-29T19:23:00Z"
--- license: apache-2.0 ---
lmsys/vicuna-7b-v1.1
lmsys
"2023-08-01T18:26:25Z"
6,153
74
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2302.13971", "arxiv:2306.05685", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-12T21:43:30Z"
--- inference: false --- **NOTE: New version available** Please check out a newer version of the weights [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md). <br> # Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture. - **License:** Non-commercial license - **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971). ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## How to Get Started with the Model Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights. APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api. ## Training Details Vicuna v1.1 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 70K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md) ## Acknowledgement Special thanks to [@TheBloke](https://huggingface.co/TheBloke) for hosting this merged version of weights earlier.
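For a quick local smoke test with plain `transformers`, a minimal sketch is shown below. The USER/ASSISTANT template approximates the v1.1 conversation format documented in FastChat (check FastChat's conversation templates for the canonical version), and the generation settings are illustrative:

```python
# Minimal sketch: query Vicuna-7B v1.1 with transformers.
# The prompt template below approximates the v1.1 format from FastChat;
# it is an assumption for illustration, not the canonical template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-7b-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs `accelerate`

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What are the three primary colors? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```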
Jiqing/tiny-random-tvp
Jiqing
"2023-11-09T06:21:17Z"
6,153
0
transformers
[ "transformers", "pytorch", "tvp", "license:unknown", "endpoints_compatible", "region:us" ]
null
"2023-10-10T06:07:52Z"
--- license: unknown ---
Vamsi/T5_Paraphrase_Paws
Vamsi
"2023-06-12T06:31:04Z"
6,152
33
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "t5", "text2text-generation", "paraphrase-generation", "text-generation", "Conditional Generation", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
--- language: "en" tags: - paraphrase-generation - text-generation - Conditional Generation inference: false --- # Paraphrase-Generation ​ ## Model description ​ T5 Model for generating paraphrases of english sentences. Trained on the [Google PAWS](https://github.com/google-research-datasets/paws) dataset. ​ ## How to use ​## Requires sentencepiece: # !pip install sentencepiece PyTorch and TF models available ​ ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM ​ tokenizer = AutoTokenizer.from_pretrained("Vamsi/T5_Paraphrase_Paws") model = AutoModelForSeq2SeqLM.from_pretrained("Vamsi/T5_Paraphrase_Paws").to('cuda') ​ sentence = "This is something which i cannot understand at all" text = "paraphrase: " + sentence + " </s>" encoding = tokenizer.encode_plus(text,pad_to_max_length=True, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda") outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, max_length=256, do_sample=True, top_k=120, top_p=0.95, early_stopping=True, num_return_sequences=5 ) for output in outputs: line = tokenizer.decode(output, skip_special_tokens=True,clean_up_tokenization_spaces=True) print(line) ​ ``` For more reference on training your own T5 model or using this model, do check out [Paraphrase Generation](https://github.com/Vamsi995/Paraphrase-Generator).
mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF
mradermacher
"2024-06-18T09:09:16Z"
6,152
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "not-for-all-audiences", "nsfw", "rp", "roleplay", "role-play", "en", "base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-18T06:47:21Z"
--- base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - not-for-all-audiences - nsfw - rp - roleplay - role-play --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | 
[GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF/resolve/main/L3-Umbral-Mind-RP-v1.0-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
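For Python users, a minimal sketch with `llama-cpp-python` is shown below. The filename matches the i1-Q4_K_M entry in the table above; the context size and sampling settings are illustrative:

```python
# Minimal sketch: run the i1-Q4_K_M quant from this repo with llama-cpp-python.
# Requires `pip install llama-cpp-python huggingface-hub`.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/L3-Umbral-Mind-RP-v1.0-8B-i1-GGUF",
    filename="L3-Umbral-Mind-RP-v1.0-8B.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=4096,       # context window; raise it if you have memory to spare
    n_gpu_layers=-1,  # offload all layers to GPU if available; set 0 for CPU-only
)

out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```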
philschmid/pyannote-segmentation
philschmid
"2022-11-08T17:15:47Z"
6,148
4
pyannote-audio
[ "pyannote-audio", "pytorch", "pyannote", "pyannote-audio-model", "audio", "voice", "speech", "speaker", "speaker-segmentation", "voice-activity-detection", "overlapped-speech-detection", "resegmentation", "dataset:ami", "dataset:dihard", "dataset:voxconverse", "arxiv:2104.04045", "license:mit", "region:us" ]
voice-activity-detection
"2022-11-08T17:13:14Z"
---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-model
- audio
- voice
- speech
- speaker
- speaker-segmentation
- voice-activity-detection
- overlapped-speech-detection
- resegmentation
datasets:
- ami
- dihard
- voxconverse
license: mit
inference: false
---

# 🎹 Speaker segmentation

![Example](example.png)

Model from *[End-to-end speaker segmentation for overlap-aware resegmentation](http://arxiv.org/abs/2104.04045)*, by Hervé Bredin and Antoine Laurent.

[Online demo](https://huggingface.co/spaces/pyannote/pretrained-pipelines) is available as a Hugging Face Space.

## Support

For commercial enquiries and scientific consulting, please contact [me](mailto:[email protected]). For [technical questions](https://github.com/pyannote/pyannote-audio/discussions) and [bug reports](https://github.com/pyannote/pyannote-audio/issues), please check the [pyannote.audio](https://github.com/pyannote/pyannote-audio) GitHub repository.

## Usage

Relies on pyannote.audio 2.0, currently in development: see [installation instructions](https://github.com/pyannote/pyannote-audio/tree/develop#installation).

### Voice activity detection

```python
from pyannote.audio.pipelines import VoiceActivityDetection
pipeline = VoiceActivityDetection(segmentation="pyannote/segmentation")
HYPER_PARAMETERS = {
  # onset/offset activation thresholds
  "onset": 0.5, "offset": 0.5,
  # remove speech regions shorter than that many seconds.
  "min_duration_on": 0.0,
  # fill non-speech regions shorter than that many seconds.
  "min_duration_off": 0.0
}
pipeline.instantiate(HYPER_PARAMETERS)
vad = pipeline("audio.wav")
# `vad` is a pyannote.core.Annotation instance containing speech regions
```

### Overlapped speech detection

```python
from pyannote.audio.pipelines import OverlappedSpeechDetection
pipeline = OverlappedSpeechDetection(segmentation="pyannote/segmentation")
pipeline.instantiate(HYPER_PARAMETERS)
osd = pipeline("audio.wav")
# `osd` is a pyannote.core.Annotation instance containing overlapped speech regions
```

### Resegmentation

```python
from pyannote.audio.pipelines import Resegmentation
pipeline = Resegmentation(segmentation="pyannote/segmentation", diarization="baseline")
pipeline.instantiate(HYPER_PARAMETERS)
resegmented_baseline = pipeline({"audio": "audio.wav", "baseline": baseline})
# where `baseline` should be provided as a pyannote.core.Annotation instance
```

### Raw scores

```python
from pyannote.audio import Inference
inference = Inference("pyannote/segmentation")
segmentation = inference("audio.wav")
# `segmentation` is a pyannote.core.SlidingWindowFeature
# instance containing raw segmentation scores like the
# one pictured above (output)
```

## Reproducible research

In order to reproduce the results of the paper ["End-to-end speaker segmentation for overlap-aware resegmentation"](https://arxiv.org/abs/2104.04045), use `pyannote/segmentation@Interspeech2021` with the following hyper-parameters:

| Voice activity detection | `onset` | `offset` | `min_duration_on` | `min_duration_off` |
| ------------------------ | ------- | -------- | ----------------- | ------------------ |
| AMI Mix-Headset          | 0.684   | 0.577    | 0.181             | 0.037              |
| DIHARD3                  | 0.767   | 0.377    | 0.136             | 0.067              |
| VoxConverse              | 0.767   | 0.713    | 0.182             | 0.501              |

| Overlapped speech detection | `onset` | `offset` | `min_duration_on` | `min_duration_off` |
| --------------------------- | ------- | -------- | ----------------- | ------------------ |
| AMI Mix-Headset             | 0.448   | 0.362    | 0.116             | 0.187              |
| DIHARD3                     | 0.430   | 0.320    | 0.091             | 0.144              |
| VoxConverse                 | 0.587   | 0.426    | 0.337             | 0.112              |

| Resegmentation of VBx | `onset` | `offset` | `min_duration_on` | `min_duration_off` |
| --------------------- | ------- | -------- | ----------------- | ------------------ |
| AMI Mix-Headset       | 0.542   | 0.527    | 0.044             | 0.705              |
| DIHARD3               | 0.592   | 0.489    | 0.163             | 0.182              |
| VoxConverse           | 0.537   | 0.724    | 0.410             | 0.563              |

Expected outputs (and VBx baseline) are also provided in the `/reproducible_research` sub-directories.

## Citation

```bibtex
@inproceedings{Bredin2021,
  Title = {{End-to-end speaker segmentation for overlap-aware resegmentation}},
  Author = {{Bredin}, Herv{\'e} and {Laurent}, Antoine},
  Booktitle = {Proc. Interspeech 2021},
  Address = {Brno, Czech Republic},
  Month = {August},
  Year = {2021},
}
```

```bibtex
@inproceedings{Bredin2020,
  Title = {{pyannote.audio: neural building blocks for speaker diarization}},
  Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
  Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
  Address = {Barcelona, Spain},
  Month = {May},
  Year = {2020},
}
```
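To connect the tables above to the API shown in the Usage section, here is a small sketch that instantiates voice activity detection at the paper's DIHARD3 operating point, with the hyper-parameter values copied from the first table:

```python
# Sketch: reproduce the paper's DIHARD3 voice activity detection operating point
# by instantiating the pipeline with the tuned hyper-parameters from the table above.
from pyannote.audio.pipelines import VoiceActivityDetection

pipeline = VoiceActivityDetection(segmentation="pyannote/segmentation@Interspeech2021")
pipeline.instantiate({
    "onset": 0.767,             # activation threshold to start a speech region
    "offset": 0.377,            # activation threshold to end a speech region
    "min_duration_on": 0.136,   # drop speech regions shorter than this (seconds)
    "min_duration_off": 0.067,  # fill non-speech gaps shorter than this (seconds)
})
vad = pipeline("audio.wav")
for segment, _, label in vad.itertracks(yield_label=True):
    print(f"{label}: {segment.start:.2f}s - {segment.end:.2f}s")
```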
dwikitheduck/Genesist-8B-EarlyPrototype-0.4-GGUF
dwikitheduck
"2024-06-24T07:40:54Z"
6,148
0
transformers
[ "transformers", "gguf", "llama", "text-generation", "sft", "llama-3", "unsloth", "id", "en", "dataset:genesist-logs", "license:llama3", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-22T04:35:26Z"
--- datasets: - genesist-logs language: - id - en license: llama3 tags: - text-generation - sft - llama - llama-3 - unsloth --- # Genesist-8B-EarlyPrototype-0.4 GGUF This is an early prototype of the **Genesist-8B** model, fine-tuned from the **Llama-3-8B-Instruct** model using Supervised Fine-Tuning (SFT). It is designed to better understand and follow specific instructions in Indonesian. ## Model Details * **Base Model:** Llama-3-8B-Instruct * **Fine-tuning Method:** Supervised Fine-Tuning (SFT) * **Training Data:** Approximately 45 million tokens of instruction data in Indonesian, specifically curated to improve the model's ability to follow instructions. * **Languages:** Indonesian (id), English (en) * **License:** Llama3 ## Training Hyperparameters * max_seq_length: 16385 * per_device_train_batch_size: 2 * gradient_accumulation_steps: 4 * warmup_steps: 5 * num_train_epochs: 1 * learning_rate: 5e-5 * logging_steps: 1 * optim: "adamw_8bit" * weight_decay: 0.01 * lr_scheduler_type: "linear" * seed: 3407
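The repo ships GGUF files, so `llama-cpp-python` is a simple way to try the model locally. A minimal sketch, assuming the repo contains a Q4_K_M quant (check the actual filenames in the repo) and that your llama-cpp-python version picks up the Llama-3 chat template from the GGUF metadata (pass `chat_format` explicitly otherwise):

```python
# Sketch: load a GGUF quant of Genesist-8B with llama-cpp-python and ask a
# question in Indonesian. The filename pattern is an assumption - adjust it
# to whichever quant levels the repo actually provides.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="dwikitheduck/Genesist-8B-EarlyPrototype-0.4-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical quant level
    n_ctx=8192,
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Jelaskan apa itu pembelajaran mesin secara singkat."}]
)
print(resp["choices"][0]["message"]["content"])
```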
RichardErkhov/abhinand_-_gemma-2b-tamil-gguf
RichardErkhov
"2024-06-25T17:43:36Z"
6,145
0
null
[ "gguf", "region:us" ]
null
"2024-06-25T16:56:25Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gemma-2b-tamil - GGUF - Model creator: https://huggingface.co/abhinand/ - Original model: https://huggingface.co/abhinand/gemma-2b-tamil/ | Name | Quant method | Size | | ---- | ---- | ---- | | [gemma-2b-tamil.Q2_K.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q2_K.gguf) | Q2_K | 1.08GB | | [gemma-2b-tamil.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.IQ3_XS.gguf) | IQ3_XS | 1.16GB | | [gemma-2b-tamil.IQ3_S.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.IQ3_S.gguf) | IQ3_S | 1.2GB | | [gemma-2b-tamil.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q3_K_S.gguf) | Q3_K_S | 1.2GB | | [gemma-2b-tamil.IQ3_M.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.IQ3_M.gguf) | IQ3_M | 1.22GB | | [gemma-2b-tamil.Q3_K.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q3_K.gguf) | Q3_K | 1.29GB | | [gemma-2b-tamil.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q3_K_M.gguf) | Q3_K_M | 1.29GB | | [gemma-2b-tamil.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q3_K_L.gguf) | Q3_K_L | 1.36GB | | [gemma-2b-tamil.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.IQ4_XS.gguf) | IQ4_XS | 1.4GB | | [gemma-2b-tamil.Q4_0.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q4_0.gguf) | Q4_0 | 1.44GB | | [gemma-2b-tamil.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.IQ4_NL.gguf) | IQ4_NL | 1.45GB | | [gemma-2b-tamil.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q4_K_S.gguf) | Q4_K_S | 1.45GB | | [gemma-2b-tamil.Q4_K.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q4_K.gguf) | Q4_K | 1.52GB | | [gemma-2b-tamil.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q4_K_M.gguf) | Q4_K_M | 1.52GB | | [gemma-2b-tamil.Q4_1.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q4_1.gguf) | Q4_1 | 1.56GB | | [gemma-2b-tamil.Q5_0.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q5_0.gguf) | Q5_0 | 1.68GB | | [gemma-2b-tamil.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q5_K_S.gguf) | Q5_K_S | 1.68GB | | [gemma-2b-tamil.Q5_K.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q5_K.gguf) | Q5_K | 1.71GB | | [gemma-2b-tamil.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q5_K_M.gguf) | Q5_K_M | 1.71GB | | [gemma-2b-tamil.Q5_1.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q5_1.gguf) | Q5_1 | 1.79GB | | 
[gemma-2b-tamil.Q6_K.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q6_K.gguf) | Q6_K | 1.92GB | | [gemma-2b-tamil.Q8_0.gguf](https://huggingface.co/RichardErkhov/abhinand_-_gemma-2b-tamil-gguf/blob/main/gemma-2b-tamil.Q8_0.gguf) | Q8_0 | 2.49GB | Original model description: --- language: - en - ta license: other base_model: google/gemma-2b datasets: - wikimedia/wikipedia license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms model-index: - name: gemma-2b-tamil results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 47.44 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-tamil name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 71.3 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-tamil name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 38.21 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-tamil name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 34.93 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-tamil name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 65.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-tamil name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 12.89 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/gemma-2b-tamil name: Open LLM Leaderboard --- # Gemma 2B Tamil v0.1 Alpha - Base Model [Experimental Release] This is a Tamil foundational model continually pretrained from Google Gemma 2B. This is an experiment to see if Gemma can be adapted for Tamil without expanding vocabulary. While the responses may be rusty at times, it shows a lot of promise for a 2B parameter model. > **Please Note:** This model, labeled as a FOUNDATIONAL Language Model (LLM), is designed primarily for Causal Language Modeling (LM) purposes. In other words, if you are looking for an instruction following model in Tamil, you may find [abhinand/gemma-2b-it-tamil-v0.1-alpha](https://huggingface.co/abhinand/gemma-2b-it-tamil-v0.1-alpha) more suitable for your needs. **Procedure:** 1. The [Gemma base model](https://huggingface.co/google/gemma-2b) was continually pretrained on all available Tamil Wikipedia data for 3 epochs. 2. 
The updated model was then finetuned on a mix of English and Tamil alpaca datasets for 5 epochs. Finetuned model can be found [here](https://huggingface.co/abhinand/gemma-2b-it-tamil-v0.1-alpha). > **Note:** This project is currently under development (FOR TAMIL). The initial pretraining phase may not have been extensive enough, which suggests that the model's performance could improve by extending the pretraining on a larger dataset, such as CulturaX. ## Model description - **Model type:** A 2B parameter GPT-like model continually pretrained on all available Tamil data from [Wikipedia dataset](https://huggingface.co/datasets/wikimedia/wikipedia). - **Language(s):** Bilingual. English and Tamil. - **License:** [Google Gemma Terms of Use](https://ai.google.dev/gemma/terms) - **Training Precision:** `bfloat16` - **Training Hardware:** 4x Nvidia RTX 3090 GPUs - **Training Cost:** $20 ## Support my work If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated. [!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/abhinand.b) ## Usage Note It's important to note that the models have not undergone detoxification. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications. ## Meet the Developers Get to know the creators behind this innovative model and follow their contributions to the field: - [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/) We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__gemma-2b-tamil) | Metric |Value| |---------------------------------|----:| |Avg. |45.13| |AI2 Reasoning Challenge (25-Shot)|47.44| |HellaSwag (10-Shot) |71.30| |MMLU (5-Shot) |38.21| |TruthfulQA (0-shot) |34.93| |Winogrande (5-shot) |65.98| |GSM8k (5-shot) |12.89|
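Since this checkpoint is a foundational (causal) LM rather than an instruction model, the natural way to try it is plain text continuation. A minimal sketch with `transformers`; the Tamil prompt and generation settings are illustrative:

```python
# Sketch: text continuation with the base model. This is a foundational LM,
# so prompts should be written as text to continue, not as instructions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abhinand/gemma-2b-tamil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # matches the bfloat16 training precision
)

prompt = "தமிழ்நாடு இந்தியாவின்"  # an illustrative Tamil prefix for the model to continue
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```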
Yntec/Moistalgia
Yntec
"2024-06-17T12:20:38Z"
6,142
2
diffusers
[ "diffusers", "safetensors", "Anime", "90s", "Retro", "Vintage", "Sexy", "3D", "Pixar", "Patchmonk", "Clumsy_Trainer", "OneRing", "DucHaiten", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-05-19T03:16:37Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - 90s - Retro - Vintage - Sexy - 3D - Pixar - Patchmonk - Clumsy_Trainer - OneRing - DucHaiten - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers --- <center><img src="https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/93lNXvgkWq46vWzuvcpH9.png" style="height:512px; width:512px; border-radius: 7%; border: 10px solid #663380; padding-top:0px;" span title="Moistalgia Raw Output"></center> # Moistalgia NostalgicLife merged with RainbowPatch. Samples and prompts: ![Free AI image generator Moistalgia](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/ekOMPmCTjCxvlpCAEHKKe.png) (Click for larger) Cover: PRETTY CUTE LITTLE GIRL BY tyler edlin. Fireplace roaring. A teal creature buying donuts. with backlight animation. Top left: Anime cute girl, bangs, depth of field, embedded, hair ribbon, long hair, looking at viewer, neck ribbon, non-web source, palm leaf, palm tree, purple eyes, purple hair, red ribbon, ribbon, sexy, self upload, solo Top right: highquality, masterpiece, 1girl, Chi-Chi, close up, :D, smile, cleavage, arms up, pink helmet, black hair, black eyes, blush, white teeth, bikini armor, aqua cape, pink gloves, pink boots. cave, rock, mountain. blue collar Bottom left: Cartoon Pretty CUTE Girl, sitting on a box of rockets, CHIBI EYES, white swimsuit with blue ruffles, holding coke rocket, gorgeous detailed hair, Ponytail, school, glad, Magazine ad, iconic, 1940, sharp focus. Illustration By KlaysMoji and artgerm and Clay Mann and and leyendecker and Dave Rapoza Bottom right: videogames, little pepperoni pizza winner, robert jordan, josephine wall, roll20, hidari illumination, radiant light, sitting sweaty girl, Pretty CUTE, gorgeous hair, DETAILED CHIBI EYES, Magazine ad, iconic, 1943, Cartoon, sharp focus, comic, watched towel. 4k art on canvas by kyoani and ROSSDRAWS Original pages: https://civitai.com/models/5528/rainbowpatch https://civitai.com/models/137781?modelVersionId=152147 (Esthetic Retro Anime) DucHaiten-GoldenLife - https://tensor.art/models/628276277415133426 https://huggingface.co/Yntec/GoodLife https://huggingface.co/Yntec/Nostalgic # Recipe: - SuperMerger Weight Sum MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1 Model A: rainbowpatch Model B: NostalgicLife Output: Moistalgia Cover original size: ![Moist Nostalgia](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/93lNXvgkWq46vWzuvcpH9.png) (Click for larger...)
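A minimal `diffusers` sketch for this checkpoint, reusing the cover prompt above; the step count and guidance scale are illustrative defaults for SD 1.5-class models:

```python
# Sketch: generate an image with the diffusers pipeline for this checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Moistalgia", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = ("PRETTY CUTE LITTLE GIRL BY tyler edlin. Fireplace roaring. "
          "A teal creature buying donuts. with backlight animation.")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("moistalgia_sample.png")
```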
kenhktsui/llm-data-textbook-quality-fasttext-classifier-v2
kenhktsui
"2024-05-27T15:57:35Z"
6,142
7
fasttext
[ "fasttext", "text-classification", "en", "arxiv:2306.11644", "license:mit", "region:us" ]
text-classification
"2024-05-19T05:20:46Z"
---
license: mit
language:
- en
library_name: fasttext
pipeline_tag: text-classification
inference: false
---

# 📚llm-data-textbook-quality-fasttext-classifier-v2

![image/png](https://cdn-uploads.huggingface.co/production/uploads/60e50ce5350d181892d5a636/acAPg-_NawdIfE2XXwcgc.png)

## **"Garbage in, garbage out. A language model is only as good as its training data irrespective of its parameter count."**

📚 The educational value classifier can classify whether a text from the web has high educational value (more explicitly defined than textbook quality). It is deeply inspired by [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644), where a classifier was developed to predict the educational value of data, and was then used for data filtering. The model is trained on web/raw text, not on data formatted as an instruction dataset (yet). It can be used as a filter for pretraining data curation when training an LLM 🤖.

There are 3 labels instead of 2, as this offers a higher granularity of educational value:
- High (Top 25% educational value)
- Mid (Middle 25-75% educational value)
- Low (Bottom 25% educational value)

A detailed report/paper will follow when more downstream experiments with this classifier become available. For the validation of this classifier, see [**Analysis**](https://huggingface.co/kenhktsui/llm-data-textbook-quality-fasttext-classifer-v2#%F0%9F%93%88analysis). The classifier has been applied to various pretraining datasets. See [**Benchmark**](https://huggingface.co/kenhktsui/llm-data-textbook-quality-fasttext-classifer-v2#%F0%9F%93%8Abenchmark)

⚡ The model is built on fasttext - it can classify more than 2000 examples per second on CPU, and so it can be used **on-the-fly** during pretraining.

Please note textbook quality is a subset of high quality.

## 💬Feedback welcomed!

Please give a like and leave a comment if you find this model helpful. I am on a continual journey to make LLM data curation better and easier.

## ✏️Examples

Educational value is in the range [0, 2]. The detailed formula is explained below.

```python
predict_educational_value(['''Logic is the study of correct reasoning. It includes both formal and informal logic. Formal logic is the study of deductively valid inferences or logical truths. It examines how conclusions follow from premises due to the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. It examines arguments expressed in natural language while formal logic uses formal language. When used as a countable noun, the term "a logic" refers to a logical formal system that articulates a proof system. Logic plays a central role in many fields, such as philosophy, mathematics, computer science, and linguistics.'''])
# Output
[1.9266871362924576]

predict_educational_value(['''"Attention Is All You Need" is a landmark[1][2] 2017 research paper authored by eight scientists working at Google, responsible for expanding 2014 attention mechanisms proposed by Bahdanau et al. into a new deep learning architecture known as the transformer.
The paper is considered by some to be a founding document for modern artificial intelligence, as transformers became the main architecture of large language models.[3][4] At the time, the focus of the research was on improving Seq2seq techniques for machine translation, but even in their paper the authors saw the potential for other tasks like question answering and for what is now called multimodal Generative AI.[5]''']) # Output [1.8226698189973831] predict_educational_value(['''A large language model (LLM) is a computational model notable for its ability to achieve general-purpose language generation and other natural language processing tasks such as classification. Based on language models, LLMs acquire these abilities by learning statistical relationships from text documents during a computationally intensive self-supervised and semi-supervised training process.[1] LLMs can be used for text generation, a form of generative AI, by taking an input text and repeatedly predicting the next token or word.[2]''']) # Output [1.7609568238258362] predict_educational_value(['''In Vapnik–Chervonenkis theory, the Vapnik–Chervonenkis (VC) dimension is a measure of the size (capacity, complexity, expressive power, richness, or flexibility) of a class of sets. The notion can be extended to classes of binary functions. It is defined as the cardinality of the largest set of points that the algorithm can shatter, which means the algorithm can always learn a perfect classifier for any labeling of at least one configuration of those data points. It was originally defined by Vladimir Vapnik and Alexey Chervonenkis.[1]''']) # Output [1.589950144290924] predict_educational_value(['''The query vector is compared (via dot product) with each word in the keys. This helps the model discover the most relevant word for the query word. In this case "girl" was determined to be the most relevant word for "that". The result (size 4 in this case) is run through the softmax function, producing a vector of size 4 with probabilities summing to 1. Multiplying this against the value matrix effectively amplifies the signal for the most important words in the sentence and diminishes the signal for less important words.[5] The structure of the input data is captured in the Wq and Wk weights, and the Wv weights express that structure in terms of more meaningful features for the task being trained for. For this reason, the attention head components are called Query (Wq), Key (Wk), and Value (Wv)—a loose and possibly misleading analogy with relational database systems.''']) # Output [1.4657384157180786] predict_educational_value(['''The Arsenal Football Club (commonly known as simply Arsenal) is an English professional football club based in Holloway, North London. Arsenal compete in the Premier League, the top flight of English football. In domestic football, Arsenal has won 13 league titles (including one unbeaten title), a record 14 FA Cups, two League Cups, 17 FA Community Shields, and a Football League Centenary Trophy. In European football, they have one European Cup Winners' Cup and one Inter-Cities Fairs Cup. 
In terms of trophies won, it is the third-most successful club in English football.[2]''']) # Output [1.1015518307685852] predict_educational_value(['''The 2003–04 season was Arsenal Football Club's 12th season in the Premier League and their 78th consecutive season in the top flight of English football.[3][4] It began on 1 July 2003 and concluded on 30 June 2004, with competitive matches played between August and May. The club ended the Premier League campaign as champions without a single defeat – a record of 26 wins and 12 draws. Arsenal fared less well in the cups, eliminated in the FA Cup and League Cup semi-finals to Manchester United and Middlesbrough respectively, and at the quarter-final stage of the UEFA Champions League to Chelsea.''']) # Output [1.0146622359752655] predict_educational_value(['''As both teams' first-choice kits featured a shade of red, Arsenal wore their yellow away strip, while Barcelona wore their traditional blue and maroon striped kit. Arsenal won the coin toss and Barcelona kicked off.[21] Barcelona almost immediately came under pressure when Thierry Henry shot straight at Barcelona goalkeeper Víctor Valdés, who conceded a corner. From the resulting corner Arsenal had another chance again courtesy of Henry, whose shot was again saved by Valdés. The next attack in the seventh minute resulted in Arsenal goalkeeper Jens Lehmann saving from Ludovic Giuly after he shot from a narrow angle. Four minutes later Barcelona were awarded a free-kick 35 yards from goal; Ronaldinho shot wide of the goal.''']) # Output [0.7897453680634499] ``` From inspection, it can be noted that the model does like scientific knowledge. It is also interested in Arsenal as a football club, however, it does not think a summary of a particular match has good educational value. ## 🛠️Usage ```python from typing import List import re from huggingface_hub import hf_hub_download import fasttext model = fasttext.load_model(hf_hub_download("kenhktsui/llm-data-textbook-quality-fasttext-classifer-v2", "model.bin")) def replace_newlines(text: str) -> str: return re.sub("\n+", " ", text) score_dict = { '__label__': 0, '__label__Low': 0, '__label__Mid': 1, '__label__High': 2 } def predict_educational_value(text_list): text_list = [replace_newlines(text) for text in text_list] pred = model.predict(text_list, k=-1) score_list = [] for l, s in zip(*pred): score = 0 for _l, _s in zip(l, s): score += score_dict[_l] * _s score_list.append(float(score)) return score_list predict_educational_value(["Hi"]) # Output: [3.0000010156072676e-05] ``` # 📊Benchmark To make sure this classifier makes sense, it is applied to various datasets. 
Educational Value = 2 points * P(High) + 1 point * P(Mid) + 0 points * P(Low)

The score can be roughly interpreted as:

|Educational Value| Category |
|--------|----------|
|2 | High|
|1 | Mid|
|0 | Low|

|Dataset | Sampling | Average Educational Value | Type |
|--------------------------------------|---|-------------------|-------|
|[SciPhi/textbooks-are-all-you-need-lite](https://huggingface.co/datasets/SciPhi/textbooks-are-all-you-need-lite) |First 100,000 | 1.846 |Synthetic|
|[nampdn-ai/tiny-orca-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-orca-textbooks) |First 100,000 | 1.673 |Synthetic|
|[HuggingFaceTB/cosmopedia stanford](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) |First 100,000 | 1.673 |Synthetic|
|[vikp/textbook_quality_programming](https://huggingface.co/datasets/vikp/textbook_quality_programming) |First 100,000| 1.663|Synthetic|
|[HuggingFaceTB/cosmopedia web_samples_v1](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) |First 100,000 | 1.618 |Synthetic|
|[nampdn-ai/tiny-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-textbooks) |First 100,000 | 1.586 |Synthetic|
|[HuggingFaceTB/cosmopedia web_samples_v2](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) |First 100,000 | 1.562 |Synthetic|
|[HuggingFaceTB/cosmopedia openstax](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) |First 100,000 | 1.462 |Synthetic|
|[HuggingFaceTB/cosmopedia wikihow](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) |First 100,000 | 1.422 |Synthetic|
|[HuggingFaceTB/cosmopedia khanacademy](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) |First 100,000 | 1.419 |Synthetic|
|[HuggingFaceTB/cosmopedia auto_math_text](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) |First 100,000 | 1.347 |Synthetic|
|[armanc/scientific_papers pubmed](https://huggingface.co/datasets/armanc/scientific_papers) |First 100,000 | 1.260 |Real|
|[HuggingFaceTB/cosmopedia stories](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) |First 100,000 | 1.154 |Synthetic|
|[teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) |First 100,000 | 1.121 |Synthetic|
|[timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) |First 100,000 | 1.115 |Real|
|[open-web-math/open-web-math](https://huggingface.co/datasets/open-web-math/open-web-math) |First 100,000 | 1.089 |Real|
|[armanc/scientific_papers arxiv](https://huggingface.co/datasets/armanc/scientific_papers) |First 100,000 | 1.068 |Real|
|[HuggingFaceFW/fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb)| First 100,000 | 1.056 |Real|
|[NousResearch/dolma-v1_7-305B*](https://huggingface.co/datasets/NousResearch/dolma-v1_7-305B) |First 100,000 | 1.037 |Real|
|[tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) |First 100,000 | 1.020 |Synthetic|
|[BEE-spoke-data/fineweb-100k_en-med](https://huggingface.co/datasets/BEE-spoke-data/fineweb-100k_en-med)| First 100,000 | 1.019 |Real|
|[JeanKaddour/minipile](https://huggingface.co/datasets/JeanKaddour/minipile)| First 100,000 | 0.998 |Real|
|[togethercomputer/RedPajama-Data-V2 en 2023-06](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2)| First 100,000 | 0.985|Real|
|[wikipedia en 20220301](https://huggingface.co/datasets/wikipedia) |First 100,000 | 0.975 |Real|
|[Replete-AI/code_bagel](https://huggingface.co/datasets/Replete-AI/code_bagel)| First 100,000 | 0.950 |Synthetic|
|[allenai/c4 en](https://huggingface.co/datasets/allenai/c4)| First 100,000| 0.934 |Real|
|[mattymchen/refinedweb-3m](https://huggingface.co/datasets/mattymchen/refinedweb-3m)| First 100,000 | 0.857 |Real|
|[iamtarun/python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca)| First 100,000 | 0.849 |Synthetic|
|[tiiuae/falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)| First 100,000 | 0.835 |Real|
|[BEE-spoke-data/FineMeme-100k](https://huggingface.co/datasets/BEE-spoke-data/FineMeme-100k)| First 100,000 | 0.716 |Real|
|[neuralcatcher/hateful_memes](https://huggingface.co/datasets/neuralcatcher/hateful_memes)| First 100,000 | 0.070 |Real|

\* I encountered an [issue](https://huggingface.co/datasets/allenai/dolma/discussions/26), and so could not process the original [allenai/dolma](https://huggingface.co/datasets/allenai/dolma).

The classifier aligns with expectations:
- In general, synthetic data has a higher educational value because it is created with high educational value by design.
- For real data, [HuggingFaceFW/fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) and [Dolma v1_7](https://huggingface.co/datasets/allenai/dolma), which applied the quality filter described [here](https://blog.allenai.org/olmo-1-7-7b-a-24-point-improvement-on-mmlu-92b43f7d269d), have the highest educational value across all real data.
- In general, the later a dataset is released, the higher its educational value, because of the increasing focus on data quality in the research community.
- The textbook category (mostly synthetic) scores the highest, because these datasets are created for educational value, reflecting the effectiveness of this model.
- The maths/paper category scores the second highest, because of its density of knowledge.
- Wikipedia scores comparatively lower because it also contains information (e.g. the result of a match, the award of a movie star) of lower educational value.
- Web scores low (if no filtering is applied) because it contains all domains.
- Memes score the lowest, as expected. Hateful memes scored almost zero points.

Some instruction datasets are added for curiosity's sake, although the model is not trained on instruction data. There are two possible interpretations:
- They score lower than textbooks because the knowledge in conversations is usually less dense than in textbooks, although they are in general more educational than the unfiltered web.
- The model does not perform well enough to tell the educational value of instruction datasets.

# 📈Analysis

## 🤖Model Training With And Without Classifier

The expectation is that the model trained with the filter will outperform the model trained without it. FineWeb is filtered on the fly with Educational Value >= 1.0.

Test 1:
Model params: 192M
Training tokens: 3.1B, 6000 global steps

|Task | Training on FineWeb With Filtering | Training on FineWeb Without Filtering | Training with [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)|
|--------------------------------------|---|---|---|
|arc-easy | 37.37 | 34.97| 37.45 |
|arc-challenge | 23.55 |22.95| 23.21 |
|Hellaswag | 28.02| 27.92 | 27.78|
|MMLU | 24.71 | 23.94 | 24.65 |
|TruthfulQA| 45.88 | 45.20| 45.97|
|Winogrande| 49.49 | 50.59 | 50.67 |

Reasoning and commonsense reasoning seem to be better when the filter is on, aligning with expectation; the results are also close to Cosmopedia. MMLU is better as well; however, it is close to random due to compute limitations (both training time and model size).
A model of larger size will be trained to further validate this claim. (To be updated with a larger model soon.)

## 🌐Domain Name Analysis

The expectation is that most educational value comes from the websites of universities/schools, research institutes and organisations. Since [HuggingFaceFW/fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) contains the URL of each crawled website, the average educational value of each domain name can be calculated. The first 10M records have been analysed. The full file is [here](https://drive.google.com/file/d/1WnOEH7IwfLJba2CuY207JY6s5hcW1gZQ/view?usp=sharing). Below are the top 100 domain names with at least 100 records each.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/60e50ce5350d181892d5a636/3QNYYVbFIqaAUh-574lED.png)

## 🧪Classifier Rank Ordering

The Spearman rank-order correlation coefficient between the predicted educational value and that of the test data is 0.7055, indicating a strong monotonic relationship. The educational value can therefore be used for ranking.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/60e50ce5350d181892d5a636/dKV2oXRv3WpEsfDXy0bl7.png)
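As a sketch of the on-the-fly filtering used in the experiment above (keeping documents with Educational Value >= 1.0), the snippet below streams a web corpus and drops low-scoring documents. It assumes the `predict_educational_value` helper from the Usage section is already defined, and uses FineWeb streaming purely as an example:

```python
# Sketch: stream a web corpus and keep only documents scoring >= 1.0,
# the threshold used in the "with filtering" experiment above.
# Assumes `predict_educational_value` from the Usage section is defined.
from datasets import load_dataset

stream = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

def keep_educational(batch):
    # Score a batch of documents and keep those at or above the threshold.
    scores = predict_educational_value(batch["text"])
    return [s >= 1.0 for s in scores]

filtered = stream.filter(keep_educational, batched=True, batch_size=256)

# Peek at the first few surviving documents.
for i, doc in enumerate(filtered):
    if i >= 3:
        break
    print(doc["text"][:200])
```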
cointegrated/rut5-base-absum
cointegrated
"2023-03-17T10:21:13Z"
6,140
21
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "russian", "summarization", "ru", "dataset:IlyaGusev/gazeta", "dataset:csebuetnlp/xlsum", "dataset:mlsum", "dataset:wiki_lingua", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
summarization
"2022-03-02T23:29:05Z"
--- language: ["ru"] tags: - russian - summarization datasets: - IlyaGusev/gazeta - csebuetnlp/xlsum - mlsum - wiki_lingua license: mit widget: - text: "Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо." --- This is a model for abstractive Russian summarization, based on [cointegrated/rut5-base-multitask](https://huggingface.co/cointegrated/rut5-base-multitask) and fine-tuned on 4 datasets. It can be used as follows: ```python import torch from transformers import T5ForConditionalGeneration, T5Tokenizer MODEL_NAME = 'cointegrated/rut5-base-absum' model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME) tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME) model.cuda(); model.eval(); def summarize( text, n_words=None, compression=None, max_length=1000, num_beams=3, do_sample=False, repetition_penalty=10.0, **kwargs ): """ Summarize the text The following parameters are mutually exclusive: - n_words (int) is an approximate number of words to generate. - compression (float) is an approximate length ratio of summary and original text. """ if n_words: text = '[{}] '.format(n_words) + text elif compression: text = '[{0:.1g}] '.format(compression) + text x = tokenizer(text, return_tensors='pt', padding=True).to(model.device) with torch.inference_mode(): out = model.generate( **x, max_length=max_length, num_beams=num_beams, do_sample=do_sample, repetition_penalty=repetition_penalty, **kwargs ) return tokenizer.decode(out[0], skip_special_tokens=True) text = """Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо.""" print(summarize(text)) # Эйфелева башня достигла высоты 300 метров. print(summarize(text, n_words=10)) # Французская Эйфелева башня достигла высоты 300 метров. ```
MagnusSa/noramistral-7B-warm-instruct-GGUF-not-official
MagnusSa
"2024-06-24T07:32:06Z"
6,140
0
transformers
[ "transformers", "gguf", "noramistral, finetune, norwegian", "no", "nb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T17:22:08Z"
---
license: apache-2.0
language:
- 'no'
- nb
library_name: transformers
tags:
- noramistral
- finetune
- norwegian
---

# NoraMistral-7B-warm-instruct-GGUF

- Original model: [NorMistral-7b-warm-instruct](https://huggingface.co/norallm/normistral-7b-warm-instruct)
- Base model:
  - [NorMistral-7b-warm](https://huggingface.co/norallm/normistral-7b-warm) -- an LLM initialized from [Mistral-7b-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)

This repo contains GGUF-formatted files for the instruction model, with the added pre-tokenizer `ggml-vocab-normistral-7b-warm.gguf`. This should remove the warning you would otherwise get in `llama.cpp` with the original GGUF files.

## ⚠️⚠️ To take advantage of this workaround, changes also need to be made directly in [llama.cpp](https://github.com/ggerganov/llama.cpp).

- See an example of this here: [GitHub](https://github.com/ggerganov/llama.cpp/commit/921e2c3e63e63f7c7efe168883ec2ac090f62c14) or clone the fork.
- I recommend using the server provided by [llama.cpp](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md) to get an OpenAI API endpoint.

## Norwegian Large Language Models (from original card)

This is a model instruction-tuned on open datasets released under the most permissive apache-2.0 licence (in other words, we don't use any datasets generated by ChatGPT) — thus we can release this model under the same license and make it openly available for commercial applications.

The model has been finetuned with a 4096-token context length, twice that of the base model. The released weights are still a work in progress and they might change in the future. This is the first iteration of instruction-tuning our NorMistral models, and it currently uses only the SFT phase without any preference optimization. Please let us know your feedback to improve the model in future releases.

## How to run the model?

### 1. Prompt format

NorMistral uses a ChatML-like format for structuring (multi-turn) conversations. An example of a prompt in this format looks like the following (notice the special `<|im_start|>` and `<|im_end|>` tokens).

```
<|im_start|> user
Hva er hovedstaden i Norge?<|im_end|>
<|im_start|> assistant
Hovedstaden i Norge er Oslo. Denne byen ligger i den sørøstlige delen av landet, ved Oslofjorden. Oslo er en av de raskest voksende byene i Europa, og den er kjent for sin rike historie, kultur og moderne arkitektur. Noen populære turistattraksjoner i Oslo inkluderer Vigelandsparken, som viser mer enn 200 skulpturer laget av den berømte norske skulptøren Gustav Vigeland, og det kongelige slott, som er den offisielle residensen til Norges kongefamilie. Oslo er også hjemsted for mange museer, gallerier og teatre, samt mange restauranter og barer som tilbyr et bredt utvalg av kulinariske og kulturelle opplevelser.<|im_end|>
<|im_start|> user
Gi meg en liste over de beste stedene å besøke i hovedstaden<|im_end|>
<|im_start|> assistant
```

### How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) library, for example.

#### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://llama-cpp-python.readthedocs.io/en/latest/).
#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python

# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python

# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python

# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, set the CMAKE_ARGS variable in PowerShell like this (e.g. for NVidia CUDA):
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Directly from huggingface-hub (requires huggingface-hub to be installed)
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama.from_pretrained(
    repo_id="MagnusSa/noramistral-7B-warm-instruct-GGUF-not-official",  # HuggingFace repository containing the GGUF files.
    filename="*Q4_K_M.gguf",  # Glob pattern matching the file with the desired quantization level.
    n_ctx=32768,              # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,              # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=33,          # The number of layers to offload to GPU, if you have GPU acceleration available
    chat_format="chatml"      # The chat format that will be used for chat completions
)

# Simple inference example
output = llm(
    """<s><|im_start|> user
Hva kan jeg bruke einstape til?<|im_end|>
<|im_start|> assistant
""",                     # Prompt
    max_tokens=512,      # Generate up to 512 tokens
    stop=["<|im_end|>"], # Example stop token
    echo=True,           # Whether to echo the prompt
    temperature=0.3      # Temperature to set; for Q3_K_M, Q4_K_M, Q5_K_M, and Q6_K it is recommended to keep it relatively low.
)

# Chat Completion API
llm.create_chat_completion(
    messages = [
        {
            "role": "user",
            "content": "Hva kan jeg bruke einstape til?"
        }
    ]
)
```
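#### Using the llama.cpp server instead

As recommended above, you can also serve the model with the `llama.cpp` server, which exposes an OpenAI-compatible API. The following is a minimal sketch (not from the original card) of querying such a server from Python; it assumes the server is already running with one of the GGUF files from this repo loaded, that it listens on the default `http://localhost:8080`, and that the `openai` Python package (v1 or later) is installed:

```python
# Minimal sketch: talk to a running llama.cpp server via its OpenAI-compatible API.
# Assumptions: server already launched with a GGUF from this repo and listening on
# http://localhost:8080 (llama.cpp's default); `pip install openai` (v1+).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # llama.cpp server endpoint
    api_key="sk-no-key-required",         # the local server does not validate keys
)

response = client.chat.completions.create(
    model="normistral-7b-warm-instruct",  # label only; the server uses the model it was started with
    messages=[{"role": "user", "content": "Hva er hovedstaden i Norge?"}],
    temperature=0.3,
    max_tokens=256,
)
print(response.choices[0].message.content)
```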
sgugger/funnel-random-tiny
sgugger
"2021-04-08T19:31:32Z"
6,138
0
transformers
[ "transformers", "pytorch", "tf", "funnel", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-03-02T23:29:05Z"
Entry not found
context-sbf/charm-xl
context-sbf
"2023-06-21T14:21:21Z"
6,137
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "license:openrail", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2023-06-21T14:16:11Z"
---
license: openrail
---
s-nlp/roberta-base-formality-ranker
s-nlp
"2023-09-08T08:44:43Z"
6,135
16
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "formality", "en", "dataset:GYAFC", "dataset:Pavlick-Tetreault-2016", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
language:
- en
tags:
- formality
datasets:
- GYAFC
- Pavlick-Tetreault-2016
license: cc-by-nc-sa-4.0
---

The model has been trained to predict, for English sentences, whether they are formal or informal.

Base model: `roberta-base`

Datasets: [GYAFC](https://github.com/raosudha89/GYAFC-corpus) from [Rao and Tetreault, 2018](https://aclanthology.org/N18-1012) and the [online formality corpus](http://www.seas.upenn.edu/~nlp/resources/formality-corpus.tgz) from [Pavlick and Tetreault, 2016](https://aclanthology.org/Q16-1005).

Data augmentation: changing texts to upper or lower case; removing all punctuation; adding a dot at the end of a sentence. This augmentation was applied because the model is otherwise over-reliant on punctuation and capitalization and does not pay enough attention to other features.

Loss: binary classification (on GYAFC), in-batch ranking (on P&T data).

Performance metrics on the test data:

| dataset | ROC AUC | precision | recall | fscore | accuracy | Spearman |
|----------------------------------------------|---------|-----------|--------|--------|----------|----------|
| GYAFC | 0.9779 | 0.90 | 0.91 | 0.90 | 0.9087 | 0.8233 |
| GYAFC normalized (lowercase + remove punct.) | 0.9234 | 0.85 | 0.81 | 0.82 | 0.8218 | 0.7294 |

| P&T subset | Spearman R |
|------------|------------|
| news | 0.4003 |
| answers | 0.7500 |
| blog | 0.7334 |
| email | 0.7606 |

## Citation

If you are using the model in your research, please cite the following [paper](https://doi.org/10.1007/978-3-031-35320-8_4) where it was introduced:

```
@InProceedings{10.1007/978-3-031-35320-8_4,
author="Babakov, Nikolay and Dale, David and Gusev, Ilya and Krotova, Irina and Panchenko, Alexander",
editor="M{\'e}tais, Elisabeth and Meziane, Farid and Sugumaran, Vijayan and Manning, Warren and Reiff-Marganiec, Stephan",
title="Don't Lose the Message While Paraphrasing: A Study on Content Preserving Style Transfer",
booktitle="Natural Language Processing and Information Systems",
year="2023",
publisher="Springer Nature Switzerland",
address="Cham",
pages="47--61",
isbn="978-3-031-35320-8"
}
```

## Licensing Information

[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].

[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]

[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
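For illustration, a minimal usage sketch (not part of the original card), assuming the standard `transformers` text-classification pipeline; the exact label names come from the model config, so verify them via `model.config.id2label`:

```python
# Minimal sketch: score the formality of English sentences with this model.
# Assumes the standard transformers text-classification pipeline; the label
# names (e.g. formal/informal) are defined by the model config and should be checked.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="s-nlp/roberta-base-formality-ranker",
)

sentences = [
    "I would be grateful if you could send the report by Friday.",
    "gonna b late lol",
]
for sentence in sentences:
    print(sentence, "->", classifier(sentence))
```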
legraphista/Qwen2-7B-IMat-GGUF
legraphista
"2024-06-06T20:02:22Z"
6,134
0
gguf
[ "gguf", "pretrained", "quantized", "GGUF", "imatrix", "quantization", "imat", "static", "16bit", "8bit", "6bit", "5bit", "4bit", "3bit", "2bit", "1bit", "text-generation", "en", "base_model:Qwen/Qwen2-7B", "license:apache-2.0", "region:us" ]
text-generation
"2024-06-06T19:12:45Z"
--- base_model: Qwen/Qwen2-7B inference: false language: - en library_name: gguf license: apache-2.0 pipeline_tag: text-generation quantized_by: legraphista tags: - pretrained - quantized - GGUF - imatrix - quantization - imat - imatrix - static - 16bit - 8bit - 6bit - 5bit - 4bit - 3bit - 2bit - 1bit --- # Qwen2-7B-IMat-GGUF _Llama.cpp imatrix quantization of Qwen/Qwen2-7B_ Original Model: [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) Original dtype: `BF16` (`bfloat16`) Quantized by: llama.cpp [b3091](https://github.com/ggerganov/llama.cpp/releases/tag/b3091) IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Simple chat template](#simple-chat-template) - [Chat template with system prompt](#chat-template-with-system-prompt) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: ✅ Available Link: [here](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [Qwen2-7B.Q8_0.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q8_0.gguf) | Q8_0 | 8.10GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B.Q6_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q6_K.gguf) | Q6_K | 6.25GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B.Q4_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q4_K.gguf) | Q4_K | 4.68GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B.Q3_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q3_K.gguf) | Q3_K | 3.81GB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-7B.Q2_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q2_K.gguf) | Q2_K | 3.02GB | ✅ Available | 🟢 IMatrix | 📦 No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [Qwen2-7B.BF16.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.BF16.gguf) | BF16 | 15.24GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B.FP16.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.FP16.gguf) | F16 | 15.24GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B.Q8_0.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q8_0.gguf) | Q8_0 | 8.10GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B.Q6_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q6_K.gguf) | Q6_K | 6.25GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B.Q5_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q5_K.gguf) | Q5_K | 5.44GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B.Q5_K_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q5_K_S.gguf) | Q5_K_S | 5.32GB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-7B.Q4_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q4_K.gguf) | Q4_K 
| 4.68GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.Q4_K_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q4_K_S.gguf) | Q4_K_S | 4.46GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.IQ4_NL.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.IQ4_NL.gguf) | IQ4_NL | 4.44GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.IQ4_XS.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.IQ4_XS.gguf) | IQ4_XS | 4.22GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.Q3_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q3_K.gguf) | Q3_K | 3.81GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.Q3_K_L.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q3_K_L.gguf) | Q3_K_L | 4.09GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.Q3_K_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q3_K_S.gguf) | Q3_K_S | 3.49GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.IQ3_M.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.IQ3_M.gguf) | IQ3_M | 3.57GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.IQ3_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.IQ3_S.gguf) | IQ3_S | 3.50GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.IQ3_XS.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.IQ3_XS.gguf) | IQ3_XS | 3.35GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.IQ3_XXS.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.IQ3_XXS.gguf) | IQ3_XXS | 3.11GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.Q2_K.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q2_K.gguf) | Q2_K | 3.02GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.Q2_K_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.Q2_K_S.gguf) | Q2_K_S | 2.83GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.IQ2_M.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.IQ2_M.gguf) | IQ2_M | 2.78GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.IQ2_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.IQ2_S.gguf) | IQ2_S | 2.60GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.IQ2_XS.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.IQ2_XS.gguf) | IQ2_XS | 2.47GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.IQ2_XXS.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.IQ2_XXS.gguf) | IQ2_XXS | 2.27GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.IQ1_M.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.IQ1_M.gguf) | IQ1_M | 2.04GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Qwen2-7B.IQ1_S.gguf](https://huggingface.co/legraphista/Qwen2-7B-IMat-GGUF/blob/main/Qwen2-7B.IQ1_S.gguf) | IQ1_S | 1.90GB | ✅ Available | 🟢 IMatrix | 📦 No

## Downloading using huggingface-cli

If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Qwen2-7B-IMat-GGUF --include "Qwen2-7B.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files.
In order to download them all to a local folder, run: ``` huggingface-cli download legraphista/Qwen2-7B-IMat-GGUF --include "Qwen2-7B.Q8_0/*" --local-dir ./ # see FAQ for merging GGUF's ``` --- ## Inference ### Simple chat template ``` <|im_start|>system You are a helpful assistant<|im_end|> <|im_start|>user {user_prompt}<|im_end|> <|im_start|>assistant {assistant_response}<|im_end|> <|im_start|>user {next_user_prompt}<|im_end|> ``` ### Chat template with system prompt ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {user_prompt}<|im_end|> <|im_start|>assistant {assistant_response}<|im_end|> <|im_start|>user {next_user_prompt}<|im_end|> ``` ### Llama.cpp ``` llama.cpp/main -m Qwen2-7B.Q8_0.gguf --color -i -p "prompt here (according to the chat template)" ``` --- ## FAQ ### Why is the IMatrix not applied everywhere? According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results). ### How do I merge a split GGUF? 1. Make sure you have `gguf-split` available - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases - Download the appropriate zip for your system from the latest release - Unzip the archive and you should be able to find `gguf-split` 2. Locate your GGUF chunks folder (ex: `Qwen2-7B.Q8_0`) 3. Run `gguf-split --merge Qwen2-7B.Q8_0/Qwen2-7B.Q8_0-00001-of-XXXXX.gguf Qwen2-7B.Q8_0.gguf` - Make sure to point `gguf-split` to the first chunk of the split. --- Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
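For completeness, a minimal Python sketch (not part of the original card) that mirrors the CLI download and inference steps above; the chosen quant file and context length are assumptions, and it requires `huggingface_hub` and `llama-cpp-python`:

```python
# Minimal sketch mirroring the steps above: download one (unsplit) quant and run it.
# Assumptions: `pip install huggingface_hub llama-cpp-python`; the Q4_K file fits
# in memory; a context length of 4096 is an arbitrary choice for the example.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="legraphista/Qwen2-7B-IMat-GGUF",
    filename="Qwen2-7B.Q4_K.gguf",  # pick any quant from the tables above
)

llm = Llama(model_path=path, n_ctx=4096)
prompt = (
    "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n"
    "<|im_start|>user\nWhat is an importance matrix?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```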
jamesgpt1/sf_model_e5
jamesgpt1
"2023-11-14T15:47:11Z"
6,131
2
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "mteb", "model-index", "endpoints_compatible", "region:us" ]
feature-extraction
"2023-11-09T23:12:19Z"
--- tags: - mteb model-index: - name: sf_model_e5 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 70.85074626865672 - type: ap value: 33.779217850079206 - type: f1 value: 64.96977487239377 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.80945 - type: ap value: 88.22978189506895 - type: f1 value: 91.7858219911604 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.94200000000001 - type: f1 value: 47.911934405973895 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 39.616 - type: map_at_10 value: 55.938 - type: map_at_100 value: 56.552 - type: map_at_1000 value: 56.556 - type: map_at_3 value: 51.754 - type: map_at_5 value: 54.623999999999995 - type: mrr_at_1 value: 40.967 - type: mrr_at_10 value: 56.452999999999996 - type: mrr_at_100 value: 57.053 - type: mrr_at_1000 value: 57.057 - type: mrr_at_3 value: 52.312000000000005 - type: mrr_at_5 value: 55.1 - type: ndcg_at_1 value: 39.616 - type: ndcg_at_10 value: 64.067 - type: ndcg_at_100 value: 66.384 - type: ndcg_at_1000 value: 66.468 - type: ndcg_at_3 value: 55.74 - type: ndcg_at_5 value: 60.889 - type: precision_at_1 value: 39.616 - type: precision_at_10 value: 8.953999999999999 - type: precision_at_100 value: 0.9900000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.428 - type: precision_at_5 value: 15.946 - type: recall_at_1 value: 39.616 - type: recall_at_10 value: 89.545 - type: recall_at_100 value: 99.004 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 67.283 - type: recall_at_5 value: 79.73 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.72923923743124 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 42.87449955203238 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.3214434754065 - type: mrr value: 77.87879787187265 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 88.82418607751953 - type: cos_sim_spearman value: 86.74535004562274 - type: euclidean_pearson value: 86.58792166831103 - type: euclidean_spearman value: 86.74535004562274 - type: manhattan_pearson value: 86.23957813056677 - type: manhattan_spearman value: 86.41522204150452 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.61363636363636 - 
type: f1 value: 83.98373241136187 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.73148995791471 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 37.23723038699733 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.217 - type: map_at_10 value: 43.453 - type: map_at_100 value: 45.038 - type: map_at_1000 value: 45.162 - type: map_at_3 value: 39.589 - type: map_at_5 value: 41.697 - type: mrr_at_1 value: 39.628 - type: mrr_at_10 value: 49.698 - type: mrr_at_100 value: 50.44 - type: mrr_at_1000 value: 50.482000000000006 - type: mrr_at_3 value: 46.781 - type: mrr_at_5 value: 48.548 - type: ndcg_at_1 value: 39.628 - type: ndcg_at_10 value: 50.158 - type: ndcg_at_100 value: 55.687 - type: ndcg_at_1000 value: 57.499 - type: ndcg_at_3 value: 44.594 - type: ndcg_at_5 value: 47.198 - type: precision_at_1 value: 39.628 - type: precision_at_10 value: 9.828000000000001 - type: precision_at_100 value: 1.591 - type: precision_at_1000 value: 0.20600000000000002 - type: precision_at_3 value: 21.507 - type: precision_at_5 value: 15.765 - type: recall_at_1 value: 32.217 - type: recall_at_10 value: 62.717999999999996 - type: recall_at_100 value: 85.992 - type: recall_at_1000 value: 97.271 - type: recall_at_3 value: 46.694 - type: recall_at_5 value: 53.952 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.862000000000002 - type: map_at_10 value: 41.287 - type: map_at_100 value: 42.526 - type: map_at_1000 value: 42.653999999999996 - type: map_at_3 value: 38.055 - type: map_at_5 value: 40.022000000000006 - type: mrr_at_1 value: 38.408 - type: mrr_at_10 value: 46.943 - type: mrr_at_100 value: 47.597 - type: mrr_at_1000 value: 47.64 - type: mrr_at_3 value: 44.607 - type: mrr_at_5 value: 46.079 - type: ndcg_at_1 value: 38.408 - type: ndcg_at_10 value: 46.936 - type: ndcg_at_100 value: 51.307 - type: ndcg_at_1000 value: 53.312000000000005 - type: ndcg_at_3 value: 42.579 - type: ndcg_at_5 value: 44.877 - type: precision_at_1 value: 38.408 - type: precision_at_10 value: 8.885 - type: precision_at_100 value: 1.4449999999999998 - type: precision_at_1000 value: 0.192 - type: precision_at_3 value: 20.616 - type: precision_at_5 value: 14.841 - type: recall_at_1 value: 30.862000000000002 - type: recall_at_10 value: 56.994 - type: recall_at_100 value: 75.347 - type: recall_at_1000 value: 87.911 - type: recall_at_3 value: 44.230000000000004 - type: recall_at_5 value: 50.625 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 39.076 - type: map_at_10 value: 52.535 - type: map_at_100 value: 53.537 - type: map_at_1000 value: 53.591 - type: map_at_3 value: 48.961 - type: map_at_5 value: 50.96000000000001 - type: mrr_at_1 value: 44.765 - type: mrr_at_10 value: 55.615 - type: mrr_at_100 value: 56.24 - type: mrr_at_1000 value: 56.264 - type: mrr_at_3 value: 52.925999999999995 - type: mrr_at_5 value: 54.493 - type: ndcg_at_1 value: 44.765 - type: 
ndcg_at_10 value: 58.777 - type: ndcg_at_100 value: 62.574 - type: ndcg_at_1000 value: 63.624 - type: ndcg_at_3 value: 52.81 - type: ndcg_at_5 value: 55.657999999999994 - type: precision_at_1 value: 44.765 - type: precision_at_10 value: 9.693 - type: precision_at_100 value: 1.248 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 23.866 - type: precision_at_5 value: 16.489 - type: recall_at_1 value: 39.076 - type: recall_at_10 value: 74.01299999999999 - type: recall_at_100 value: 90.363 - type: recall_at_1000 value: 97.782 - type: recall_at_3 value: 58.056 - type: recall_at_5 value: 65.029 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.357000000000003 - type: map_at_10 value: 35.492000000000004 - type: map_at_100 value: 36.504999999999995 - type: map_at_1000 value: 36.578 - type: map_at_3 value: 32.696999999999996 - type: map_at_5 value: 34.388999999999996 - type: mrr_at_1 value: 28.136 - type: mrr_at_10 value: 37.383 - type: mrr_at_100 value: 38.271 - type: mrr_at_1000 value: 38.324999999999996 - type: mrr_at_3 value: 34.782999999999994 - type: mrr_at_5 value: 36.416 - type: ndcg_at_1 value: 28.136 - type: ndcg_at_10 value: 40.741 - type: ndcg_at_100 value: 45.803 - type: ndcg_at_1000 value: 47.637 - type: ndcg_at_3 value: 35.412 - type: ndcg_at_5 value: 38.251000000000005 - type: precision_at_1 value: 28.136 - type: precision_at_10 value: 6.315999999999999 - type: precision_at_100 value: 0.931 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 15.254000000000001 - type: precision_at_5 value: 10.757 - type: recall_at_1 value: 26.357000000000003 - type: recall_at_10 value: 55.021 - type: recall_at_100 value: 78.501 - type: recall_at_1000 value: 92.133 - type: recall_at_3 value: 40.798 - type: recall_at_5 value: 47.591 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.302 - type: map_at_10 value: 26.365 - type: map_at_100 value: 27.581 - type: map_at_1000 value: 27.705999999999996 - type: map_at_3 value: 23.682 - type: map_at_5 value: 25.304 - type: mrr_at_1 value: 21.891 - type: mrr_at_10 value: 31.227 - type: mrr_at_100 value: 32.22 - type: mrr_at_1000 value: 32.282 - type: mrr_at_3 value: 28.711 - type: mrr_at_5 value: 30.314999999999998 - type: ndcg_at_1 value: 21.891 - type: ndcg_at_10 value: 31.965 - type: ndcg_at_100 value: 37.869 - type: ndcg_at_1000 value: 40.642 - type: ndcg_at_3 value: 27.184 - type: ndcg_at_5 value: 29.686 - type: precision_at_1 value: 21.891 - type: precision_at_10 value: 5.9830000000000005 - type: precision_at_100 value: 1.0250000000000001 - type: precision_at_1000 value: 0.14100000000000001 - type: precision_at_3 value: 13.391 - type: precision_at_5 value: 9.801 - type: recall_at_1 value: 17.302 - type: recall_at_10 value: 44.312000000000005 - type: recall_at_100 value: 70.274 - type: recall_at_1000 value: 89.709 - type: recall_at_3 value: 31.117 - type: recall_at_5 value: 37.511 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.404000000000003 - type: map_at_10 value: 40.571 - type: map_at_100 value: 42.049 - type: map_at_1000 value: 42.156 - type: map_at_3 value: 37.413000000000004 - type: map_at_5 value: 39.206 - type: 
mrr_at_1 value: 36.285000000000004 - type: mrr_at_10 value: 46.213 - type: mrr_at_100 value: 47.129 - type: mrr_at_1000 value: 47.168 - type: mrr_at_3 value: 43.84 - type: mrr_at_5 value: 45.226 - type: ndcg_at_1 value: 36.285000000000004 - type: ndcg_at_10 value: 46.809 - type: ndcg_at_100 value: 52.615 - type: ndcg_at_1000 value: 54.538 - type: ndcg_at_3 value: 41.91 - type: ndcg_at_5 value: 44.224999999999994 - type: precision_at_1 value: 36.285000000000004 - type: precision_at_10 value: 8.527 - type: precision_at_100 value: 1.3259999999999998 - type: precision_at_1000 value: 0.167 - type: precision_at_3 value: 20.083000000000002 - type: precision_at_5 value: 14.071 - type: recall_at_1 value: 29.404000000000003 - type: recall_at_10 value: 59.611999999999995 - type: recall_at_100 value: 83.383 - type: recall_at_1000 value: 95.703 - type: recall_at_3 value: 45.663 - type: recall_at_5 value: 51.971999999999994 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.317 - type: map_at_10 value: 35.217999999999996 - type: map_at_100 value: 36.665 - type: map_at_1000 value: 36.768 - type: map_at_3 value: 31.924000000000003 - type: map_at_5 value: 33.591 - type: mrr_at_1 value: 31.507 - type: mrr_at_10 value: 40.671 - type: mrr_at_100 value: 41.609 - type: mrr_at_1000 value: 41.657 - type: mrr_at_3 value: 38.261 - type: mrr_at_5 value: 39.431 - type: ndcg_at_1 value: 31.507 - type: ndcg_at_10 value: 41.375 - type: ndcg_at_100 value: 47.426 - type: ndcg_at_1000 value: 49.504 - type: ndcg_at_3 value: 35.989 - type: ndcg_at_5 value: 38.068000000000005 - type: precision_at_1 value: 31.507 - type: precision_at_10 value: 7.8420000000000005 - type: precision_at_100 value: 1.257 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_3 value: 17.352 - type: precision_at_5 value: 12.328999999999999 - type: recall_at_1 value: 25.317 - type: recall_at_10 value: 54.254999999999995 - type: recall_at_100 value: 80.184 - type: recall_at_1000 value: 94.07 - type: recall_at_3 value: 39.117000000000004 - type: recall_at_5 value: 44.711 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.813000000000002 - type: map_at_10 value: 35.47183333333334 - type: map_at_100 value: 36.71775 - type: map_at_1000 value: 36.833000000000006 - type: map_at_3 value: 32.449916666666674 - type: map_at_5 value: 34.1235 - type: mrr_at_1 value: 30.766750000000005 - type: mrr_at_10 value: 39.77508333333334 - type: mrr_at_100 value: 40.64233333333333 - type: mrr_at_1000 value: 40.69658333333333 - type: mrr_at_3 value: 37.27349999999999 - type: mrr_at_5 value: 38.723416666666665 - type: ndcg_at_1 value: 30.766750000000005 - type: ndcg_at_10 value: 41.141416666666665 - type: ndcg_at_100 value: 46.42016666666666 - type: ndcg_at_1000 value: 48.61916666666667 - type: ndcg_at_3 value: 36.06883333333333 - type: ndcg_at_5 value: 38.43966666666666 - type: precision_at_1 value: 30.766750000000005 - type: precision_at_10 value: 7.340000000000001 - type: precision_at_100 value: 1.1796666666666666 - type: precision_at_1000 value: 0.15625 - type: precision_at_3 value: 16.763833333333334 - type: precision_at_5 value: 11.972166666666666 - type: recall_at_1 value: 25.813000000000002 - type: recall_at_10 value: 53.62741666666667 - type: recall_at_100 value: 76.70125000000002 - type: recall_at_1000 value: 
91.85566666666666 - type: recall_at_3 value: 39.55075 - type: recall_at_5 value: 45.645250000000004 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.249 - type: map_at_10 value: 31.095 - type: map_at_100 value: 32.056000000000004 - type: map_at_1000 value: 32.163000000000004 - type: map_at_3 value: 29.275000000000002 - type: map_at_5 value: 30.333 - type: mrr_at_1 value: 26.687 - type: mrr_at_10 value: 34.122 - type: mrr_at_100 value: 34.958 - type: mrr_at_1000 value: 35.039 - type: mrr_at_3 value: 32.541 - type: mrr_at_5 value: 33.43 - type: ndcg_at_1 value: 26.687 - type: ndcg_at_10 value: 35.248000000000005 - type: ndcg_at_100 value: 39.933 - type: ndcg_at_1000 value: 42.616 - type: ndcg_at_3 value: 31.980999999999998 - type: ndcg_at_5 value: 33.583 - type: precision_at_1 value: 26.687 - type: precision_at_10 value: 5.445 - type: precision_at_100 value: 0.848 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 13.957 - type: precision_at_5 value: 9.479 - type: recall_at_1 value: 23.249 - type: recall_at_10 value: 45.005 - type: recall_at_100 value: 66.175 - type: recall_at_1000 value: 86.116 - type: recall_at_3 value: 36.03 - type: recall_at_5 value: 40.037 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.592 - type: map_at_10 value: 25.003999999999998 - type: map_at_100 value: 26.208 - type: map_at_1000 value: 26.333000000000002 - type: map_at_3 value: 22.479 - type: map_at_5 value: 23.712 - type: mrr_at_1 value: 21.37 - type: mrr_at_10 value: 28.951999999999998 - type: mrr_at_100 value: 29.915999999999997 - type: mrr_at_1000 value: 29.99 - type: mrr_at_3 value: 26.503 - type: mrr_at_5 value: 27.728 - type: ndcg_at_1 value: 21.37 - type: ndcg_at_10 value: 29.944 - type: ndcg_at_100 value: 35.632000000000005 - type: ndcg_at_1000 value: 38.393 - type: ndcg_at_3 value: 25.263999999999996 - type: ndcg_at_5 value: 27.115000000000002 - type: precision_at_1 value: 21.37 - type: precision_at_10 value: 5.568 - type: precision_at_100 value: 0.992 - type: precision_at_1000 value: 0.13999999999999999 - type: precision_at_3 value: 11.895 - type: precision_at_5 value: 8.61 - type: recall_at_1 value: 17.592 - type: recall_at_10 value: 40.976 - type: recall_at_100 value: 66.487 - type: recall_at_1000 value: 85.954 - type: recall_at_3 value: 27.797 - type: recall_at_5 value: 32.553 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.173000000000002 - type: map_at_10 value: 34.611999999999995 - type: map_at_100 value: 35.735 - type: map_at_1000 value: 35.842 - type: map_at_3 value: 31.345 - type: map_at_5 value: 33.123000000000005 - type: mrr_at_1 value: 29.570999999999998 - type: mrr_at_10 value: 38.775999999999996 - type: mrr_at_100 value: 39.621 - type: mrr_at_1000 value: 39.684000000000005 - type: mrr_at_3 value: 35.992000000000004 - type: mrr_at_5 value: 37.586999999999996 - type: ndcg_at_1 value: 29.570999999999998 - type: ndcg_at_10 value: 40.388000000000005 - type: ndcg_at_100 value: 45.59 - type: ndcg_at_1000 value: 47.948 - type: ndcg_at_3 value: 34.497 - type: ndcg_at_5 value: 37.201 - type: precision_at_1 value: 29.570999999999998 - type: precision_at_10 value: 6.931 - type: precision_at_100 value: 1.082 
- type: precision_at_1000 value: 0.13999999999999999 - type: precision_at_3 value: 15.609 - type: precision_at_5 value: 11.286999999999999 - type: recall_at_1 value: 25.173000000000002 - type: recall_at_10 value: 53.949000000000005 - type: recall_at_100 value: 76.536 - type: recall_at_1000 value: 92.979 - type: recall_at_3 value: 37.987 - type: recall_at_5 value: 44.689 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.224 - type: map_at_10 value: 32.903 - type: map_at_100 value: 34.65 - type: map_at_1000 value: 34.873 - type: map_at_3 value: 29.673 - type: map_at_5 value: 31.361 - type: mrr_at_1 value: 30.435000000000002 - type: mrr_at_10 value: 38.677 - type: mrr_at_100 value: 39.805 - type: mrr_at_1000 value: 39.851 - type: mrr_at_3 value: 35.935 - type: mrr_at_5 value: 37.566 - type: ndcg_at_1 value: 30.435000000000002 - type: ndcg_at_10 value: 39.012 - type: ndcg_at_100 value: 45.553 - type: ndcg_at_1000 value: 47.919 - type: ndcg_at_3 value: 33.809 - type: ndcg_at_5 value: 36.120999999999995 - type: precision_at_1 value: 30.435000000000002 - type: precision_at_10 value: 7.628 - type: precision_at_100 value: 1.5810000000000002 - type: precision_at_1000 value: 0.243 - type: precision_at_3 value: 15.744 - type: precision_at_5 value: 11.66 - type: recall_at_1 value: 24.224 - type: recall_at_10 value: 50.009 - type: recall_at_100 value: 78.839 - type: recall_at_1000 value: 93.71300000000001 - type: recall_at_3 value: 35.512 - type: recall_at_5 value: 41.541 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.983 - type: map_at_10 value: 27.127000000000002 - type: map_at_100 value: 28.063 - type: map_at_1000 value: 28.17 - type: map_at_3 value: 24.306 - type: map_at_5 value: 25.784000000000002 - type: mrr_at_1 value: 20.518 - type: mrr_at_10 value: 29.024 - type: mrr_at_100 value: 29.902 - type: mrr_at_1000 value: 29.976999999999997 - type: mrr_at_3 value: 26.401999999999997 - type: mrr_at_5 value: 27.862 - type: ndcg_at_1 value: 20.518 - type: ndcg_at_10 value: 32.344 - type: ndcg_at_100 value: 37.053000000000004 - type: ndcg_at_1000 value: 39.798 - type: ndcg_at_3 value: 26.796999999999997 - type: ndcg_at_5 value: 29.293000000000003 - type: precision_at_1 value: 20.518 - type: precision_at_10 value: 5.434 - type: precision_at_100 value: 0.83 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 11.892 - type: precision_at_5 value: 8.577 - type: recall_at_1 value: 18.983 - type: recall_at_10 value: 46.665 - type: recall_at_100 value: 68.33399999999999 - type: recall_at_1000 value: 88.927 - type: recall_at_3 value: 31.608000000000004 - type: recall_at_5 value: 37.532 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 11.200000000000001 - type: map_at_10 value: 20.241999999999997 - type: map_at_100 value: 22.357 - type: map_at_1000 value: 22.556 - type: map_at_3 value: 16.564999999999998 - type: map_at_5 value: 18.443 - type: mrr_at_1 value: 25.277 - type: mrr_at_10 value: 37.582 - type: mrr_at_100 value: 38.525999999999996 - type: mrr_at_1000 value: 38.564 - type: mrr_at_3 value: 33.898 - type: mrr_at_5 value: 36.191 - type: ndcg_at_1 value: 25.277 - type: ndcg_at_10 value: 28.74 - type: ndcg_at_100 value: 36.665 - 
type: ndcg_at_1000 value: 40.08 - type: ndcg_at_3 value: 22.888 - type: ndcg_at_5 value: 25.081999999999997 - type: precision_at_1 value: 25.277 - type: precision_at_10 value: 9.251 - type: precision_at_100 value: 1.773 - type: precision_at_1000 value: 0.241 - type: precision_at_3 value: 17.329 - type: precision_at_5 value: 13.746 - type: recall_at_1 value: 11.200000000000001 - type: recall_at_10 value: 35.419 - type: recall_at_100 value: 62.41 - type: recall_at_1000 value: 81.467 - type: recall_at_3 value: 21.275 - type: recall_at_5 value: 27.201999999999998 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 9.396 - type: map_at_10 value: 20.735 - type: map_at_100 value: 30.098000000000003 - type: map_at_1000 value: 31.866 - type: map_at_3 value: 14.71 - type: map_at_5 value: 17.259 - type: mrr_at_1 value: 70.25 - type: mrr_at_10 value: 77.09700000000001 - type: mrr_at_100 value: 77.398 - type: mrr_at_1000 value: 77.40899999999999 - type: mrr_at_3 value: 75.542 - type: mrr_at_5 value: 76.354 - type: ndcg_at_1 value: 57.75 - type: ndcg_at_10 value: 42.509 - type: ndcg_at_100 value: 48.94 - type: ndcg_at_1000 value: 56.501000000000005 - type: ndcg_at_3 value: 46.827000000000005 - type: ndcg_at_5 value: 44.033 - type: precision_at_1 value: 70.25 - type: precision_at_10 value: 33.85 - type: precision_at_100 value: 11.373 - type: precision_at_1000 value: 2.136 - type: precision_at_3 value: 50.917 - type: precision_at_5 value: 42.8 - type: recall_at_1 value: 9.396 - type: recall_at_10 value: 26.472 - type: recall_at_100 value: 57.30800000000001 - type: recall_at_1000 value: 80.983 - type: recall_at_3 value: 15.859000000000002 - type: recall_at_5 value: 19.758 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 54.900000000000006 - type: f1 value: 48.14707395235448 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 66.369 - type: map_at_10 value: 76.708 - type: map_at_100 value: 76.981 - type: map_at_1000 value: 76.995 - type: map_at_3 value: 75.114 - type: map_at_5 value: 76.116 - type: mrr_at_1 value: 71.557 - type: mrr_at_10 value: 80.95 - type: mrr_at_100 value: 81.075 - type: mrr_at_1000 value: 81.07900000000001 - type: mrr_at_3 value: 79.728 - type: mrr_at_5 value: 80.522 - type: ndcg_at_1 value: 71.557 - type: ndcg_at_10 value: 81.381 - type: ndcg_at_100 value: 82.421 - type: ndcg_at_1000 value: 82.709 - type: ndcg_at_3 value: 78.671 - type: ndcg_at_5 value: 80.17 - type: precision_at_1 value: 71.557 - type: precision_at_10 value: 10.159 - type: precision_at_100 value: 1.089 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 30.668 - type: precision_at_5 value: 19.337 - type: recall_at_1 value: 66.369 - type: recall_at_10 value: 91.482 - type: recall_at_100 value: 95.848 - type: recall_at_1000 value: 97.749 - type: recall_at_3 value: 84.185 - type: recall_at_5 value: 87.908 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 20.902 - type: map_at_10 value: 34.554 - type: map_at_100 value: 36.632 - type: map_at_1000 value: 36.811 - type: map_at_3 value: 30.264000000000003 - type: map_at_5 value: 32.714999999999996 - type: mrr_at_1 value: 42.13 
- type: mrr_at_10 value: 51.224000000000004 - type: mrr_at_100 value: 52.044999999999995 - type: mrr_at_1000 value: 52.075 - type: mrr_at_3 value: 48.842999999999996 - type: mrr_at_5 value: 50.108 - type: ndcg_at_1 value: 42.13 - type: ndcg_at_10 value: 42.643 - type: ndcg_at_100 value: 49.806 - type: ndcg_at_1000 value: 52.583 - type: ndcg_at_3 value: 38.927 - type: ndcg_at_5 value: 40.071 - type: precision_at_1 value: 42.13 - type: precision_at_10 value: 11.928999999999998 - type: precision_at_100 value: 1.931 - type: precision_at_1000 value: 0.243 - type: precision_at_3 value: 26.337 - type: precision_at_5 value: 19.29 - type: recall_at_1 value: 20.902 - type: recall_at_10 value: 49.527 - type: recall_at_100 value: 75.754 - type: recall_at_1000 value: 92.171 - type: recall_at_3 value: 35.024 - type: recall_at_5 value: 41.207 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 39.831 - type: map_at_10 value: 63.958999999999996 - type: map_at_100 value: 64.869 - type: map_at_1000 value: 64.924 - type: map_at_3 value: 60.25 - type: map_at_5 value: 62.572 - type: mrr_at_1 value: 79.662 - type: mrr_at_10 value: 85.57900000000001 - type: mrr_at_100 value: 85.744 - type: mrr_at_1000 value: 85.748 - type: mrr_at_3 value: 84.718 - type: mrr_at_5 value: 85.312 - type: ndcg_at_1 value: 79.662 - type: ndcg_at_10 value: 72.366 - type: ndcg_at_100 value: 75.42999999999999 - type: ndcg_at_1000 value: 76.469 - type: ndcg_at_3 value: 67.258 - type: ndcg_at_5 value: 70.14099999999999 - type: precision_at_1 value: 79.662 - type: precision_at_10 value: 15.254999999999999 - type: precision_at_100 value: 1.763 - type: precision_at_1000 value: 0.19 - type: precision_at_3 value: 43.358000000000004 - type: precision_at_5 value: 28.288999999999998 - type: recall_at_1 value: 39.831 - type: recall_at_10 value: 76.273 - type: recall_at_100 value: 88.163 - type: recall_at_1000 value: 95.017 - type: recall_at_3 value: 65.037 - type: recall_at_5 value: 70.722 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 93.13879999999999 - type: ap value: 89.94638859649079 - type: f1 value: 93.13371537570421 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 21.482 - type: map_at_10 value: 33.635999999999996 - type: map_at_100 value: 34.792 - type: map_at_1000 value: 34.839999999999996 - type: map_at_3 value: 29.553 - type: map_at_5 value: 31.892 - type: mrr_at_1 value: 22.076999999999998 - type: mrr_at_10 value: 34.247 - type: mrr_at_100 value: 35.337 - type: mrr_at_1000 value: 35.38 - type: mrr_at_3 value: 30.208000000000002 - type: mrr_at_5 value: 32.554 - type: ndcg_at_1 value: 22.092 - type: ndcg_at_10 value: 40.657 - type: ndcg_at_100 value: 46.251999999999995 - type: ndcg_at_1000 value: 47.466 - type: ndcg_at_3 value: 32.353 - type: ndcg_at_5 value: 36.532 - type: precision_at_1 value: 22.092 - type: precision_at_10 value: 6.5040000000000004 - type: precision_at_100 value: 0.9329999999999999 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 13.719999999999999 - type: precision_at_5 value: 10.344000000000001 - type: recall_at_1 value: 21.482 - type: recall_at_10 value: 62.316 - type: recall_at_100 value: 88.283 - type: recall_at_1000 value: 97.554 - type: recall_at_3 value: 39.822 - 
type: recall_at_5 value: 49.805 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.63657090743274 - type: f1 value: 93.49355466580484 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 66.01459188326493 - type: f1 value: 48.48386472180784 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.49024882313383 - type: f1 value: 71.8750196914349 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.38063214525891 - type: f1 value: 76.87364042122763 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 34.30572302322684 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 32.18418556367587 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.268707296386154 - type: mrr value: 33.481925531215055 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.586 - type: map_at_10 value: 14.954999999999998 - type: map_at_100 value: 19.03 - type: map_at_1000 value: 20.653 - type: map_at_3 value: 10.859 - type: map_at_5 value: 12.577 - type: mrr_at_1 value: 47.988 - type: mrr_at_10 value: 57.57 - type: mrr_at_100 value: 58.050000000000004 - type: mrr_at_1000 value: 58.083 - type: mrr_at_3 value: 55.212 - type: mrr_at_5 value: 56.713 - type: ndcg_at_1 value: 45.975 - type: ndcg_at_10 value: 38.432 - type: ndcg_at_100 value: 35.287 - type: ndcg_at_1000 value: 44.35 - type: ndcg_at_3 value: 43.077 - type: ndcg_at_5 value: 40.952 - type: precision_at_1 value: 47.368 - type: precision_at_10 value: 28.483000000000004 - type: precision_at_100 value: 8.882 - type: precision_at_1000 value: 2.217 - type: precision_at_3 value: 40.144000000000005 - type: precision_at_5 value: 35.17 - type: recall_at_1 value: 6.586 - type: recall_at_10 value: 19.688 - type: recall_at_100 value: 35.426 - type: recall_at_1000 value: 68.09100000000001 - type: recall_at_3 value: 12.234 - type: recall_at_5 value: 14.937000000000001 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 27.322000000000003 - type: map_at_10 value: 43.224000000000004 - type: map_at_100 value: 44.275999999999996 - type: map_at_1000 value: 44.308 - type: map_at_3 value: 38.239000000000004 - type: map_at_5 value: 41.244 - type: mrr_at_1 value: 31.025000000000002 - type: mrr_at_10 value: 45.635 - type: mrr_at_100 value: 46.425 - type: mrr_at_1000 value: 46.445 - type: mrr_at_3 value: 41.42 - type: 
mrr_at_5 value: 44.038 - type: ndcg_at_1 value: 30.997000000000003 - type: ndcg_at_10 value: 51.55499999999999 - type: ndcg_at_100 value: 55.964999999999996 - type: ndcg_at_1000 value: 56.657000000000004 - type: ndcg_at_3 value: 42.185 - type: ndcg_at_5 value: 47.229 - type: precision_at_1 value: 30.997000000000003 - type: precision_at_10 value: 8.885 - type: precision_at_100 value: 1.1360000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 19.457 - type: precision_at_5 value: 14.554 - type: recall_at_1 value: 27.322000000000003 - type: recall_at_10 value: 74.59400000000001 - type: recall_at_100 value: 93.699 - type: recall_at_1000 value: 98.76599999999999 - type: recall_at_3 value: 50.43 - type: recall_at_5 value: 62.073 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 71.109 - type: map_at_10 value: 85.137 - type: map_at_100 value: 85.759 - type: map_at_1000 value: 85.774 - type: map_at_3 value: 82.25200000000001 - type: map_at_5 value: 84.031 - type: mrr_at_1 value: 82.01 - type: mrr_at_10 value: 87.97 - type: mrr_at_100 value: 88.076 - type: mrr_at_1000 value: 88.076 - type: mrr_at_3 value: 87.06 - type: mrr_at_5 value: 87.694 - type: ndcg_at_1 value: 81.99 - type: ndcg_at_10 value: 88.738 - type: ndcg_at_100 value: 89.928 - type: ndcg_at_1000 value: 90.01400000000001 - type: ndcg_at_3 value: 86.042 - type: ndcg_at_5 value: 87.505 - type: precision_at_1 value: 81.99 - type: precision_at_10 value: 13.468 - type: precision_at_100 value: 1.534 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.702999999999996 - type: precision_at_5 value: 24.706 - type: recall_at_1 value: 71.109 - type: recall_at_10 value: 95.58 - type: recall_at_100 value: 99.62299999999999 - type: recall_at_1000 value: 99.98899999999999 - type: recall_at_3 value: 87.69 - type: recall_at_5 value: 91.982 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 59.43361510023748 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.53582642500159 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.2299999999999995 - type: map_at_10 value: 11.802 - type: map_at_100 value: 14.454 - type: map_at_1000 value: 14.865 - type: map_at_3 value: 7.911 - type: map_at_5 value: 9.912 - type: mrr_at_1 value: 21.0 - type: mrr_at_10 value: 32.722 - type: mrr_at_100 value: 33.989000000000004 - type: mrr_at_1000 value: 34.026 - type: mrr_at_3 value: 28.65 - type: mrr_at_5 value: 31.075000000000003 - type: ndcg_at_1 value: 21.0 - type: ndcg_at_10 value: 20.161 - type: ndcg_at_100 value: 30.122 - type: ndcg_at_1000 value: 36.399 - type: ndcg_at_3 value: 17.881 - type: ndcg_at_5 value: 16.439999999999998 - type: precision_at_1 value: 21.0 - type: precision_at_10 value: 10.94 - type: precision_at_100 value: 2.5340000000000003 - type: precision_at_1000 value: 0.402 - type: precision_at_3 value: 17.067 - type: precision_at_5 value: 15.120000000000001 - type: recall_at_1 value: 4.2299999999999995 - type: recall_at_10 value: 22.163 - type: recall_at_100 value: 51.42 - type: recall_at_1000 value: 81.652 - type: 
recall_at_3 value: 10.353 - type: recall_at_5 value: 15.323 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 86.44056731476951 - type: cos_sim_spearman value: 82.32974396072802 - type: euclidean_pearson value: 83.63616080755894 - type: euclidean_spearman value: 82.32974071069209 - type: manhattan_pearson value: 83.64149958303744 - type: manhattan_spearman value: 82.32161014878858 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 85.65083720426293 - type: cos_sim_spearman value: 77.60786500521749 - type: euclidean_pearson value: 81.8149634918642 - type: euclidean_spearman value: 77.60637450428892 - type: manhattan_pearson value: 81.83507575657566 - type: manhattan_spearman value: 77.613220311151 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 87.35683624595698 - type: cos_sim_spearman value: 87.94550696434106 - type: euclidean_pearson value: 87.50272679030367 - type: euclidean_spearman value: 87.94550696434106 - type: manhattan_pearson value: 87.4759786099497 - type: manhattan_spearman value: 87.90226811166427 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 86.27438743391316 - type: cos_sim_spearman value: 83.85378984594779 - type: euclidean_pearson value: 85.25840635223642 - type: euclidean_spearman value: 83.85378983163673 - type: manhattan_pearson value: 85.24936075631025 - type: manhattan_spearman value: 83.85052479958138 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.4783814521557 - type: cos_sim_spearman value: 88.473284566453 - type: euclidean_pearson value: 87.94757741870404 - type: euclidean_spearman value: 88.47327698999878 - type: manhattan_pearson value: 87.93617414057984 - type: manhattan_spearman value: 88.45889274229359 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.68359147631057 - type: cos_sim_spearman value: 86.46426572535646 - type: euclidean_pearson value: 85.98303971468599 - type: euclidean_spearman value: 86.46426572535646 - type: manhattan_pearson value: 85.95109710640726 - type: manhattan_spearman value: 86.43282632541583 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.88758959688604 - type: cos_sim_spearman value: 88.70384784133324 - type: euclidean_pearson value: 89.27293800474978 - type: euclidean_spearman value: 88.70384784133324 - type: manhattan_pearson value: 89.41494348093664 - type: manhattan_spearman value: 88.8330050824941 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.66759812551814 - type: cos_sim_spearman value: 68.02368115471576 - type: 
euclidean_pearson value: 69.52859542757353 - type: euclidean_spearman value: 68.02368115471576 - type: manhattan_pearson value: 69.50332399468952 - type: manhattan_spearman value: 67.91228681203849 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.75891320010409 - type: cos_sim_spearman value: 88.33063922402347 - type: euclidean_pearson value: 88.02964654543274 - type: euclidean_spearman value: 88.33063922402347 - type: manhattan_pearson value: 88.03029440701458 - type: manhattan_spearman value: 88.3158691488696 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.46897310470844 - type: mrr value: 96.29042072669523 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 62.261 - type: map_at_10 value: 71.023 - type: map_at_100 value: 71.5 - type: map_at_1000 value: 71.518 - type: map_at_3 value: 67.857 - type: map_at_5 value: 69.44500000000001 - type: mrr_at_1 value: 65.0 - type: mrr_at_10 value: 72.11 - type: mrr_at_100 value: 72.479 - type: mrr_at_1000 value: 72.49600000000001 - type: mrr_at_3 value: 69.722 - type: mrr_at_5 value: 71.02199999999999 - type: ndcg_at_1 value: 65.0 - type: ndcg_at_10 value: 75.40599999999999 - type: ndcg_at_100 value: 77.41 - type: ndcg_at_1000 value: 77.83200000000001 - type: ndcg_at_3 value: 69.95599999999999 - type: ndcg_at_5 value: 72.296 - type: precision_at_1 value: 65.0 - type: precision_at_10 value: 9.966999999999999 - type: precision_at_100 value: 1.097 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.667 - type: precision_at_5 value: 17.666999999999998 - type: recall_at_1 value: 62.261 - type: recall_at_10 value: 87.822 - type: recall_at_100 value: 96.833 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 73.06099999999999 - type: recall_at_5 value: 78.88300000000001 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.86138613861387 - type: cos_sim_ap value: 96.7851799601876 - type: cos_sim_f1 value: 92.94354838709677 - type: cos_sim_precision value: 93.69918699186992 - type: cos_sim_recall value: 92.2 - type: dot_accuracy value: 99.86138613861387 - type: dot_ap value: 96.78517996018759 - type: dot_f1 value: 92.94354838709677 - type: dot_precision value: 93.69918699186992 - type: dot_recall value: 92.2 - type: euclidean_accuracy value: 99.86138613861387 - type: euclidean_ap value: 96.78517996018759 - type: euclidean_f1 value: 92.94354838709677 - type: euclidean_precision value: 93.69918699186992 - type: euclidean_recall value: 92.2 - type: manhattan_accuracy value: 99.86336633663366 - type: manhattan_ap value: 96.79790073128503 - type: manhattan_f1 value: 93.0930930930931 - type: manhattan_precision value: 93.18637274549098 - type: manhattan_recall value: 93.0 - type: max_accuracy value: 99.86336633663366 - type: max_ap value: 96.79790073128503 - type: max_f1 value: 93.0930930930931 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 
6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 65.07696952556874 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 35.51701116515262 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.40099299306496 - type: mrr value: 56.411316420507596 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.940008734510055 - type: cos_sim_spearman value: 31.606997026865212 - type: dot_pearson value: 30.940010256206353 - type: dot_spearman value: 31.62194110302714 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.197 - type: map_at_10 value: 1.6549999999999998 - type: map_at_100 value: 8.939 - type: map_at_1000 value: 22.402 - type: map_at_3 value: 0.587 - type: map_at_5 value: 0.931 - type: mrr_at_1 value: 74.0 - type: mrr_at_10 value: 84.667 - type: mrr_at_100 value: 84.667 - type: mrr_at_1000 value: 84.667 - type: mrr_at_3 value: 83.667 - type: mrr_at_5 value: 84.667 - type: ndcg_at_1 value: 69.0 - type: ndcg_at_10 value: 66.574 - type: ndcg_at_100 value: 51.074 - type: ndcg_at_1000 value: 47.263 - type: ndcg_at_3 value: 71.95 - type: ndcg_at_5 value: 70.52000000000001 - type: precision_at_1 value: 74.0 - type: precision_at_10 value: 70.39999999999999 - type: precision_at_100 value: 52.580000000000005 - type: precision_at_1000 value: 20.93 - type: precision_at_3 value: 76.667 - type: precision_at_5 value: 75.6 - type: recall_at_1 value: 0.197 - type: recall_at_10 value: 1.92 - type: recall_at_100 value: 12.655 - type: recall_at_1000 value: 44.522 - type: recall_at_3 value: 0.639 - type: recall_at_5 value: 1.03 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.735 - type: map_at_10 value: 9.064 - type: map_at_100 value: 15.021999999999998 - type: map_at_1000 value: 16.596 - type: map_at_3 value: 4.188 - type: map_at_5 value: 6.194999999999999 - type: mrr_at_1 value: 26.531 - type: mrr_at_10 value: 44.413000000000004 - type: mrr_at_100 value: 45.433 - type: mrr_at_1000 value: 45.452999999999996 - type: mrr_at_3 value: 41.497 - type: mrr_at_5 value: 42.925000000000004 - type: ndcg_at_1 value: 22.448999999999998 - type: ndcg_at_10 value: 22.597 - type: ndcg_at_100 value: 34.893 - type: ndcg_at_1000 value: 46.763 - type: ndcg_at_3 value: 24.366 - type: ndcg_at_5 value: 23.959 - type: precision_at_1 value: 26.531 - type: precision_at_10 value: 21.02 - type: precision_at_100 value: 7.51 - type: precision_at_1000 value: 1.541 - type: precision_at_3 value: 27.211000000000002 - type: precision_at_5 value: 25.306 - type: recall_at_1 value: 1.735 - type: recall_at_10 value: 15.870999999999999 - type: recall_at_100 value: 47.385 - type: recall_at_1000 value: 83.55 - type: recall_at_3 value: 5.813 - type: recall_at_5 value: 9.707 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: 
d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.19 - type: ap value: 15.106812062408629 - type: f1 value: 55.254852511954255 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.553480475382 - type: f1 value: 61.697424438626435 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 53.12092298453447 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.35173153722357 - type: cos_sim_ap value: 78.22985044080261 - type: cos_sim_f1 value: 71.23356926188069 - type: cos_sim_precision value: 68.36487142163999 - type: cos_sim_recall value: 74.35356200527704 - type: dot_accuracy value: 87.35173153722357 - type: dot_ap value: 78.22985958574529 - type: dot_f1 value: 71.23356926188069 - type: dot_precision value: 68.36487142163999 - type: dot_recall value: 74.35356200527704 - type: euclidean_accuracy value: 87.35173153722357 - type: euclidean_ap value: 78.22985909816191 - type: euclidean_f1 value: 71.23356926188069 - type: euclidean_precision value: 68.36487142163999 - type: euclidean_recall value: 74.35356200527704 - type: manhattan_accuracy value: 87.36365261965786 - type: manhattan_ap value: 78.18108280854142 - type: manhattan_f1 value: 71.19958634953466 - type: manhattan_precision value: 69.79219462747086 - type: manhattan_recall value: 72.66490765171504 - type: max_accuracy value: 87.36365261965786 - type: max_ap value: 78.22985958574529 - type: max_f1 value: 71.23356926188069 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.71424690495596 - type: cos_sim_ap value: 85.53000600450122 - type: cos_sim_f1 value: 77.95508274231679 - type: cos_sim_precision value: 74.92189718829879 - type: cos_sim_recall value: 81.24422543886665 - type: dot_accuracy value: 88.71424690495596 - type: dot_ap value: 85.53000387261983 - type: dot_f1 value: 77.95508274231679 - type: dot_precision value: 74.92189718829879 - type: dot_recall value: 81.24422543886665 - type: euclidean_accuracy value: 88.71424690495596 - type: euclidean_ap value: 85.53000527321076 - type: euclidean_f1 value: 77.95508274231679 - type: euclidean_precision value: 74.92189718829879 - type: euclidean_recall value: 81.24422543886665 - type: manhattan_accuracy value: 88.7297706368611 - type: manhattan_ap value: 85.49670114967172 - type: manhattan_f1 value: 77.91265729089562 - type: manhattan_precision value: 75.01425313568986 - type: manhattan_recall value: 81.04404065291038 - type: max_accuracy value: 88.7297706368611 - type: max_ap value: 85.53000600450122 - type: max_f1 value: 77.95508274231679 --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. 
<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 1196 with parameters:

```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:

```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the fit()-Method:

```
{
    "epochs": 5,
    "evaluation_steps": 50,
    "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 598,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
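
## Usage (HuggingFace Transformers)

Since the architecture above pools with the CLS token and then L2-normalizes, the same embeddings can also be reproduced with plain `transformers` — a minimal sketch (the `{MODEL_NAME}` placeholder must be replaced with the actual repository id; the pooling code mirrors the modules listed above):

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

sentences = ["This is an example sentence", "Each sentence is converted"]

tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize (max_seq_length is 512 per the architecture above)
encoded_input = tokenizer(sentences, padding=True, truncation=True,
                          max_length=512, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

# CLS-token pooling, matching pooling_mode_cls_token=True above
embeddings = model_output.last_hidden_state[:, 0]

# L2 normalization, matching the Normalize() module above
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings)
```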
timm/vit_tiny_patch16_224.augreg_in21k
timm
"2023-05-06T00:29:58Z"
6,130
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-21k", "arxiv:2106.10270", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-22T07:55:55Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-21k
---

# Model card for vit_tiny_patch16_224.augreg_in21k

A Vision Transformer (ViT) image classification model. Trained on ImageNet-21k (with additional augmentation and regularization) in JAX by the paper authors, ported to PyTorch by Ross Wightman.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 9.7
  - GMACs: 1.1
  - Activations (M): 4.1
  - Image size: 224 x 224
- **Papers:**
  - How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('vit_tiny_patch16_224.augreg_in21k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'vit_tiny_patch16_224.augreg_in21k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 192) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@article{steiner2021augreg,
  title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
  author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
  journal={arXiv preprint arXiv:2106.10270},
  year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal={ICLR},
  year={2021}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
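
Because the card positions this model as a feature backbone pretrained on ImageNet-21k, a common pattern is to swap in a fresh head and fine-tune on a downstream task — a minimal sketch (the 10-class head, dummy batch, and hyperparameters are illustrative placeholders, not from the original card):

```python
import timm
import torch

# Recreate the backbone with a fresh classification head for a downstream task.
# num_classes=10 is an illustrative placeholder; set it to your label count.
model = timm.create_model(
    'vit_tiny_patch16_224.augreg_in21k',
    pretrained=True,
    num_classes=10,
)
model = model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

# One illustrative training step on dummy data.
images = torch.randn(8, 3, 224, 224)   # batch of 8 RGB images at 224 x 224
labels = torch.randint(0, 10, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```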
Yntec/AnythingV4-768
Yntec
"2024-05-15T10:06:18Z"
6,129
2
diffusers
[ "diffusers", "safetensors", "anime", "general", "andite", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-11-12T12:50:01Z"
---
language:
- en
license: creativeml-openrail-m
tags:
- anime
- general
- andite
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

# Anything V4

A 768x768 version of this model with the MoistMix V2 VAE baked in for the Inference API.

Original page: https://huggingface.co/xyn-ai/anything-v4.0

Sample and prompt:

![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/sp7Nk14WSgNpAn2zkA6mi.png)

Realistic girl standing. Very cute anime faces, chibi art, flawless, painting by gaston bussiere, charles sillem lidderdale. perfect face, full body, baby, masterpiece, highest quality, 1girl, blue eyes, sweater, Pretty CUTE GIRL, skirt, highly detailed
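
The checkpoint ships as a `diffusers` `StableDiffusionPipeline` (per the tags above), so local generation follows the standard pipeline API — a minimal sketch (GPU, fp16, and the shortened sample prompt are illustrative choices, not requirements):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/AnythingV4-768",
    torch_dtype=torch.float16,  # fp16 is optional; use torch.float32 on CPU
)
pipe = pipe.to("cuda")

prompt = ("Realistic girl standing. Very cute anime faces, chibi art, "
          "masterpiece, highest quality, 1girl, blue eyes, sweater, skirt")

# This checkpoint targets 768x768, so request that resolution explicitly.
image = pipe(prompt, width=768, height=768).images[0]
image.save("anything_v4_sample.png")
```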
weqweasdas/RM-Gemma-2B
weqweasdas
"2024-03-22T02:39:22Z"
6,126
13
transformers
[ "transformers", "safetensors", "gemma", "text-classification", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
"2024-02-25T01:56:25Z"
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Reward Model Overview

<!-- Provide a quick summary of what the model is/does. -->

The reward model is trained from the base model [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it). See the 7B version [RM-Gemma-7B](https://huggingface.co/weqweasdas/RM-Gemma-7B).

The training script is available at https://github.com/WeiXiongUST/RLHF-Reward-Modeling .

## Model Details

If you have any questions about this reward model, or about reward modeling in general, feel free to drop me an email at [email protected]. I would be happy to chat!

### Dataset preprocessing

<!-- Provide a longer summary of what this model is. -->

The model is trained on a mixture of
- [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [SHP](https://huggingface.co/datasets/stanfordnlp/SHP)
- [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback)
- [Capybara](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized)
- [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer)
- [Orca](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs)

The total number of comparison pairs is 250K, where we apply the following data selection and cleaning strategies:
- HH-RLHF: we use all the base, rejection sampling, and online subsets but delete the samples whose chosen == rejected, leading to 115547;
- SHP: we only use the samples with score ratio > 2; for each prompt, we only take 1 comparison, leading to 55916;
- UltraFeedback: similar to [UltraFeedback-Binarized](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned), we use the fine-grained score instead of the overall one to rank samples. Meanwhile, for each prompt, we take the best sample vs. a randomly chosen one from the remaining samples. Finally, we delete the selected pairs with equal scores, leading to 62793;
- HelpSteer: we use the mean of helpfulness and correctness to rank samples. Meanwhile, we take the best sample vs. a randomly chosen one from the remaining samples. Finally, we delete the selected pairs with equal scores, leading to 8206;
- Capybara: we delete the pairs whose chosen and rejected samples have the same rating, leading to 7562;
- Orca: we delete the pairs whose chosen and rejected samples have the same rating, leading to 6405.

### Training

We train the model for one epoch with a learning rate of 1e-5, batch size 256, and cosine learning rate decay with a warmup ratio of 0.03. We present the training curve as follows.

![Training Loss](training_curve.png)

## Uses

```python
import torch
from transformers import AutoTokenizer, pipeline

rm_tokenizer = AutoTokenizer.from_pretrained("weqweasdas/RM-Gemma-2B")
device = 0  # accelerator.device

rm_pipe = pipeline(
    "sentiment-analysis",
    model="weqweasdas/RM-Gemma-2B",
    #device="auto",
    device=device,
    tokenizer=rm_tokenizer,
    model_kwargs={"torch_dtype": torch.bfloat16}
)

pipe_kwargs = {
    "return_all_scores": True,
    "function_to_apply": "none",
    "batch_size": 1
}

chat = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
]

test_texts = [rm_tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False).replace(rm_tokenizer.bos_token, "")]
pipe_outputs = rm_pipe(test_texts, **pipe_kwargs)
rewards = [output[0]["score"] for output in pipe_outputs]
```

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

## Results

We collect the existing preference datasets and use them as a benchmark to evaluate the resulting reward model. Note that for the MT-Bench dataset (lmsys/mt_bench_human_judgments), we delete the samples whose comparison result is a tie. The Alpaca data is from [here](https://huggingface.co/datasets/tatsu-lab/alpaca_eval/tree/main).

| Model/Test set | HH-RLHF-Helpful | SHP | Helpsteer helpful + correctness | Helpsteer All | MT Bench Human | MT Bench GPT4 | Alpaca Human | Alpaca GPT4 | Alpaca Human-crossed |
| :------------: | :-------------: | :------: | :-----------------------------: | :-----------: | :------------: | :-----------: | :----------: | :---------: | :------------------: |
| UltraRM-13B | **0.71** | **0.73** | 0.72 | **0.72** | **0.78** | **0.9** | **0.65** | **0.83** | **0.62** |
| Pair-RM | 0.65 | 0.56 | 0.62 | 0.6 | 0.74 | 0.82 | 0.62 | 0.75 | 0.59 |
| RM-Gemma-2B | 0.68 | **0.73** | 0.68 | **0.72** | 0.77 | 0.87 | 0.63 | 0.78 | 0.59 |

## Reference

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

To be added. The reward model may be readily used for rejection sampling finetuning (RAFT):

```
@article{dong2023raft,
  title={Raft: Reward ranked finetuning for generative foundation model alignment},
  author={Dong, Hanze and Xiong, Wei and Goyal, Deepanshu and Pan, Rui and Diao, Shizhe and Zhang, Jipeng and Shum, Kashun and Zhang, Tong},
  journal={arXiv preprint arXiv:2304.06767},
  year={2023}
}
```
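
As a concrete illustration of that rejection-sampling use case, the reward pipeline can score several candidate responses to one prompt and keep the highest-scoring one — a minimal sketch reusing `rm_tokenizer`, `rm_pipe`, and `pipe_kwargs` from the Uses section (the prompt and candidate answers are made up for illustration):

```python
prompt = {"role": "user", "content": "Explain what a reward model is in one sentence."}
candidates = [
    "A reward model scores how well a response satisfies human preferences.",
    "It is a kind of language model.",
    "Reward models are used in RLHF to rank candidate outputs by quality.",
]

# Score each candidate response with the reward model.
scores = []
for answer in candidates:
    chat = [prompt, {"role": "assistant", "content": answer}]
    text = rm_tokenizer.apply_chat_template(
        chat, tokenize=False, add_generation_prompt=False
    ).replace(rm_tokenizer.bos_token, "")
    scores.append(rm_pipe([text], **pipe_kwargs)[0][0]["score"])

# Keep the highest-reward candidate, as in rejection sampling.
best = candidates[scores.index(max(scores))]
print(best, max(scores))
```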
mixedbread-ai/mxbai-colbert-large-v1
mixedbread-ai
"2024-04-03T16:10:50Z"
6,125
45
transformers
[ "transformers", "onnx", "safetensors", "bert", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-03-18T10:58:58Z"
--- license: apache-2.0 --- <br><br> <p align="center"> <svg xmlns="http://www.w3.org/2000/svg" xml:space="preserve" viewBox="0 0 2020 1130" width="150" height="150" aria-hidden="true"><path fill="#e95a0f" d="M398.167 621.992c-1.387-20.362-4.092-40.739-3.851-61.081.355-30.085 6.873-59.139 21.253-85.976 10.487-19.573 24.09-36.822 40.662-51.515 16.394-14.535 34.338-27.046 54.336-36.182 15.224-6.955 31.006-12.609 47.829-14.168 11.809-1.094 23.753-2.514 35.524-1.836 23.033 1.327 45.131 7.255 66.255 16.75 16.24 7.3 31.497 16.165 45.651 26.969 12.997 9.921 24.412 21.37 34.158 34.509 11.733 15.817 20.849 33.037 25.987 52.018 3.468 12.81 6.438 25.928 7.779 39.097 1.722 16.908 1.642 34.003 2.235 51.021.427 12.253.224 24.547 1.117 36.762 1.677 22.93 4.062 45.764 11.8 67.7 5.376 15.239 12.499 29.55 20.846 43.681l-18.282 20.328c-1.536 1.71-2.795 3.665-4.254 5.448l-19.323 23.533c-13.859-5.449-27.446-11.803-41.657-16.086-13.622-4.106-27.793-6.765-41.905-8.775-15.256-2.173-30.701-3.475-46.105-4.049-23.571-.879-47.178-1.056-70.769-1.029-10.858.013-21.723 1.116-32.57 1.926-5.362.4-10.69 1.255-16.464 1.477-2.758-7.675-5.284-14.865-7.367-22.181-3.108-10.92-4.325-22.554-13.16-31.095-2.598-2.512-5.069-5.341-6.883-8.443-6.366-10.884-12.48-21.917-18.571-32.959-4.178-7.573-8.411-14.375-17.016-18.559-10.34-5.028-19.538-12.387-29.311-18.611-3.173-2.021-6.414-4.312-9.952-5.297-5.857-1.63-11.98-2.301-17.991-3.376z"></path><path fill="#ed6d7b" d="M1478.998 758.842c-12.025.042-24.05.085-36.537-.373-.14-8.536.231-16.569.453-24.607.033-1.179-.315-2.986-1.081-3.4-.805-.434-2.376.338-3.518.81-.856.354-1.562 1.069-3.589 2.521-.239-3.308-.664-5.586-.519-7.827.488-7.544 2.212-15.166 1.554-22.589-1.016-11.451 1.397-14.592-12.332-14.419-3.793.048-3.617-2.803-3.332-5.331.499-4.422 1.45-8.803 1.77-13.233.311-4.316.068-8.672.068-12.861-2.554-.464-4.326-.86-6.12-1.098-4.415-.586-6.051-2.251-5.065-7.31 1.224-6.279.848-12.862 1.276-19.306.19-2.86-.971-4.473-3.794-4.753-4.113-.407-8.242-1.057-12.352-.975-4.663.093-5.192-2.272-4.751-6.012.733-6.229 1.252-12.483 1.875-18.726l1.102-10.495c-5.905-.309-11.146-.805-16.385-.778-3.32.017-5.174-1.4-5.566-4.4-1.172-8.968-2.479-17.944-3.001-26.96-.26-4.484-1.936-5.705-6.005-5.774-9.284-.158-18.563-.594-27.843-.953-7.241-.28-10.137-2.764-11.3-9.899-.746-4.576-2.715-7.801-7.777-8.207-7.739-.621-15.511-.992-23.207-1.961-7.327-.923-14.587-2.415-21.853-3.777-5.021-.941-10.003-2.086-15.003-3.14 4.515-22.952 13.122-44.382 26.284-63.587 18.054-26.344 41.439-47.239 69.102-63.294 15.847-9.197 32.541-16.277 50.376-20.599 16.655-4.036 33.617-5.715 50.622-4.385 33.334 2.606 63.836 13.955 92.415 31.15 15.864 9.545 30.241 20.86 42.269 34.758 8.113 9.374 15.201 19.78 21.718 30.359 10.772 17.484 16.846 36.922 20.611 56.991 1.783 9.503 2.815 19.214 3.318 28.876.758 14.578.755 29.196.65 44.311l-51.545 20.013c-7.779 3.059-15.847 5.376-21.753 12.365-4.73 5.598-10.658 10.316-16.547 14.774-9.9 7.496-18.437 15.988-25.083 26.631-3.333 5.337-7.901 10.381-12.999 14.038-11.355 8.144-17.397 18.973-19.615 32.423l-6.988 41.011z"></path><path fill="#ec663e" d="M318.11 923.047c-.702 17.693-.832 35.433-2.255 53.068-1.699 21.052-6.293 41.512-14.793 61.072-9.001 20.711-21.692 38.693-38.496 53.583-16.077 14.245-34.602 24.163-55.333 30.438-21.691 6.565-43.814 8.127-66.013 6.532-22.771-1.636-43.88-9.318-62.74-22.705-20.223-14.355-35.542-32.917-48.075-54.096-9.588-16.203-16.104-33.55-19.201-52.015-2.339-13.944-2.307-28.011-.403-42.182 2.627-19.545 9.021-37.699 17.963-55.067 11.617-22.564 27.317-41.817 48.382-56.118 15.819-10.74 
33.452-17.679 52.444-20.455 8.77-1.282 17.696-1.646 26.568-2.055 11.755-.542 23.534-.562 35.289-1.11 8.545-.399 17.067-1.291 26.193-1.675 1.349 1.77 2.24 3.199 2.835 4.742 4.727 12.261 10.575 23.865 18.636 34.358 7.747 10.084 14.83 20.684 22.699 30.666 3.919 4.972 8.37 9.96 13.609 13.352 7.711 4.994 16.238 8.792 24.617 12.668 5.852 2.707 12.037 4.691 18.074 6.998z"></path><path fill="#ea580e" d="M1285.167 162.995c3.796-29.75 13.825-56.841 32.74-80.577 16.339-20.505 36.013-36.502 59.696-47.614 14.666-6.881 29.971-11.669 46.208-12.749 10.068-.669 20.239-1.582 30.255-.863 16.6 1.191 32.646 5.412 47.9 12.273 19.39 8.722 36.44 20.771 50.582 36.655 15.281 17.162 25.313 37.179 31.49 59.286 5.405 19.343 6.31 39.161 4.705 58.825-2.37 29.045-11.836 55.923-30.451 78.885-10.511 12.965-22.483 24.486-37.181 33.649-5.272-5.613-10.008-11.148-14.539-16.846-5.661-7.118-10.958-14.533-16.78-21.513-4.569-5.478-9.548-10.639-14.624-15.658-3.589-3.549-7.411-6.963-11.551-9.827-5.038-3.485-10.565-6.254-15.798-9.468-8.459-5.195-17.011-9.669-26.988-11.898-12.173-2.72-24.838-4.579-35.622-11.834-1.437-.967-3.433-1.192-5.213-1.542-12.871-2.529-25.454-5.639-36.968-12.471-5.21-3.091-11.564-4.195-17.011-6.965-4.808-2.445-8.775-6.605-13.646-8.851-8.859-4.085-18.114-7.311-27.204-10.896z"></path><path fill="#f8ab00" d="M524.963 311.12c-9.461-5.684-19.513-10.592-28.243-17.236-12.877-9.801-24.031-21.578-32.711-35.412-11.272-17.965-19.605-37.147-21.902-58.403-1.291-11.951-2.434-24.073-1.87-36.034.823-17.452 4.909-34.363 11.581-50.703 8.82-21.603 22.25-39.792 39.568-55.065 18.022-15.894 39.162-26.07 62.351-32.332 19.22-5.19 38.842-6.177 58.37-4.674 23.803 1.831 45.56 10.663 65.062 24.496 17.193 12.195 31.688 27.086 42.894 45.622-11.403 8.296-22.633 16.117-34.092 23.586-17.094 11.142-34.262 22.106-48.036 37.528-8.796 9.848-17.201 20.246-27.131 28.837-16.859 14.585-27.745 33.801-41.054 51.019-11.865 15.349-20.663 33.117-30.354 50.08-5.303 9.283-9.654 19.11-14.434 28.692z"></path><path fill="#ea5227" d="M1060.11 1122.049c-7.377 1.649-14.683 4.093-22.147 4.763-11.519 1.033-23.166 1.441-34.723 1.054-19.343-.647-38.002-4.7-55.839-12.65-15.078-6.72-28.606-15.471-40.571-26.836-24.013-22.81-42.053-49.217-49.518-81.936-1.446-6.337-1.958-12.958-2.235-19.477-.591-13.926-.219-27.909-1.237-41.795-.916-12.5-3.16-24.904-4.408-37.805 1.555-1.381 3.134-2.074 3.778-3.27 4.729-8.79 12.141-15.159 19.083-22.03 5.879-5.818 10.688-12.76 16.796-18.293 6.993-6.335 11.86-13.596 14.364-22.612l8.542-29.993c8.015 1.785 15.984 3.821 24.057 5.286 8.145 1.478 16.371 2.59 24.602 3.493 8.453.927 16.956 1.408 25.891 2.609 1.119 16.09 1.569 31.667 2.521 47.214.676 11.045 1.396 22.154 3.234 33.043 2.418 14.329 5.708 28.527 9.075 42.674 3.499 14.705 4.028 29.929 10.415 44.188 10.157 22.674 18.29 46.25 28.281 69.004 7.175 16.341 12.491 32.973 15.078 50.615.645 4.4 3.256 8.511 4.963 12.755z"></path><path fill="#ea5330" d="M1060.512 1122.031c-2.109-4.226-4.72-8.337-5.365-12.737-2.587-17.642-7.904-34.274-15.078-50.615-9.991-22.755-18.124-46.33-28.281-69.004-6.387-14.259-6.916-29.482-10.415-44.188-3.366-14.147-6.656-28.346-9.075-42.674-1.838-10.889-2.558-21.999-3.234-33.043-.951-15.547-1.401-31.124-2.068-47.146 8.568-.18 17.146.487 25.704.286l41.868-1.4c.907 3.746 1.245 7.04 1.881 10.276l8.651 42.704c.903 4.108 2.334 8.422 4.696 11.829 7.165 10.338 14.809 20.351 22.456 30.345 4.218 5.512 8.291 11.304 13.361 15.955 8.641 7.927 18.065 14.995 27.071 22.532 12.011 10.052 24.452 19.302 40.151 22.854-1.656 11.102-2.391 22.44-5.172 33.253-4.792 18.637-12.38 36.209-23.412 
52.216-13.053 18.94-29.086 34.662-49.627 45.055-10.757 5.443-22.443 9.048-34.111 13.501z"></path><path fill="#f8aa05" d="M1989.106 883.951c5.198 8.794 11.46 17.148 15.337 26.491 5.325 12.833 9.744 26.207 12.873 39.737 2.95 12.757 3.224 25.908 1.987 39.219-1.391 14.973-4.643 29.268-10.349 43.034-5.775 13.932-13.477 26.707-23.149 38.405-14.141 17.104-31.215 30.458-50.807 40.488-14.361 7.352-29.574 12.797-45.741 14.594-10.297 1.144-20.732 2.361-31.031 1.894-24.275-1.1-47.248-7.445-68.132-20.263-6.096-3.741-11.925-7.917-17.731-12.342 5.319-5.579 10.361-10.852 15.694-15.811l37.072-34.009c.975-.892 2.113-1.606 3.08-2.505 6.936-6.448 14.765-12.2 20.553-19.556 8.88-11.285 20.064-19.639 31.144-28.292 4.306-3.363 9.06-6.353 12.673-10.358 5.868-6.504 10.832-13.814 16.422-20.582 6.826-8.264 13.727-16.481 20.943-24.401 4.065-4.461 8.995-8.121 13.249-12.424 14.802-14.975 28.77-30.825 45.913-43.317z"></path><path fill="#ed6876" d="M1256.099 523.419c5.065.642 10.047 1.787 15.068 2.728 7.267 1.362 14.526 2.854 21.853 3.777 7.696.97 15.468 1.34 23.207 1.961 5.062.406 7.031 3.631 7.777 8.207 1.163 7.135 4.059 9.62 11.3 9.899l27.843.953c4.069.069 5.745 1.291 6.005 5.774.522 9.016 1.829 17.992 3.001 26.96.392 3 2.246 4.417 5.566 4.4 5.239-.026 10.48.469 16.385.778l-1.102 10.495-1.875 18.726c-.44 3.74.088 6.105 4.751 6.012 4.11-.082 8.239.568 12.352.975 2.823.28 3.984 1.892 3.794 4.753-.428 6.444-.052 13.028-1.276 19.306-.986 5.059.651 6.724 5.065 7.31 1.793.238 3.566.634 6.12 1.098 0 4.189.243 8.545-.068 12.861-.319 4.43-1.27 8.811-1.77 13.233-.285 2.528-.461 5.379 3.332 5.331 13.729-.173 11.316 2.968 12.332 14.419.658 7.423-1.066 15.045-1.554 22.589-.145 2.241.28 4.519.519 7.827 2.026-1.452 2.733-2.167 3.589-2.521 1.142-.472 2.713-1.244 3.518-.81.767.414 1.114 2.221 1.081 3.4l-.917 24.539c-11.215.82-22.45.899-33.636 1.674l-43.952 3.436c-1.086-3.01-2.319-5.571-2.296-8.121.084-9.297-4.468-16.583-9.091-24.116-3.872-6.308-8.764-13.052-9.479-19.987-1.071-10.392-5.716-15.936-14.889-18.979-1.097-.364-2.16-.844-3.214-1.327-7.478-3.428-15.548-5.918-19.059-14.735-.904-2.27-3.657-3.775-5.461-5.723-2.437-2.632-4.615-5.525-7.207-7.987-2.648-2.515-5.352-5.346-8.589-6.777-4.799-2.121-10.074-3.185-15.175-4.596l-15.785-4.155c.274-12.896 1.722-25.901.54-38.662-1.647-17.783-3.457-35.526-2.554-53.352.528-10.426 2.539-20.777 3.948-31.574z"></path><path fill="#f6a200" d="M525.146 311.436c4.597-9.898 8.947-19.725 14.251-29.008 9.691-16.963 18.49-34.73 30.354-50.08 13.309-17.218 24.195-36.434 41.054-51.019 9.93-8.591 18.335-18.989 27.131-28.837 13.774-15.422 30.943-26.386 48.036-37.528 11.459-7.469 22.688-15.29 34.243-23.286 11.705 16.744 19.716 35.424 22.534 55.717 2.231 16.066 2.236 32.441 2.753 49.143-4.756 1.62-9.284 2.234-13.259 4.056-6.43 2.948-12.193 7.513-18.774 9.942-19.863 7.331-33.806 22.349-47.926 36.784-7.86 8.035-13.511 18.275-19.886 27.705-4.434 6.558-9.345 13.037-12.358 20.254-4.249 10.177-6.94 21.004-10.296 31.553-12.33.053-24.741 1.027-36.971-.049-20.259-1.783-40.227-5.567-58.755-14.69-.568-.28-1.295-.235-2.132-.658z"></path><path fill="#f7a80d" d="M1989.057 883.598c-17.093 12.845-31.061 28.695-45.863 43.67-4.254 4.304-9.184 7.963-13.249 12.424-7.216 7.92-14.117 16.137-20.943 24.401-5.59 6.768-10.554 14.078-16.422 20.582-3.614 4.005-8.367 6.995-12.673 10.358-11.08 8.653-22.264 17.007-31.144 28.292-5.788 7.356-13.617 13.108-20.553 19.556-.967.899-2.105 1.614-3.08 2.505l-37.072 34.009c-5.333 4.96-10.375 10.232-15.859 15.505-21.401-17.218-37.461-38.439-48.623-63.592 3.503-1.781 7.117-2.604 9.823-4.637 8.696-6.536 
20.392-8.406 27.297-17.714.933-1.258 2.646-1.973 4.065-2.828 17.878-10.784 36.338-20.728 53.441-32.624 10.304-7.167 18.637-17.23 27.583-26.261 3.819-3.855 7.436-8.091 10.3-12.681 12.283-19.68 24.43-39.446 40.382-56.471 12.224-13.047 17.258-29.524 22.539-45.927 15.85 4.193 29.819 12.129 42.632 22.08 10.583 8.219 19.782 17.883 27.42 29.351z"></path><path fill="#ef7a72" d="M1479.461 758.907c1.872-13.734 4.268-27.394 6.525-41.076 2.218-13.45 8.26-24.279 19.615-32.423 5.099-3.657 9.667-8.701 12.999-14.038 6.646-10.643 15.183-19.135 25.083-26.631 5.888-4.459 11.817-9.176 16.547-14.774 5.906-6.99 13.974-9.306 21.753-12.365l51.48-19.549c.753 11.848.658 23.787 1.641 35.637 1.771 21.353 4.075 42.672 11.748 62.955.17.449.107.985-.019 2.158-6.945 4.134-13.865 7.337-20.437 11.143-3.935 2.279-7.752 5.096-10.869 8.384-6.011 6.343-11.063 13.624-17.286 19.727-9.096 8.92-12.791 20.684-18.181 31.587-.202.409-.072.984-.096 1.481-8.488-1.72-16.937-3.682-25.476-5.094-9.689-1.602-19.426-3.084-29.201-3.949-15.095-1.335-30.241-2.1-45.828-3.172z"></path><path fill="#e94e3b" d="M957.995 766.838c-20.337-5.467-38.791-14.947-55.703-27.254-8.2-5.967-15.451-13.238-22.958-20.37 2.969-3.504 5.564-6.772 8.598-9.563 7.085-6.518 11.283-14.914 15.8-23.153 4.933-8.996 10.345-17.743 14.966-26.892 2.642-5.231 5.547-11.01 5.691-16.611.12-4.651.194-8.932 2.577-12.742 8.52-13.621 15.483-28.026 18.775-43.704 2.11-10.049 7.888-18.774 7.81-29.825-.064-9.089 4.291-18.215 6.73-27.313 3.212-11.983 7.369-23.797 9.492-35.968 3.202-18.358 5.133-36.945 7.346-55.466l4.879-45.8c6.693.288 13.386.575 20.54 1.365.13 3.458-.41 6.407-.496 9.37l-1.136 42.595c-.597 11.552-2.067 23.058-3.084 34.59l-3.845 44.478c-.939 10.202-1.779 20.432-3.283 30.557-.96 6.464-4.46 12.646-1.136 19.383.348.706-.426 1.894-.448 2.864-.224 9.918-5.99 19.428-2.196 29.646.103.279-.033.657-.092.983l-8.446 46.205c-1.231 6.469-2.936 12.846-4.364 19.279-1.5 6.757-2.602 13.621-4.456 20.277-3.601 12.93-10.657 25.3-5.627 39.47.368 1.036.234 2.352.017 3.476l-5.949 30.123z"></path><path fill="#ea5043" d="M958.343 767.017c1.645-10.218 3.659-20.253 5.602-30.302.217-1.124.351-2.44-.017-3.476-5.03-14.17 2.026-26.539 5.627-39.47 1.854-6.656 2.956-13.52 4.456-20.277 1.428-6.433 3.133-12.81 4.364-19.279l8.446-46.205c.059-.326.196-.705.092-.983-3.794-10.218 1.972-19.728 2.196-29.646.022-.97.796-2.158.448-2.864-3.324-6.737.176-12.919 1.136-19.383 1.504-10.125 2.344-20.355 3.283-30.557l3.845-44.478c1.017-11.532 2.488-23.038 3.084-34.59.733-14.18.722-28.397 1.136-42.595.086-2.963.626-5.912.956-9.301 5.356-.48 10.714-.527 16.536-.081 2.224 15.098 1.855 29.734 1.625 44.408-.157 10.064 1.439 20.142 1.768 30.23.334 10.235-.035 20.49.116 30.733.084 5.713.789 11.418.861 17.13.054 4.289-.469 8.585-.702 12.879-.072 1.323-.138 2.659-.031 3.975l2.534 34.405-1.707 36.293-1.908 48.69c-.182 8.103.993 16.237.811 24.34-.271 12.076-1.275 24.133-1.787 36.207-.102 2.414-.101 5.283 1.06 7.219 4.327 7.22 4.463 15.215 4.736 23.103.365 10.553.088 21.128.086 31.693-11.44 2.602-22.84.688-34.106-.916-11.486-1.635-22.806-4.434-34.546-6.903z"></path><path fill="#eb5d19" d="M398.091 622.45c6.086.617 12.21 1.288 18.067 2.918 3.539.985 6.779 3.277 9.952 5.297 9.773 6.224 18.971 13.583 29.311 18.611 8.606 4.184 12.839 10.986 17.016 18.559l18.571 32.959c1.814 3.102 4.285 5.931 6.883 8.443 8.835 8.542 10.052 20.175 13.16 31.095 2.082 7.317 4.609 14.507 6.946 22.127-29.472 3.021-58.969 5.582-87.584 
15.222-1.185-2.302-1.795-4.362-2.769-6.233-4.398-8.449-6.703-18.174-14.942-24.299-2.511-1.866-5.103-3.814-7.047-6.218-8.358-10.332-17.028-20.276-28.772-26.973 4.423-11.478 9.299-22.806 13.151-34.473 4.406-13.348 6.724-27.18 6.998-41.313.098-5.093.643-10.176 1.06-15.722z"></path><path fill="#e94c32" d="M981.557 392.109c-1.172 15.337-2.617 30.625-4.438 45.869-2.213 18.521-4.144 37.108-7.346 55.466-2.123 12.171-6.28 23.985-9.492 35.968-2.439 9.098-6.794 18.224-6.73 27.313.078 11.051-5.7 19.776-7.81 29.825-3.292 15.677-10.255 30.082-18.775 43.704-2.383 3.81-2.458 8.091-2.577 12.742-.144 5.6-3.049 11.38-5.691 16.611-4.621 9.149-10.033 17.896-14.966 26.892-4.517 8.239-8.715 16.635-15.8 23.153-3.034 2.791-5.629 6.06-8.735 9.255-12.197-10.595-21.071-23.644-29.301-37.24-7.608-12.569-13.282-25.962-17.637-40.37 13.303-6.889 25.873-13.878 35.311-25.315.717-.869 1.934-1.312 2.71-2.147 5.025-5.405 10.515-10.481 14.854-16.397 6.141-8.374 10.861-17.813 17.206-26.008 8.22-10.618 13.657-22.643 20.024-34.466 4.448-.626 6.729-3.21 8.114-6.89 1.455-3.866 2.644-7.895 4.609-11.492 4.397-8.05 9.641-15.659 13.708-23.86 3.354-6.761 5.511-14.116 8.203-21.206 5.727-15.082 7.277-31.248 12.521-46.578 3.704-10.828 3.138-23.116 4.478-34.753l7.56-.073z"></path><path fill="#f7a617" d="M1918.661 831.99c-4.937 16.58-9.971 33.057-22.196 46.104-15.952 17.025-28.099 36.791-40.382 56.471-2.864 4.59-6.481 8.825-10.3 12.681-8.947 9.031-17.279 19.094-27.583 26.261-17.103 11.896-35.564 21.84-53.441 32.624-1.419.856-3.132 1.571-4.065 2.828-6.904 9.308-18.6 11.178-27.297 17.714-2.705 2.033-6.319 2.856-9.874 4.281-3.413-9.821-6.916-19.583-9.36-29.602-1.533-6.284-1.474-12.957-1.665-19.913 1.913-.78 3.374-1.057 4.81-1.431 15.822-4.121 31.491-8.029 43.818-20.323 9.452-9.426 20.371-17.372 30.534-26.097 6.146-5.277 13.024-10.052 17.954-16.326 14.812-18.848 28.876-38.285 43.112-57.581 2.624-3.557 5.506-7.264 6.83-11.367 2.681-8.311 4.375-16.94 6.476-25.438 17.89.279 35.333 3.179 52.629 9.113z"></path><path fill="#ea553a" d="M1172.91 977.582c-15.775-3.127-28.215-12.377-40.227-22.43-9.005-7.537-18.43-14.605-27.071-22.532-5.07-4.651-9.143-10.443-13.361-15.955-7.647-9.994-15.291-20.007-22.456-30.345-2.361-3.407-3.792-7.72-4.696-11.829-3.119-14.183-5.848-28.453-8.651-42.704-.636-3.236-.974-6.53-1.452-10.209 15.234-2.19 30.471-3.969 46.408-5.622 2.692 5.705 4.882 11.222 6.63 16.876 2.9 9.381 7.776 17.194 15.035 24.049 7.056 6.662 13.305 14.311 19.146 22.099 9.509 12.677 23.01 19.061 36.907 25.054-1.048 7.441-2.425 14.854-3.066 22.33-.956 11.162-1.393 22.369-2.052 33.557l-1.096 17.661z"></path><path fill="#ea5453" d="M1163.123 704.036c-4.005 5.116-7.685 10.531-12.075 15.293-12.842 13.933-27.653 25.447-44.902 34.538-3.166-5.708-5.656-11.287-8.189-17.251-3.321-12.857-6.259-25.431-9.963-37.775-4.6-15.329-10.6-30.188-11.349-46.562-.314-6.871-1.275-14.287-7.114-19.644-1.047-.961-1.292-3.053-1.465-4.67l-4.092-39.927c-.554-5.245-.383-10.829-2.21-15.623-3.622-9.503-4.546-19.253-4.688-29.163-.088-6.111 1.068-12.256.782-18.344-.67-14.281-1.76-28.546-2.9-42.8-.657-8.222-1.951-16.395-2.564-24.62-.458-6.137-.285-12.322-.104-18.21.959 5.831 1.076 11.525 2.429 16.909 2.007 7.986 5.225 15.664 7.324 23.632 3.222 12.23 1.547 25.219 6.728 37.355 4.311 10.099 6.389 21.136 9.732 31.669 2.228 7.02 6.167 13.722 7.121 20.863 1.119 8.376 6.1 13.974 10.376 20.716l2.026 10.576c1.711 9.216 3.149 18.283 8.494 26.599 6.393 9.946 11.348 20.815 16.943 31.276 4.021 7.519 6.199 16.075 12.925 22.065l24.462 22.26c.556.503 1.507.571 2.274.841z"></path><path fill="#ea5b15" d="M1285.092 
163.432c9.165 3.148 18.419 6.374 27.279 10.459 4.871 2.246 8.838 6.406 13.646 8.851 5.446 2.77 11.801 3.874 17.011 6.965 11.514 6.831 24.097 9.942 36.968 12.471 1.78.35 3.777.576 5.213 1.542 10.784 7.255 23.448 9.114 35.622 11.834 9.977 2.23 18.529 6.703 26.988 11.898 5.233 3.214 10.76 5.983 15.798 9.468 4.14 2.864 7.962 6.279 11.551 9.827 5.076 5.02 10.056 10.181 14.624 15.658 5.822 6.98 11.119 14.395 16.78 21.513 4.531 5.698 9.267 11.233 14.222 16.987-10.005 5.806-20.07 12.004-30.719 16.943-7.694 3.569-16.163 5.464-24.688 7.669-2.878-7.088-5.352-13.741-7.833-20.392-.802-2.15-1.244-4.55-2.498-6.396-4.548-6.7-9.712-12.999-14.011-19.847-6.672-10.627-15.34-18.93-26.063-25.376-9.357-5.625-18.367-11.824-27.644-17.587-6.436-3.997-12.902-8.006-19.659-11.405-5.123-2.577-11.107-3.536-16.046-6.37-17.187-9.863-35.13-17.887-54.031-23.767-4.403-1.37-8.953-2.267-13.436-3.382l.926-27.565z"></path><path fill="#ea504b" d="M1098 737l7.789 16.893c-15.04 9.272-31.679 15.004-49.184 17.995-9.464 1.617-19.122 2.097-29.151 3.019-.457-10.636-.18-21.211-.544-31.764-.273-7.888-.409-15.883-4.736-23.103-1.16-1.936-1.162-4.805-1.06-7.219l1.787-36.207c.182-8.103-.993-16.237-.811-24.34.365-16.236 1.253-32.461 1.908-48.69.484-12 .942-24.001 1.98-36.069 5.57 10.19 10.632 20.42 15.528 30.728 1.122 2.362 2.587 5.09 2.339 7.488-1.536 14.819 5.881 26.839 12.962 38.33 10.008 16.241 16.417 33.54 20.331 51.964 2.285 10.756 4.729 21.394 11.958 30.165L1098 737z"></path><path fill="#f6a320" d="M1865.78 822.529c-1.849 8.846-3.544 17.475-6.224 25.786-1.323 4.102-4.206 7.81-6.83 11.367l-43.112 57.581c-4.93 6.273-11.808 11.049-17.954 16.326-10.162 8.725-21.082 16.671-30.534 26.097-12.327 12.294-27.997 16.202-43.818 20.323-1.436.374-2.897.651-4.744.986-1.107-17.032-1.816-34.076-2.079-51.556 1.265-.535 2.183-.428 2.888-.766 10.596-5.072 20.8-11.059 32.586-13.273 1.69-.317 3.307-1.558 4.732-2.662l26.908-21.114c4.992-4.003 11.214-7.393 14.381-12.585 11.286-18.5 22.363-37.263 27.027-58.87l36.046 1.811c3.487.165 6.983.14 10.727.549z"></path><path fill="#ec6333" d="M318.448 922.814c-6.374-2.074-12.56-4.058-18.412-6.765-8.379-3.876-16.906-7.675-24.617-12.668-5.239-3.392-9.69-8.381-13.609-13.352-7.87-9.983-14.953-20.582-22.699-30.666-8.061-10.493-13.909-22.097-18.636-34.358-.595-1.543-1.486-2.972-2.382-4.783 6.84-1.598 13.797-3.023 20.807-4.106 18.852-2.912 36.433-9.493 53.737-17.819.697.888.889 1.555 1.292 2.051l17.921 21.896c4.14 4.939 8.06 10.191 12.862 14.412 5.67 4.984 12.185 9.007 18.334 13.447-8.937 16.282-16.422 33.178-20.696 51.31-1.638 6.951-2.402 14.107-3.903 21.403z"></path><path fill="#f49700" d="M623.467 326.903c2.893-10.618 5.584-21.446 9.833-31.623 3.013-7.217 7.924-13.696 12.358-20.254 6.375-9.43 12.026-19.67 19.886-27.705 14.12-14.434 28.063-29.453 47.926-36.784 6.581-2.429 12.344-6.994 18.774-9.942 3.975-1.822 8.503-2.436 13.186-3.592 1.947 18.557 3.248 37.15 8.307 55.686-15.453 7.931-28.853 18.092-40.46 29.996-10.417 10.683-19.109 23.111-28.013 35.175-3.238 4.388-4.888 9.948-7.262 14.973-17.803-3.987-35.767-6.498-54.535-5.931z"></path><path fill="#ea544c" d="M1097.956 736.615c-2.925-3.218-5.893-6.822-8.862-10.425-7.229-8.771-9.672-19.409-11.958-30.165-3.914-18.424-10.323-35.722-20.331-51.964-7.081-11.491-14.498-23.511-12.962-38.33.249-2.398-1.217-5.126-2.339-7.488l-15.232-31.019-3.103-34.338c-.107-1.316-.041-2.653.031-3.975.233-4.294.756-8.59.702-12.879-.072-5.713-.776-11.417-.861-17.13l-.116-30.733c-.329-10.088-1.926-20.166-1.768-30.23.23-14.674.599-29.31-1.162-44.341 9.369-.803 18.741-1.179 28.558-1.074 1.446 15.814 2.446 
31.146 3.446 46.478.108 6.163-.064 12.348.393 18.485.613 8.225 1.907 16.397 2.564 24.62l2.9 42.8c.286 6.088-.869 12.234-.782 18.344.142 9.91 1.066 19.661 4.688 29.163 1.827 4.794 1.657 10.377 2.21 15.623l4.092 39.927c.172 1.617.417 3.71 1.465 4.67 5.839 5.357 6.8 12.773 7.114 19.644.749 16.374 6.749 31.233 11.349 46.562 3.704 12.344 6.642 24.918 9.963 37.775z"></path><path fill="#ec5c61" d="M1204.835 568.008c1.254 25.351-1.675 50.16-10.168 74.61-8.598-4.883-18.177-8.709-24.354-15.59-7.44-8.289-13.929-17.442-21.675-25.711-8.498-9.072-16.731-18.928-21.084-31.113-.54-1.513-1.691-2.807-2.594-4.564-4.605-9.247-7.706-18.544-7.96-29.09-.835-7.149-1.214-13.944-2.609-20.523-2.215-10.454-5.626-20.496-7.101-31.302-2.513-18.419-7.207-36.512-5.347-55.352.24-2.43-.17-4.949-.477-7.402l-4.468-34.792c2.723-.379 5.446-.757 8.585-.667 1.749 8.781 2.952 17.116 4.448 25.399 1.813 10.037 3.64 20.084 5.934 30.017 1.036 4.482 3.953 8.573 4.73 13.064 1.794 10.377 4.73 20.253 9.272 29.771 2.914 6.105 4.761 12.711 7.496 18.912 2.865 6.496 6.264 12.755 9.35 19.156 3.764 7.805 7.667 15.013 16.1 19.441 7.527 3.952 13.713 10.376 20.983 14.924 6.636 4.152 13.932 7.25 20.937 10.813z"></path><path fill="#ed676f" d="M1140.75 379.231c18.38-4.858 36.222-11.21 53.979-18.971 3.222 3.368 5.693 6.744 8.719 9.512 2.333 2.134 5.451 5.07 8.067 4.923 7.623-.429 12.363 2.688 17.309 8.215 5.531 6.18 12.744 10.854 19.224 16.184-5.121 7.193-10.461 14.241-15.323 21.606-13.691 20.739-22.99 43.255-26.782 67.926-.543 3.536-1.281 7.043-2.366 10.925-14.258-6.419-26.411-14.959-32.731-29.803-1.087-2.553-2.596-4.93-3.969-7.355-1.694-2.993-3.569-5.89-5.143-8.943-1.578-3.062-2.922-6.249-4.295-9.413-1.57-3.621-3.505-7.163-4.47-10.946-1.257-4.93-.636-10.572-2.725-15.013-5.831-12.397-7.467-25.628-9.497-38.847z"></path><path fill="#ed656e" d="M1254.103 647.439c5.325.947 10.603 2.272 15.847 3.722 5.101 1.41 10.376 2.475 15.175 4.596 3.237 1.431 5.942 4.262 8.589 6.777 2.592 2.462 4.77 5.355 7.207 7.987 1.804 1.948 4.557 3.453 5.461 5.723 3.51 8.817 11.581 11.307 19.059 14.735 1.053.483 2.116.963 3.214 1.327 9.172 3.043 13.818 8.587 14.889 18.979.715 6.935 5.607 13.679 9.479 19.987 4.623 7.533 9.175 14.819 9.091 24.116-.023 2.55 1.21 5.111 1.874 8.055-19.861 2.555-39.795 4.296-59.597 9.09l-11.596-23.203c-1.107-2.169-2.526-4.353-4.307-5.975-7.349-6.694-14.863-13.209-22.373-19.723l-17.313-14.669c-2.776-2.245-5.935-4.017-8.92-6.003l11.609-38.185c1.508-5.453 1.739-11.258 2.613-17.336z"></path><path fill="#ec6168" d="M1140.315 379.223c2.464 13.227 4.101 26.459 9.931 38.856 2.089 4.441 1.468 10.083 2.725 15.013.965 3.783 2.9 7.325 4.47 10.946 1.372 3.164 2.716 6.351 4.295 9.413 1.574 3.053 3.449 5.95 5.143 8.943 1.372 2.425 2.882 4.803 3.969 7.355 6.319 14.844 18.473 23.384 32.641 30.212.067 5.121-.501 10.201-.435 15.271l.985 38.117c.151 4.586.616 9.162.868 14.201-7.075-3.104-14.371-6.202-21.007-10.354-7.269-4.548-13.456-10.972-20.983-14.924-8.434-4.428-12.337-11.637-16.1-19.441-3.087-6.401-6.485-12.66-9.35-19.156-2.735-6.201-4.583-12.807-7.496-18.912-4.542-9.518-7.477-19.394-9.272-29.771-.777-4.491-3.694-8.581-4.73-13.064-2.294-9.933-4.121-19.98-5.934-30.017-1.496-8.283-2.699-16.618-4.036-25.335 10.349-2.461 20.704-4.511 31.054-6.582.957-.191 1.887-.515 3.264-.769z"></path><path fill="#e94c28" d="M922 537c-6.003 11.784-11.44 23.81-19.66 34.428-6.345 8.196-11.065 17.635-17.206 26.008-4.339 5.916-9.828 10.992-14.854 16.397-.776.835-1.993 1.279-2.71 2.147-9.439 11.437-22.008 18.427-35.357 24.929-4.219-10.885-6.942-22.155-7.205-33.905l-.514-49.542c7.441-2.893 
14.452-5.197 21.334-7.841 1.749-.672 3.101-2.401 4.604-3.681 6.749-5.745 12.845-12.627 20.407-16.944 7.719-4.406 14.391-9.101 18.741-16.889.626-1.122 1.689-2.077 2.729-2.877 7.197-5.533 12.583-12.51 16.906-20.439.68-1.247 2.495-1.876 4.105-2.651 2.835 1.408 5.267 2.892 7.884 3.892 3.904 1.491 4.392 3.922 2.833 7.439-1.47 3.318-2.668 6.756-4.069 10.106-1.247 2.981-.435 5.242 2.413 6.544 2.805 1.282 3.125 3.14 1.813 5.601l-6.907 12.799L922 537z"></path><path fill="#eb5659" d="M1124.995 566c.868 1.396 2.018 2.691 2.559 4.203 4.353 12.185 12.586 22.041 21.084 31.113 7.746 8.269 14.235 17.422 21.675 25.711 6.176 6.881 15.756 10.707 24.174 15.932-6.073 22.316-16.675 42.446-31.058 60.937-1.074-.131-2.025-.199-2.581-.702l-24.462-22.26c-6.726-5.99-8.904-14.546-12.925-22.065-5.594-10.461-10.55-21.33-16.943-31.276-5.345-8.315-6.783-17.383-8.494-26.599-.63-3.394-1.348-6.772-1.738-10.848-.371-6.313-1.029-11.934-1.745-18.052l6.34 4.04 1.288-.675-2.143-15.385 9.454 1.208v-8.545L1124.995 566z"></path><path fill="#f5a02d" d="M1818.568 820.096c-4.224 21.679-15.302 40.442-26.587 58.942-3.167 5.192-9.389 8.582-14.381 12.585l-26.908 21.114c-1.425 1.104-3.042 2.345-4.732 2.662-11.786 2.214-21.99 8.201-32.586 13.273-.705.338-1.624.231-2.824.334a824.35 824.35 0 0 1-8.262-42.708c4.646-2.14 9.353-3.139 13.269-5.47 5.582-3.323 11.318-6.942 15.671-11.652 7.949-8.6 14.423-18.572 22.456-27.081 8.539-9.046 13.867-19.641 18.325-30.922l46.559 8.922z"></path><path fill="#eb5a57" d="M1124.96 565.639c-5.086-4.017-10.208-8.395-15.478-12.901v8.545l-9.454-1.208 2.143 15.385-1.288.675-6.34-4.04c.716 6.118 1.375 11.74 1.745 17.633-4.564-6.051-9.544-11.649-10.663-20.025-.954-7.141-4.892-13.843-7.121-20.863-3.344-10.533-5.421-21.57-9.732-31.669-5.181-12.135-3.506-25.125-6.728-37.355-2.099-7.968-5.317-15.646-7.324-23.632-1.353-5.384-1.47-11.078-2.429-16.909l-3.294-46.689a278.63 278.63 0 0 1 27.57-2.084c2.114 12.378 3.647 24.309 5.479 36.195 1.25 8.111 2.832 16.175 4.422 24.23 1.402 7.103 2.991 14.169 4.55 21.241 1.478 6.706.273 14.002 4.6 20.088 5.401 7.597 7.176 16.518 9.467 25.337 1.953 7.515 5.804 14.253 11.917 19.406.254 10.095 3.355 19.392 7.96 28.639z"></path><path fill="#ea541c" d="M911.651 810.999c-2.511 10.165-5.419 20.146-8.2 30.162-2.503 9.015-7.37 16.277-14.364 22.612-6.108 5.533-10.917 12.475-16.796 18.293-6.942 6.871-14.354 13.24-19.083 22.03-.644 1.196-2.222 1.889-3.705 2.857-2.39-7.921-4.101-15.991-6.566-23.823-5.451-17.323-12.404-33.976-23.414-48.835l21.627-21.095c3.182-3.29 5.532-7.382 8.295-11.083l10.663-14.163c9.528 4.78 18.925 9.848 28.625 14.247 7.324 3.321 15.036 5.785 22.917 8.799z"></path><path fill="#eb5d19" d="M1284.092 191.421c4.557.69 9.107 1.587 13.51 2.957 18.901 5.881 36.844 13.904 54.031 23.767 4.938 2.834 10.923 3.792 16.046 6.37 6.757 3.399 13.224 7.408 19.659 11.405l27.644 17.587c10.723 6.446 19.392 14.748 26.063 25.376 4.299 6.848 9.463 13.147 14.011 19.847 1.254 1.847 1.696 4.246 2.498 6.396l7.441 20.332c-11.685 1.754-23.379 3.133-35.533 4.037-.737-2.093-.995-3.716-1.294-5.33-3.157-17.057-14.048-30.161-23.034-44.146-3.027-4.71-7.786-8.529-12.334-11.993-9.346-7.116-19.004-13.834-28.688-20.491-6.653-4.573-13.311-9.251-20.431-13.002-8.048-4.24-16.479-7.85-24.989-11.091-11.722-4.465-23.673-8.328-35.527-12.449l.927-19.572z"></path><path fill="#eb5e24" d="M1283.09 211.415c11.928 3.699 23.88 7.562 35.602 12.027 8.509 3.241 16.941 6.852 24.989 11.091 7.12 3.751 13.778 8.429 20.431 13.002 9.684 6.657 19.342 13.375 28.688 20.491 4.548 3.463 9.307 7.283 12.334 11.993 8.986 13.985 19.877 27.089 23.034 
44.146.299 1.615.557 3.237.836 5.263-13.373-.216-26.749-.839-40.564-1.923-2.935-9.681-4.597-18.92-12.286-26.152-15.577-14.651-30.4-30.102-45.564-45.193-.686-.683-1.626-1.156-2.516-1.584l-47.187-22.615 2.203-20.546z"></path><path fill="#e9511f" d="M913 486.001c-1.29.915-3.105 1.543-3.785 2.791-4.323 7.929-9.709 14.906-16.906 20.439-1.04.8-2.103 1.755-2.729 2.877-4.35 7.788-11.022 12.482-18.741 16.889-7.562 4.317-13.658 11.199-20.407 16.944-1.503 1.28-2.856 3.009-4.604 3.681-6.881 2.643-13.893 4.948-21.262 7.377-.128-11.151.202-22.302.378-33.454.03-1.892-.6-3.795-.456-6.12 13.727-1.755 23.588-9.527 33.278-17.663 2.784-2.337 6.074-4.161 8.529-6.784l29.057-31.86c1.545-1.71 3.418-3.401 4.221-5.459 5.665-14.509 11.49-28.977 16.436-43.736 2.817-8.407 4.074-17.338 6.033-26.032 5.039.714 10.078 1.427 15.536 2.629-.909 8.969-2.31 17.438-3.546 25.931-2.41 16.551-5.84 32.839-11.991 48.461L913 486.001z"></path><path fill="#ea5741" d="M1179.451 903.828c-14.224-5.787-27.726-12.171-37.235-24.849-5.841-7.787-12.09-15.436-19.146-22.099-7.259-6.854-12.136-14.667-15.035-24.049-1.748-5.654-3.938-11.171-6.254-17.033 15.099-4.009 30.213-8.629 44.958-15.533l28.367 36.36c6.09 8.015 13.124 14.75 22.72 18.375-7.404 14.472-13.599 29.412-17.48 45.244-.271 1.106-.382 2.25-.895 3.583z"></path><path fill="#ea522a" d="M913.32 486.141c2.693-7.837 5.694-15.539 8.722-23.231 6.151-15.622 9.581-31.91 11.991-48.461l3.963-25.861c7.582.317 15.168 1.031 22.748 1.797 4.171.421 8.333.928 12.877 1.596-.963 11.836-.398 24.125-4.102 34.953-5.244 15.33-6.794 31.496-12.521 46.578-2.692 7.09-4.849 14.445-8.203 21.206-4.068 8.201-9.311 15.81-13.708 23.86-1.965 3.597-3.154 7.627-4.609 11.492-1.385 3.68-3.666 6.265-8.114 6.89-1.994-1.511-3.624-3.059-5.077-4.44l6.907-12.799c1.313-2.461.993-4.318-1.813-5.601-2.849-1.302-3.66-3.563-2.413-6.544 1.401-3.35 2.599-6.788 4.069-10.106 1.558-3.517 1.071-5.948-2.833-7.439-2.617-1-5.049-2.484-7.884-3.892z"></path><path fill="#eb5e24" d="M376.574 714.118c12.053 6.538 20.723 16.481 29.081 26.814 1.945 2.404 4.537 4.352 7.047 6.218 8.24 6.125 10.544 15.85 14.942 24.299.974 1.871 1.584 3.931 2.376 6.29-7.145 3.719-14.633 6.501-21.386 10.517-9.606 5.713-18.673 12.334-28.425 18.399-3.407-3.73-6.231-7.409-9.335-10.834l-30.989-33.862c11.858-11.593 22.368-24.28 31.055-38.431 1.86-3.031 3.553-6.164 5.632-9.409z"></path><path fill="#e95514" d="M859.962 787.636c-3.409 5.037-6.981 9.745-10.516 14.481-2.763 3.701-5.113 7.792-8.295 11.083-6.885 7.118-14.186 13.834-21.65 20.755-13.222-17.677-29.417-31.711-48.178-42.878-.969-.576-2.068-.934-3.27-1.709 6.28-8.159 12.733-15.993 19.16-23.849 1.459-1.783 2.718-3.738 4.254-5.448l18.336-19.969c4.909 5.34 9.619 10.738 14.081 16.333 9.72 12.19 21.813 21.566 34.847 29.867.411.262.725.674 1.231 1.334z"></path><path fill="#eb5f2d" d="M339.582 762.088l31.293 33.733c3.104 3.425 5.928 7.104 9.024 10.979-12.885 11.619-24.548 24.139-33.899 38.704-.872 1.359-1.56 2.837-2.644 4.428-6.459-4.271-12.974-8.294-18.644-13.278-4.802-4.221-8.722-9.473-12.862-14.412l-17.921-21.896c-.403-.496-.595-1.163-.926-2.105 16.738-10.504 32.58-21.87 46.578-36.154z"></path><path fill="#f28d00" d="M678.388 332.912c1.989-5.104 3.638-10.664 6.876-15.051 8.903-12.064 17.596-24.492 28.013-35.175 11.607-11.904 25.007-22.064 40.507-29.592 4.873 11.636 9.419 23.412 13.67 35.592-5.759 4.084-11.517 7.403-16.594 11.553-4.413 3.607-8.124 8.092-12.023 12.301-5.346 5.772-10.82 11.454-15.782 17.547-3.929 4.824-7.17 10.208-10.716 15.344l-33.95-12.518z"></path><path fill="#f08369" d="M1580.181 
"></path></svg><!-- decorative mixedbread ai logo (inline SVG path data trimmed) --> </p>
<p align="center">
<b>The crispy rerank family from <a href="https://mixedbread.ai"><b>mixedbread ai</b></a>.</b>
</p>

# mxbai-colbert-v1

This is our first English ColBERT model, built upon our sentence embedding model [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1). You can learn more about the models in our [blog post](https://www.mixedbread.ai/blog/mxbai-colbert-large-v1).

## Quickstart

We recommend using [RAGatouille](https://github.com/bclavie/RAGatouille) to work with our ColBERT model.

```sh
pip install ragatouille
```

```python
from ragatouille import RAGPretrainedModel

# Let's create a ragatouille instance
RAG = RAGPretrainedModel.from_pretrained("mixedbread-ai/mxbai-colbert-v1")

documents = [
    "'To Kill a Mockingbird' is a novel by Harper Lee published in 1960. It was immediately successful, winning the Pulitzer Prize, and has become a classic of modern American literature.",
    "The novel 'Moby-Dick' was written by Herman Melville and first published in 1851. It is considered a masterpiece of American literature and deals with complex themes of obsession, revenge, and the conflict between good and evil.",
    "Harper Lee, an American novelist widely known for her novel 'To Kill a Mockingbird', was born in 1926 in Monroeville, Alabama. She received the Pulitzer Prize for Fiction in 1961.",
    "Jane Austen was an English novelist known primarily for her six major novels, which interpret, critique and comment upon the British landed gentry at the end of the 18th century.",
    "The 'Harry Potter' series, which consists of seven fantasy novels written by British author J.K. Rowling, is among the most popular and critically acclaimed books of the modern era.",
    "'The Great Gatsby', a novel written by American author F. Scott Fitzgerald, was published in 1925. The story is set in the Jazz Age and follows the life of millionaire Jay Gatsby and his pursuit of Daisy Buchanan."
]

# index documents
RAG.index(documents, index_name="mockingbird")

# search
query = "Who wrote 'To Kill a Mockingbird'?"
results = RAG.search(query)
```

The result looks like this:

```
[
    {
        'content': "'To Kill a Mockingbird' is a novel by Harper Lee published in 1960. It was immediately successful, winning the Pulitzer Prize, and has become a classic of modern American literature.",
        'score': 28.453125,
        'rank': 1,
        'document_id': '9d564e82-f14f-433a-ab40-b10bda9dc370',
        'passage_id': 0
    },
    {
        'content': "Harper Lee, an American novelist widely known for her novel 'To Kill a Mockingbird', was born in 1926 in Monroeville, Alabama. She received the Pulitzer Prize for Fiction in 1961.",
        'score': 27.03125,
        'rank': 2,
        'document_id': 'a35a89c3-b610-4e2e-863e-fa1e7e0710a6',
        'passage_id': 2
    },
    ...
]
```

## Evaluation

### 1. Reranking Performance

**Setup:** We use BM25 as the first-stage retrieval model and then use ColBERT for reranking. We evaluate the out-of-domain performance on 13 public BEIR datasets. Following common practice, we report NDCG@10 as the metric. Here, we compare our model with two widely used ColBERT models:

| Dataset | ColBERTv2 | Jina-ColBERT-v1 | mxbai-colbert-large-v1 |
| ---------------- | --------: | --------------: | ---------------------: |
| ArguAna | 29.99 | **33.42** | 33.11 |
| ClimateFEVER | 16.51 | 20.66 | **20.85** |
| DBPedia | 31.80 | **42.16** | 40.61 |
| FEVER | 65.13 | **81.07** | 80.75 |
| FiQA | 23.61 | 35.60 | **35.86** |
| HotPotQA | 63.30 | **68.84** | 67.62 |
| NFCorpus | 33.75 | **36.69** | 36.37 |
| NQ | 30.55 | 51.27 | **51.43** |
| Quora | 78.86 | 85.18 | **86.95** |
| SCIDOCS | 14.90 | 15.39 | **16.98** |
| SciFact | 67.89 | 70.2 | **71.48** |
| TREC-COVID | 59.47 | 75.00 | **81.04** |
| Webis-touché2020 | **44.22** | 32.12 | 31.70 |
| Average | 43.08 | 49.82 | **50.37** |

Find more in our [blog post](https://www.mixedbread.ai/blog/mxbai-colbert-large-v1).

### 2. Retrieval Performance

We also test our model's performance on retrieval tasks on a subset of the BEIR benchmarks. We'll be providing the full results for the benchmark soon (we are actively working on it). Due to resource limitations, we only test our model on three BEIR tasks. NDCG@10 serves as the main metric.

| Dataset | ColBERTv2 | Jina-ColBERT-V1 | mxbai-colbert-large-v1 |
| ---------- | --------: | --------------: | ---------------------: |
| NFCorpus | 33.7 | 33.8 | **36.5** |
| SciFact | 68.9 | 70.1 | **71.3** |
| TREC-COVID | 72.6 | 75.0 | **80.5** |

Although our ColBERT also performs well on retrieval tasks, we still recommend using our flagship embedding model [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) for that.

## Community

Please join our [Discord Community](https://discord.gg/jDfMHzAVfU) and share your feedback and thoughts! We are here to help and always happy to chat.

## License

Apache 2.0
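For the BM25-then-rerank setup evaluated above, RAGatouille also exposes an index-free reranking path. A minimal sketch follows; the `rerank` call mirrors RAGatouille's documented API at the time of writing, so verify the signature against your installed version:

```python
from ragatouille import RAGPretrainedModel

RAG = RAGPretrainedModel.from_pretrained("mixedbread-ai/mxbai-colbert-v1")

# In a two-stage pipeline these candidates would come from a first-stage
# retriever such as BM25; here they are hard-coded for illustration.
candidates = [
    "'To Kill a Mockingbird' is a novel by Harper Lee published in 1960.",
    "Jane Austen was an English novelist known primarily for her six major novels.",
]

reranked = RAG.rerank(
    query="Who wrote 'To Kill a Mockingbird'?",
    documents=candidates,
    k=2,
)
print(reranked)  # list of {content, score, rank, ...} dicts, best match first
```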
protectai/distilroberta-bias-onnx
protectai
"2024-03-25T12:05:45Z"
6,122
0
transformers
[ "transformers", "onnx", "roberta", "text-classification", "bias", "distilroberta", "en", "dataset:valurank/wikirev-bias", "base_model:valurank/distilroberta-bias", "autotrain_compatible", "region:us" ]
text-classification
"2023-11-12T20:23:22Z"
---
pipeline_tag: text-classification
language: en
datasets:
- valurank/wikirev-bias
inference: false
tags:
- bias
- distilroberta
base_model: valurank/distilroberta-bias
---

# ONNX version of valurank/distilroberta-bias

**This model is a conversion of [valurank/distilroberta-bias](https://huggingface.co/valurank/distilroberta-bias) to ONNX format.** It is designed to detect bias in text using a distilled version of the RoBERTa model. The model was converted to ONNX using the [🤗 Optimum](https://huggingface.co/docs/optimum/index) library.

## Model Architecture

**Base Model**: DistilRoBERTa, a distilled version of the RoBERTa model that is optimized for faster performance while maintaining similar accuracy.

**Modifications**: The model is converted to ONNX format with no additional changes.

## Usage

### Optimum

Loading the model requires the [🤗 Optimum](https://huggingface.co/docs/optimum/index) library to be installed.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("protectai/distilroberta-bias-onnx")
model = ORTModelForSequenceClassification.from_pretrained("protectai/distilroberta-bias-onnx")
classifier = pipeline(
    task="text-classification",
    model=model,
    tokenizer=tokenizer,
)

classifier_output = classifier("Your text to analyze for bias.")
# probability that the text is biased, regardless of the predicted label
score = (classifier_output[0]["score"] if classifier_output[0]["label"] == "BIASED" else 1 - classifier_output[0]["score"])
```

### LLM Guard

[Bias scanner](https://llm-guard.com/output_scanners/bias/)

## Community

Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions, or engage in discussions about LLM security!

<a href="https://join.slack.com/t/laiyerai/shared_invite/zt-28jv3ci39-sVxXrLs3rQdaN3mIl9IT~w"><img src="https://github.com/laiyer-ai/llm-guard/blob/main/docs/assets/join-our-slack-community.png?raw=true" width="200"></a>
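The [Bias scanner](https://llm-guard.com/output_scanners/bias/) linked above is LLM Guard's wrapper around this model; a hedged sketch of that path follows (the `scan` interface is taken from LLM Guard's documentation, so verify it against your installed version):

```python
# Hedged sketch: scan a model output for bias with LLM Guard, which uses this
# ONNX model under the hood. scan() follows LLM Guard's documented
# output-scanner interface: (sanitized_output, is_valid, risk_score).
from llm_guard.output_scanners import Bias

scanner = Bias()
prompt = "Summarize the article."
model_output = "Your text to analyze for bias."
sanitized_output, is_valid, risk_score = scanner.scan(prompt, model_output)
print(is_valid, risk_score)
```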
duyntnet/notus-7b-v1-imatrix-GGUF
duyntnet
"2024-06-17T01:55:11Z"
6,119
0
transformers
[ "transformers", "gguf", "imatrix", "notus-7b-v1", "text-generation", "en", "license:other", "region:us" ]
text-generation
"2024-06-16T23:47:15Z"
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- notus-7b-v1
---

Quantizations of https://huggingface.co/argilla/notus-7b-v1

# From original readme

## Prompt template

We use the same prompt template as [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta):

```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```

## Usage

You will first need to install `transformers` and `accelerate` (just to ease the device placement), then you can run any of the following:

### Via `generate`

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("argilla/notus-7b-v1", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("argilla/notus-7b-v1")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant super biased towards Argilla, a data annotation company.",
    },
    {"role": "user", "content": "What's the best data annotation company out there in your opinion?"},
]
# build the chat-formatted input ids from `messages` and move them to the model device
inputs = tokenizer.apply_chat_template(messages, tokenize=True, return_tensors="pt", add_special_tokens=False, add_generation_prompt=True).to(model.device)
outputs = model.generate(inputs, num_return_sequences=1, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```

### Via `pipeline` method

```python
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="argilla/notus-7b-v1", torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant super biased towards Argilla, a data annotation company.",
    },
    {"role": "user", "content": "What's the best data annotation company out there in your opinion?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
generated_text = outputs[0]["generated_text"]
```
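Since the files in this repository are GGUF quantizations rather than the original safetensors, here is a hedged sketch for running one locally with `llama-cpp-python`; the filename is an assumption, so pick a quant actually present in this repo's file list:

```python
# Hedged sketch (not from the original readme): load a downloaded GGUF quant
# and prompt it with the zephyr-style template shown above.
from llama_cpp import Llama

llm = Llama(model_path="notus-7b-v1.Q4_K_M.gguf", n_ctx=2048)  # filename assumed

prompt = "<|system|>\n</s>\n<|user|>\nWhat is data annotation?</s>\n<|assistant|>\n"
out = llm(prompt, max_tokens=256, temperature=0.7, top_p=0.95)
print(out["choices"][0]["text"])
```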
stablediffusionapi/disney-pixal-cartoon-a
stablediffusionapi
"2024-01-11T09:51:58Z"
6,116
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-01-11T09:50:09Z"
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# disney-pixal-cartoon-a API Inference

![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/1853555661704965471.png)

## Get API Key

Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed. Replace the key in the code below and set **model_id** to "disney-pixal-cartoon-a".

Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)

Try model for free: [Generate Images](https://modelslab.com/models/disney-pixal-cartoon-a)

Model link: [View model](https://modelslab.com/models/disney-pixal-cartoon-a)

View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "disney-pixal-cartoon-a",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
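The endpoint responds with JSON; a hedged sketch for pulling out the generated image URLs follows. The `status` and `output` keys are assumptions based on typical ModelsLab responses, so check the linked docs for the authoritative schema:

```python
# Hedged sketch: extract image URLs from the JSON response. Key names
# ("status", "output") are assumed -- verify against the ModelsLab docs.
data = response.json()
if data.get("status") == "success":
    for image_url in data.get("output", []):
        print(image_url)
else:
    print("Generation pending or failed:", data)
```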
tarteel-ai/whisper-base-ar-quran
tarteel-ai
"2022-12-13T16:49:54Z"
6,114
21
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-12-08T21:04:00Z"
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-base-ar-quran
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper-base-ar-quran

This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.0839
- Wer: 5.7544

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1092 | 0.05 | 250 | 0.1969 | 13.3890 |
| 0.0361 | 0.1 | 500 | 0.1583 | 10.6375 |
| 0.0192 | 0.15 | 750 | 0.1109 | 8.8468 |
| 0.0144 | 0.2 | 1000 | 0.1157 | 7.9754 |
| 0.008 | 0.25 | 1250 | 0.1000 | 7.5360 |
| 0.0048 | 1.03 | 1500 | 0.0933 | 6.8227 |
| 0.0113 | 1.08 | 1750 | 0.0955 | 6.9638 |
| 0.0209 | 1.13 | 2000 | 0.0824 | 6.3586 |
| 0.0043 | 1.18 | 2250 | 0.0830 | 6.3444 |
| 0.002 | 1.23 | 2500 | 0.1015 | 6.3025 |
| 0.0013 | 2.01 | 2750 | 0.0863 | 6.0639 |
| 0.0014 | 2.06 | 3000 | 0.0905 | 6.0213 |
| 0.0018 | 2.11 | 3250 | 0.0864 | 6.0293 |
| 0.0008 | 2.16 | 3500 | 0.0887 | 5.9308 |
| 0.0029 | 2.21 | 3750 | 0.0777 | 5.9159 |
| 0.0022 | 2.26 | 4000 | 0.0847 | 5.8749 |
| 0.0005 | 3.05 | 4250 | 0.0827 | 5.8352 |
| 0.0003 | 3.1 | 4500 | 0.0826 | 5.7800 |
| 0.0006 | 3.15 | 4750 | 0.0833 | 5.7625 |
| 0.0003 | 3.2 | 5000 | 0.0839 | 5.7544 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
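The card above documents training only; a minimal transcription sketch with the 🤗 Transformers `pipeline` API follows (not part of the auto-generated card, and the audio file path is a placeholder):

```python
# Minimal inference sketch: transcribe an Arabic recitation clip with the
# fine-tuned checkpoint. "recitation.mp3" is a placeholder path.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="tarteel-ai/whisper-base-ar-quran")
print(asr("recitation.mp3")["text"])
```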
mradermacher/Sycho-8B-L3-RP-i1-GGUF
mradermacher
"2024-06-18T11:10:02Z"
6,109
1
transformers
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:Hastagaras/Sycho-8B-L3-RP", "endpoints_compatible", "region:us" ]
null
"2024-06-18T07:43:14Z"
--- base_model: Hastagaras/Sycho-8B-L3-RP language: - en library_name: transformers quantized_by: mradermacher tags: - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> weighted/imatrix quants of https://huggingface.co/Hastagaras/Sycho-8B-L3-RP <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Sycho-8B-L3-RP-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Sycho-8B-L3-RP-i1-GGUF/resolve/main/Sycho-8B-L3-RP.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
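To try one of the quants above from Python, here is a hedged sketch that fetches the "fast, recommended" Q4_K_M file with `huggingface_hub` and loads it with `llama-cpp-python` (the tool choice is an assumption; any GGUF-capable runtime such as llama.cpp works):

```python
# Sketch (not from the original card): download the recommended Q4_K_M quant
# listed in the table above and run a short completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Sycho-8B-L3-RP-i1-GGUF",
    filename="Sycho-8B-L3-RP.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```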
thibautweber/fraud_detection
thibautweber
"2024-06-28T15:36:43Z"
6,107
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "mistral", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-27T22:22:17Z"
---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---

# Uploaded model

- **Developed by:** thibautweber
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Helsinki-NLP/opus-mt-en-hu
Helsinki-NLP
"2023-08-16T11:29:54Z"
6,105
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-hu * source languages: en * target languages: hu * OPUS readme: [en-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-hu/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.hu | 40.1 | 0.628 |
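The card above reports benchmarks but no usage snippet; a minimal sketch with the 🤗 Transformers `pipeline` API follows (not part of the original OPUS-MT card):

```python
# Minimal sketch: English -> Hungarian translation with the MarianMT checkpoint.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-hu")
result = translator("Machine translation has improved a lot in recent years.")
print(result[0]["translation_text"])
```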
togethercomputer/RedPajama-INCITE-Chat-3B-v1
togethercomputer
"2023-05-09T14:59:47Z"
6,104
147
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "en", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:OpenAssistant/oasst1", "dataset:databricks/databricks-dolly-15k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-05T05:06:37Z"
---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
widget:
- text: "<human>: Write an email to my friends inviting them to come to my home on Friday for a dinner party, bring their own food to share.\n<bot>:"
  example_title: "Email Writing"
- text: "<human>: Create a list of things to do in San Francisco\n<bot>:"
  example_title: "Brainstorming"
inference:
  parameters:
    temperature: 0.7
    top_p: 0.7
    top_k: 50
    max_new_tokens: 128
---

# RedPajama-INCITE-Chat-3B-v1

RedPajama-INCITE-Chat-3B-v1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), the Stanford Hazy Research group, and LAION. It is fine-tuned on OASST1 and Dolly2 to enhance chatting ability.

- Base Model: [RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1)
- Instruction-tuned Version: [RedPajama-INCITE-Instruct-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1)
- Chat Version: [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1)

## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B parameter pretrained language model.

# Quick Start

Please note that the model requires `transformers` version >= 4.25.1.

To prompt the chat model, use the following format:
```
<human>: [Instruction]
<bot>:
```

## GPU Inference

This requires a GPU with 8GB memory.

```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')

# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Turing was a British mathematician, logician, cryptologist, and computer scientist. He is widely regarded as the father of computer science and artificial intelligence.
"""
```

## GPU Inference in Int8

This requires a GPU with 6GB memory.

To run inference with int8, please ensure you have installed accelerate and bitsandbytes. You can install them with the following command:

```bash
pip install accelerate
pip install bitsandbytes
```

Then you can run inference with int8 as follows:

```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True) # infer prompt = "<human>: Who is Alan Turing?\n<bot>:" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) """ Alan Turing was a British mathematician and computer scientist who made important contributions to computer science and mathematical logic. He is widely regarded as the father of computer science and artificial intelligence for his work on the Turing machine and Turing test. """ ``` ## CPU Inference ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1", torch_dtype=torch.bfloat16) # infer prompt = "<human>: Who is Alan Turing?\n<bot>:" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) """ Alan Turing was a British mathematician and computer scientist who made important contributions to the fields of mathematics, cryptography, and computer science. He is widely regarded as the father of computer science and artificial intelligence. """ ``` Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference. # Uses Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner. #### Out-of-Scope Use `RedPajama-INCITE-Chat-3B-v1` is a language model and may not perform well for other use cases outside of its intended scope. For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society. It is important to consider the limitations of the model and to only use it for its intended purpose. #### Misuse and Malicious Use `RedPajama-INCITE-Chat-3B-v1` is designed for language modeling. Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project. Using the model to generate content that is cruel to individuals is a misuse of this model. 
This includes, but is not limited to: - Generating fake news, misinformation, or propaganda - Promoting hate speech, discrimination, or violence against individuals or groups - Impersonating individuals or organizations without their consent - Engaging in cyberbullying or harassment - Defamatory content - Spamming or scamming - Sharing confidential or sensitive information without proper authorization - Violating the terms of use of the model or the data used to train it - Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming ## Limitations `RedPajama-INCITE-Chat-3B-v1`, like other language models, has limitations that should be taken into consideration. For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data. We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot. ## Training **Training Data** Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) **Training Procedure** - **Hardware:** 8 A100 - **Optimizer:** Adam - **Gradient Accumulations**: 1 - **Num of Tokens:** 131M tokens - **Learning rate:** 1e-5 ## Community Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
epfl-llm/meditron-7b
epfl-llm
"2023-12-07T19:38:26Z"
6,102
239
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "dataset:epfl-llm/guidelines", "arxiv:2311.16079", "base_model:meta-llama/Llama-2-7b", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-08T16:03:23Z"
---
license: llama2
language:
- en
metrics:
- accuracy
- perplexity
datasets:
- epfl-llm/guidelines
base_model: meta-llama/Llama-2-7b
---

<img width=50% src="meditron_LOGO.png" alt="Alt text" title="Meditron-logo">

# Model Card for Meditron-7B-v1.0

Meditron is a suite of open-source medical Large Language Models (LLMs). Meditron-7B is a 7 billion parameter model adapted to the medical domain from Llama-2-7B through continued pretraining on a comprehensively curated medical corpus, including selected PubMed articles, abstracts, a [new dataset](https://huggingface.co/datasets/epfl-llm/guidelines) of internationally-recognized medical guidelines, and general domain data from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). Meditron-7B, finetuned on relevant training data, outperforms Llama-2-7B and PMC-Llama on multiple medical reasoning tasks.

<details open>
<summary><strong>Advisory Notice</strong></summary>
<blockquote style="padding: 10px; margin: 0 0 10px; border-left: 5px solid #ddd;">
While Meditron is designed to encode medical knowledge from sources of high-quality evidence, it is not yet adapted to deliver this knowledge appropriately, safely, or within professional actionable constraints. We recommend against deploying Meditron in medical applications without extensive use-case alignment, as well as additional testing, specifically including randomized controlled trials in real-world practice settings.
</blockquote>
</details>

## Model Details

- **Developed by:** [EPFL LLM Team](https://huggingface.co/epfl-llm)
- **Model type:** Causal decoder-only transformer language model
- **Language(s):** English (mainly)
- **Model License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Code License:** [APACHE 2.0 LICENSE](LICENSE)
- **Continue-pretrained from model:** [Llama-2-7B](https://huggingface.co/meta-llama/Llama-2-7b)
- **Context length:** 2K tokens
- **Input:** Text-only data
- **Output:** Model generates text only
- **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance.
- **Knowledge Cutoff:** August 2023

### Model Sources

- **Repository:** [epflLLM/meditron](https://github.com/epfLLM/meditron)
- **Trainer:** [epflLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM)
- **Paper:** *[MediTron-70B: Scaling Medical Pretraining for Large Language Models](https://arxiv.org/abs/2311.16079)*

## Uses

Meditron-7B is being made available for further testing and assessment as an AI assistant to enhance clinical decision-making and broaden access to an LLM for healthcare use. Potential use cases may include but are not limited to:
- Medical exam question answering
- Supporting differential diagnosis
- Disease information (symptoms, cause, treatment) query
- General health information query

### Direct Use

It is possible to use this model to generate text, which is useful for experimentation and understanding its capabilities. It should not be used directly for production or work that may impact people.

### Downstream Use

Meditron-70B and Meditron-7B are both foundation models without finetuning or instruction-tuning. They can be finetuned, instruction-tuned, or RLHF-tuned for specific downstream tasks and applications. There are two ways we have used this model for downstream question-answering tasks.

1. We apply in-context learning with k demonstrations (3 or 5 in our paper) added to the prompt.
2. We finetuned the models for downstream question-answering tasks using specific training sets.

We encourage and look forward to the adaptation of the base model for more diverse applications.

If you want a more interactive way to prompt the model, we recommend using a high-throughput and memory-efficient inference engine with a UI that supports chat and text generation. You can check out our deployment [guide](https://github.com/epfLLM/meditron/blob/main/deployment/README.md), where we used [FastChat](https://github.com/lm-sys/FastChat) with [vLLM](https://github.com/vllm-project/vllm). We collected generations for our qualitative analysis through an interactive UI platform, [BetterChatGPT](https://github.com/ztjhz/BetterChatGPT). Here is the prompt format we used as an example:

<img width=70% src="prompt_example.png" alt="qualitative-analysis-prompt" title="Qualitative Analysis Prompt">

### Out-of-Scope Use

We do not recommend using this model for natural language generation in a production environment, finetuned or otherwise.

## Truthfulness, Helpfulness, Risk, and Bias

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

We did an initial assessment of Meditron models' **Truthfulness** against baseline models and consumer-level medical models. We use TruthfulQA (multiple choice) as the main evaluation benchmark. We only focus on the categories that are relevant to the medical domain, including Health, Nutrition, Psychology, and Science.

For 7B models, we perform one-shot evaluations for consistent answer generation. For 70B models, the evaluations are under the zero-shot setting. Below, we report the detailed truthfulness performance of each category.

| Category | meditron-70b | llama-2-70b | med42-70b* | meditron-7b | llama-2-7b | PMC-llama-7b |
| --- | ------ | ----- | ----- | ----- | ----- | ----- |
| Health | 81.8 | 69.1 | 83.6 | 27.3 | 16.4 | 3.6 |
| Nutrition | 77.9 | 68.8 | 62.5 | 31.1 | 12.5 | 6.3 |
| Psychology | 47.4 | 36.8 | 52.6 | 21.1 | 10.5 | 0.0 |
| Science | 77.8 | 44.4 | 33.3 | 33.3 | 11.1 | 0.0 |
| Avg | 71.2 | 54.8 | 58.0 | 28.3 | 12.6 | 2.5 |

For a more detailed performance analysis, please see our paper.

Significant research is still required to fully explore potential bias, fairness, and safety issues with this language model. Please recognize that our evaluation of Meditron-7B's helpfulness, risk, and bias is highly limited. Thus, as we noted in the safety notice, we strongly advise against any deployment in medical applications without a further alignment process and rigorous evaluation!

### Recommendations

**IMPORTANT!** Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. While this model is capable of generating natural language text, we have only begun to explore this capability and its limitations. Understanding these limitations is especially important in a domain like medicine. Therefore, we strongly recommend against using this model in production for natural language generation or for professional purposes related to health and medicine.

## Training Details

### Training Data

Meditron’s domain-adaptive pre-training corpus GAP-Replay combines 48.1B tokens from four corpora:

- [**Clinical Guidelines**](https://huggingface.co/datasets/epfl-llm/guidelines): a new dataset of 46K internationally-recognized clinical practice guidelines from various healthcare-related sources, including hospitals and international organizations.
- **Medical Paper Abstracts**: 16.1M abstracts extracted from closed-access PubMed and PubMed Central papers.
- **Medical Papers**: full-text articles extracted from 5M publicly available PubMed and PubMed Central papers.
- **Replay Data**: 400M tokens of general domain pretraining data sampled from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)

<img width=75% src="gap-replay.png" alt="Alt text" title="Meditron-logo">

#### Data Preprocessing

Please see the detailed preprocessing procedure in our paper.

### Training Procedure

We used the [Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) distributed training library, a derivative of Nvidia's Megatron LM project, to optimize training efficiency. Hardware consists of 1 node of 8x NVIDIA A100 (80GB) SXM GPUs connected by NVLink and NVSwitch with a single Nvidia ConnectX-6 DX network card and equipped with 2 x AMD EPYC 7543 32-Core Processors and 512 GB of RAM.

Our three-way parallelism scheme uses:

- Data Parallelism (DP -- different GPUs process different subsets of the batches) of 2,
- Pipeline Parallelism (PP -- different GPUs process different layers) of 4,
- Tensor Parallelism (TP -- different GPUs process different subtensors for matrix multiplication) of 1.

#### Training Hyperparameters

| Hyperparameter | Value |
| --- | ------ |
| bf16 | true |
| lr | 3e-4 |
| eps | 1e-5 |
| betas | \[0.9, 0.95\] |
| clip_grad | 1 |
| weight decay | 0.1 |
| DP size | 16 |
| TP size | 4 |
| PP size | 1 |
| seq length | 2048 |
| lr scheduler | cosine |
| min lr | 1e-6 |
| warmup iteration | 2000 |
| micro batch size | 10 |
| global batch size | 1600 |

#### Sizes

The model was trained in September 2023. The model architecture is exactly Llama 2, meaning

| | |
| --- | ------ |
| Model size | 7B |
| Hidden dimension | 4096 |
| Num. attention heads | 32 |
| Num. layers | 32 |

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data & Metrics

#### Testing Data

- [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA](https://huggingface.co/datasets/medmcqa)
- [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa)
- [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu)
- [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)

#### Metrics

- Accuracy: suited to the evaluation of multiple-choice question-answering tasks.

### Results

We finetune meditron-7b, llama-2-7b, pmc-llama-7b on each benchmark (pubmedqa, medmcqa, medqa)'s training data individually. We report the finetuned models' performance with top token selection as the inference mode. For MMLU-Medical, models finetuned on MedMCQA are used for inference. For MedQA-4-Option, models finetuned on MedQA are used for inference. For a more detailed performance analysis, please see our paper.

| Dataset | meditron-7b | llama-2-7b | pmc-llama-7b | Zephyr-7B-beta* | Mistral-7B-instruct* |
| --- | ------ | ----- | ----- | ----- | ----- |
| MMLU-Medical | 54.2 | 53.7 | 56.4 | 63.3 | 60.0 |
| PubMedQA | 74.4 | 61.8 | 59.2 | 46.0 | 17.8 |
| MedMCQA | 59.2 | 54.4 | 57.6 | 43.0 | 40.2 |
| MedQA | 47.9 | 44.0 | 42.4 | 42.8 | 32.4 |
| MedQA-4-Option | 52.0 | 49.6 | 49.2 | 48.5 | 41.1 |
| Avg | 57.5 | 52.7 | 53.0 | 48.7 | 38.3 |

**Note**: models with * are already instruction-tuned, so we exclude them from further finetuning on any training data.
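For quick experimentation consistent with the Direct Use section above (and emphatically not for clinical deployment), a minimal generation sketch with 🤗 Transformers follows; the prompt is a placeholder:

```python
# Minimal experimentation sketch (see Direct Use and the advisory notice):
# plain causal generation with the base model, not tuned for safe delivery.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("epfl-llm/meditron-7b")
model = AutoModelForCausalLM.from_pretrained(
    "epfl-llm/meditron-7b", torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Common causes of iron-deficiency anemia include", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```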
## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

- **Hardware Type:** 8 x NVIDIA A100 (80GB) SXM
- **Total GPU hours:** 588.8
- **Hardware Provider:** EPFL Research Computing Platform
- **Compute Region:** Switzerland
- **Carbon Emitted:** Switzerland has a carbon efficiency of 0.016 kgCO2/kWh (https://www.carbonfootprint.com/docs/2018_8_electricity_factors_august_2018_-_online_sources.pdf). 73.6 hours on 8 A100s means 588.8 GPU-hours at a TDP of 400W. Assuming a Power Usage Effectiveness (PUE) of 1.8, total emissions are estimated to be: (0.4 kW per GPU × 73.6 h × 8 GPUs × 0.016 kgCO2/kWh) × 1.8 PUE ≈ 6.8 kgCO2.

## Citation

**BibTeX:** If you use Meditron or its training data, please cite our work:

```
@misc{chen2023meditron70b,
      title={MEDITRON-70B: Scaling Medical Pretraining for Large Language Models},
      author={Zeming Chen and Alejandro Hernández-Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut},
      year={2023},
      eprint={2311.16079},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@software{epfmedtrn,
  author = {Zeming Chen and Alejandro Hernández-Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut},
  title = {MediTron-70B: Scaling Medical Pretraining for Large Language Models},
  month = nov,
  year = 2023,
  url = {https://github.com/epfLLM/meditron}
}
```
01-ai/Yi-6B-Chat
01-ai
"2024-06-26T10:25:33Z"
6,102
61
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2403.04652", "arxiv:2311.16502", "arxiv:2401.11944", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-22T09:52:53Z"
---
license: apache-2.0
widget:
- example_title: "Yi-34B-Chat"
  text: "hi"
  output:
    text: " Hello! How can I assist you today?"
- example_title: "Yi-34B"
  text: "There's a place where time stands still. A place of breath taking wonder, but also"
  output:
    text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?"
pipeline_tag: text-generation
---

<div align="center">

<picture>
  <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px">
  <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px">
  <img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg">
</picture>

</br>
</br>

<div style="display: inline-block;">
  <a href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml">
    <img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg">
  </a>
</div>

<div style="display: inline-block;">
  <a href="mailto:[email protected]">
    <img src="https://img.shields.io/badge/✉️[email protected]">
  </a>
</div>

</div>

<div align="center">
  <h3 align="center">Building the Next Generation of Open-Source and Bilingual LLMs</h3>
</div>

<p align="center">
🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a>
</p>

<p align="center">
  👩‍🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a>
</p>

<p align="center">
  👋 Join us on <a href="https://discord.gg/hYUwWddeAu" target="_blank"> 👾 Discord </a> or <a href="https://github.com/01-ai/Yi/issues/43" target="_blank"> 💬 WeChat </a>
</p>

<p align="center">
  📝 Check out <a href="https://arxiv.org/abs/2403.04652"> Yi Tech Report </a>
</p>

<p align="center">
  📚 Grow at <a href="#learning-hub"> Yi Learning Hub </a>
</p>

<!-- DO NOT REMOVE ME -->

<hr>

<details open>
<summary><b>📕 Table of Contents</b></summary>

- [What is Yi?](#what-is-yi)
  - [Introduction](#introduction)
  - [Models](#models)
    - [Chat models](#chat-models)
    - [Base models](#base-models)
    - [Model info](#model-info)
  - [News](#news)
- [How to use Yi?](#how-to-use-yi)
  - [Quick start](#quick-start)
    - [Choose your path](#choose-your-path)
    - [pip](#quick-start---pip)
    - [docker](#quick-start---docker)
    - [llama.cpp](#quick-start---llamacpp)
    - [conda-lock](#quick-start---conda-lock)
  - [Web demo](#web-demo)
  - [Fine-tuning](#fine-tuning)
  - [Quantization](#quantization)
  - [Deployment](#deployment)
  - [FAQ](#faq)
  - [Learning hub](#learning-hub)
- [Why Yi?](#why-yi)
  - [Ecosystem](#ecosystem)
    - [Upstream](#upstream)
    - [Downstream](#downstream)
      - [Serving](#serving)
      - [Quantization](#quantization-1)
      - [Fine-tuning](#fine-tuning-1)
      - [API](#api)
  - [Benchmarks](#benchmarks)
    - [Base model performance](#base-model-performance)
    - [Chat model performance](#chat-model-performance)
  - [Tech report](#tech-report)
    - [Citation](#citation)
- [Who can use Yi?](#who-can-use-yi)
- [Misc.](#misc)
  - [Acknowledgements](#acknowledgments)
  - [Disclaimer](#disclaimer)
  - [License](#license)

</details>

<hr>

# What is Yi?

## Introduction

- 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/).

- 🙌 Targeted as a bilingual language model and trained on a 3T multilingual corpus, the Yi series models have become one of the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,

  - Yi-34B-Chat model **landed in second place (following GPT-4 Turbo)**, outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).

  - Yi-34B model **ranked first among all existing open-source models** (such as Falcon-180B, Llama-70B, Claude) in **both English and Chinese** on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).

- 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem.

  <details style="display: inline;"><summary> If you're interested in Yi's adoption of Llama architecture and license usage policy, see <span style="color:  green;">Yi's relation with Llama.</span> ⬇️</summary> <ul> <br>

> 💡 TL;DR
>
> The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama.

- Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.

- Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi.

- Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.

- However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.

  - As Llama's structure is employed by the majority of open-source models, the key factors determining model performance are training datasets, training pipelines, and training infrastructure.

  - Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up.
This effort has led to excellent performance, with Yi series models ranking just behind GPT-4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/).

</ul>
</details>

<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

## News

<details>
<summary>🎯 <b>2024-05-13</b>: The <a href="https://github.com/01-ai/Yi-1.5">Yi-1.5 series models </a> are open-sourced, further improving coding, math, reasoning, and instruction-following abilities.</summary>
</details>

<details>
<summary>🎯 <b>2024-03-16</b>: The <code>Yi-9B-200K</code> is open-sourced and available to the public.</summary>
</details>

<details>
<summary>🎯 <b>2024-03-08</b>: <a href="https://arxiv.org/abs/2403.04652">Yi Tech Report</a> is published! </summary>
</details>

<details open>
<summary>🔔 <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary>
<br>
In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on a 5B-token long-context data mixture and demonstrate a near-all-green performance.
</details>

<details open>
<summary>🎯 <b>2024-03-06</b>: The <code>Yi-9B</code> is open-sourced and available to the public.</summary>
<br>
<code>Yi-9B</code> stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.
</details>

<details open>
<summary>🎯 <b>2024-01-23</b>: The Yi-VL models, <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> and <code><a href="https://huggingface.co/01-ai/Yi-VL-6B">Yi-VL-6B</a></code>, are open-sourced and available to the public.</summary>
<br>
<code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> has ranked <strong>first</strong> among all existing open-source models in the latest benchmarks, including <a href="https://arxiv.org/abs/2311.16502">MMMU</a> and <a href="https://arxiv.org/abs/2401.11944">CMMMU</a> (based on data available up to January 2024).</li>
</details>

<details>
<summary>🎯 <b>2023-11-23</b>: <a href="#chat-models">Chat models</a> are open-sourced and available to the public.</summary>
<br>This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ.

- `Yi-34B-Chat`
- `Yi-34B-Chat-4bits`
- `Yi-34B-Chat-8bits`
- `Yi-6B-Chat`
- `Yi-6B-Chat-4bits`
- `Yi-6B-Chat-8bits`

You can try some of them interactively at:

- [Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat)
- [Replicate](https://replicate.com/01-ai)
</details>

<details>
<summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary>
</details>

<details>
<summary>🔥 <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary>
<br>Application form:

- [English](https://cn.mikecrm.com/l91ODJf)
- [Chinese](https://cn.mikecrm.com/gnEZjiQ)
</details>

<details>
<summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary>
<br>This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K.
</details> <details> <summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary> <br>The first public release contains two bilingual (English/Chinese) base models with the parameter sizes of 6B and 34B. Both of them are trained with 4K sequence length and can be extended to 32K during inference time. </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Models Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements. If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment). ### Chat models | Model | Download | |---|---| |Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat) | |Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-4bits) | |Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-8bits) | |Yi-6B-Chat| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat) | |Yi-6B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-4bits) | |Yi-6B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) | <sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). 
</sup></sub>

### Base models

| Model | Download |
|---|---|
|Yi-34B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B) |
|Yi-34B-200K|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-200K)|
|Yi-9B|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-9B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-9B)|
|Yi-9B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-9B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-9B-200K) |
|Yi-6B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B) |
|Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-200K) |

<sub><sup> - 200K is roughly equivalent to 400,000 Chinese characters. <br> - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run `git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf` to check out the weights. </sup></sub>

### Model info

- For chat and base models

<table>
<thead>
<tr>
<th>Model</th>
<th>Intro</th>
<th>Default context window</th>
<th>Pretrained tokens</th>
<th>Training Data Date</th>
</tr>
</thead>
<tbody><tr>
<td>6B series models</td>
<td>They are suitable for personal and academic use.</td>
<td rowspan="3">4K</td>
<td>3T</td>
<td rowspan="3">Up to June 2023</td>
</tr>
<tr>
<td>9B series models</td>
<td>They are the best at coding and math among the Yi series models.</td>
<td>3T + 0.8T (Yi-9B is continually pretrained from Yi-6B on 0.8T additional tokens.)</td>
</tr>
<tr>
<td>34B series models</td>
<td>They are suitable for personal, academic, and commercial purposes (particularly for small and medium-sized enterprises). They offer a cost-effective solution equipped with emergent abilities.</td>
<td>3T</td>
</tr>
</tbody></table>

- For chat models

<details style="display: inline;"><summary>For chat model limitations, see the explanations below. ⬇️</summary>
<ul>
<br>The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training.

<br>However, this higher diversity might amplify certain existing issues, including:
<li>Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucinations that are not grounded in accurate data or logical reasoning.</li>
<li>Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur.
The increased diversity can lead to varying results even under similar input conditions.</li>
<li>Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.</li>
<li>To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top_p, or top_k. These adjustments can help strike a balance between creativity and coherence in the model's outputs.</li>
</ul>
</details>

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

# How to use Yi?

- [Quick start](#quick-start)
  - [Choose your path](#choose-your-path)
  - [pip](#quick-start---pip)
  - [docker](#quick-start---docker)
  - [conda-lock](#quick-start---conda-lock)
  - [llama.cpp](#quick-start---llamacpp)
  - [Web demo](#web-demo)
- [Fine-tuning](#fine-tuning)
- [Quantization](#quantization)
- [Deployment](#deployment)
- [FAQ](#faq)
- [Learning hub](#learning-hub)

## Quick start

Getting up and running with Yi models is simple, with multiple options available.

### Choose your path

Select one of the following paths to begin your journey with Yi!

![Quick start - Choose your path](https://github.com/01-ai/Yi/blob/main/assets/img/quick_start_path.png?raw=true)

#### 🎯 Deploy Yi locally

If you prefer to deploy Yi models locally,

  - 🙋‍♀️ and you have **sufficient** resources (for example, NVIDIA A800 80GB), you can choose one of the following methods:
    - [pip](#quick-start---pip)
    - [Docker](#quick-start---docker)
    - [conda-lock](#quick-start---conda-lock)

  - 🙋‍♀️ and you have **limited** resources (for example, a MacBook Pro), you can use [llama.cpp](#quick-start---llamacpp).

#### 🎯 Run Yi without deploying it locally

If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options.

##### 🙋‍♀️ Run Yi with APIs

If you want to explore more features of Yi, you can adopt one of these methods:

- Yi APIs (Yi official)
  - [Early access has been granted](https://x.com/01AI_Yi/status/1735728934560600536?s=20) to some applicants. Stay tuned for the next round of access!

- [Yi APIs](https://replicate.com/01-ai/yi-34b-chat/api?tab=nodejs) (Replicate)

##### 🙋‍♀️ Run Yi in playground

If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.; see the generation-parameter sketch at the end of this section), you can try one of the following options:

- [Yi-34B-Chat-Playground](https://platform.lingyiwanwu.com/prompt/playground) (Yi official)
  - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)).

- [Yi-34B-Chat-Playground](https://replicate.com/01-ai/yi-34b-chat) (Replicate)

##### 🙋‍♀️ Chat with Yi

If you want to chat with Yi, you can use one of these online services, which offer a similar user experience:

- [Yi-34B-Chat](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) (Yi official on Hugging Face)
  - No registration is required.

- [Yi-34B-Chat](https://platform.lingyiwanwu.com/) (Yi official beta)
  - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)).
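If you later deploy a Yi chat model yourself (see the quick start below), you can control the same generation options programmatically. The following is a minimal sketch, assuming the Hugging Face Transformers API and the `01-ai/Yi-6B-Chat` checkpoint; the parameter values are illustrative rather than official recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "01-ai/Yi-6B-Chat"  # illustrative; any Yi chat checkpoint works

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto"
).eval()

messages = [{"role": "user", "content": "hi"}]
input_ids = tokenizer.apply_chat_template(
    conversation=messages, tokenize=True,
    add_generation_prompt=True, return_tensors="pt"
)

# Lower temperature/top_p trade diversity for coherence, as discussed
# in the chat model limitations above; the values here are examples.
output_ids = model.generate(
    input_ids.to("cuda"),
    do_sample=True,
    temperature=0.6,         # < 1.0 sharpens the token distribution
    top_p=0.9,               # nucleus sampling over the top 90% probability mass
    top_k=40,                # restrict sampling to the 40 most likely tokens
    repetition_penalty=1.3,  # discourage verbatim repetition
    max_new_tokens=256,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```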
<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quick start - pip This tutorial guides you through every step of running **Yi-34B-Chat locally on an A800 (80G)** and then performing inference. #### Step 0: Prerequisites - Make sure Python 3.10 or a later version is installed. - If you want to run other Yi models, see [software and hardware requirements](#deployment). #### Step 1: Prepare your environment To set up the environment and install the required packages, execute the following command. ```bash git clone https://github.com/01-ai/Yi.git cd yi pip install -r requirements.txt ``` #### Step 2: Download the Yi model You can download the weights and tokenizer of Yi models from the following sources: - [Hugging Face](https://huggingface.co/01-ai) - [ModelScope](https://www.modelscope.cn/organization/01ai/) - [WiseModel](https://wisemodel.cn/organization/01.AI) #### Step 3: Perform inference You can perform inference with Yi chat or base models as below. ##### Perform inference with Yi chat model 1. Create a file named `quick_start.py` and copy the following content to it. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = '<your-model-path>' tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) # Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM. model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ``` 2. Run `quick_start.py`. ```bash python quick_start.py ``` Then you can see an output similar to the one below. 🥳 ```bash Hello! How can I assist you today? ``` ##### Perform inference with Yi base model - Yi-34B The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model). You can use the existing file [`text_generation.py`](https://github.com/01-ai/Yi/tree/main/demo). ```bash python demo/text_generation.py --model <your-model-path> ``` Then you can see an output similar to the one below. 🥳 <details> <summary>Output. ⬇️ </summary> <br> **Prompt**: Let me tell you an interesting story about cat Tom and mouse Jerry, **Generation**: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up... 
</details>

- Yi-9B

  Input

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

  MODEL_DIR = "01-ai/Yi-9B"
  model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype="auto")
  tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False)

  input_text = "# write the quick sort algorithm"
  inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
  outputs = model.generate(**inputs, max_length=256)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```

  Output

  ```python
  # write the quick sort algorithm
  def quick_sort(arr):
      if len(arr) <= 1:
          return arr
      pivot = arr[len(arr) // 2]
      left = [x for x in arr if x < pivot]
      middle = [x for x in arr if x == pivot]
      right = [x for x in arr if x > pivot]
      return quick_sort(left) + middle + quick_sort(right)

  # test the quick sort algorithm
  print(quick_sort([3, 6, 8, 10, 1, 2, 1]))
  ```

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

### Quick start - Docker
<details>
<summary> Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️</summary>
<br>This tutorial guides you through every step of running <strong>Yi-34B-Chat on an A800 GPU</strong> or <strong>4*4090</strong> locally and then performing inference.
<h4>Step 0: Prerequisites</h4>
<p>Make sure you've installed <a href="https://docs.docker.com/engine/install/?open_in_browser=true">Docker</a> and <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">nvidia-container-toolkit</a>.</p>
<h4> Step 1: Start Docker </h4>
<pre><code>docker run -it --gpus all \
  -v &lt;your-model-path&gt;:/models \
  ghcr.io/01-ai/yi:latest
</code></pre>
<p>Alternatively, you can pull the Yi Docker image from <code>registry.lingyiwanwu.com/ci/01-ai/yi:latest</code>.</p>
<h4>Step 2: Perform inference</h4>
<p>You can perform inference with Yi chat or base models as below.</p>
<h5>Perform inference with Yi chat model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-chat-model">pip - Perform inference with Yi chat model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>model_path = '&lt;your-model-mount-path&gt;'</code> instead of <code>model_path = '&lt;your-model-path&gt;'</code>.</p>
<h5>Perform inference with Yi base model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-base-model">pip - Perform inference with Yi base model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>--model &lt;your-model-mount-path&gt;</code> instead of <code>--model &lt;your-model-path&gt;</code>.</p>
</details>

### Quick start - conda-lock

<details>
<summary>You can use <code><a href="https://github.com/conda/conda-lock">conda-lock</a></code> to generate fully reproducible lock files for conda environments. ⬇️</summary>
<br>
You can refer to <a href="https://github.com/01-ai/Yi/blob/ebba23451d780f35e74a780987ad377553134f68/conda-lock.yml">conda-lock.yml</a> for the exact versions of the dependencies. Additionally, you can utilize <code><a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html">micromamba</a></code> for installing these dependencies.
<br>
To install the dependencies, follow these steps:

1. Install micromamba by following the instructions available <a href="https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html">here</a>.

2. Execute <code>micromamba install -y -n yi -f conda-lock.yml</code> to create a conda environment named <code>yi</code> and install the necessary dependencies.
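3. Run <code>micromamba activate yi</code> to activate the <code>yi</code> environment before executing any of the quick start commands.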
</details>

### Quick start - llama.cpp

<a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">The following tutorial</a> will guide you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.

<details>
<summary> Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️</summary>
<br>

- [Step 0: Prerequisites](#step-0-prerequisites)
- [Step 1: Download llama.cpp](#step-1-download-llamacpp)
- [Step 2: Download Yi model](#step-2-download-yi-model)
- [Step 3: Perform inference](#step-3-perform-inference)

#### Step 0: Prerequisites

- This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip.

- Make sure [`git-lfs`](https://git-lfs.com/) is installed on your machine.

#### Step 1: Download `llama.cpp`

To clone the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) repository, run the following command.

```bash
git clone [email protected]:ggerganov/llama.cpp.git
```

#### Step 2: Download Yi model

2.1 To clone [XeIaso/yi-chat-6B-GGUF](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main) with LFS pointers only, run the following command.

```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/XeIaso/yi-chat-6B-GGUF
```

2.2 To download a quantized Yi model ([yi-chat-6b.Q2_K.gguf](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/blob/main/yi-chat-6b.Q2_K.gguf)), run the following command.

```bash
git-lfs pull --include yi-chat-6b.Q2_K.gguf
```

#### Step 3: Perform inference

To perform inference with the Yi model, you can use one of the following methods.

- [Method 1: Perform inference in terminal](#method-1-perform-inference-in-terminal)

- [Method 2: Perform inference in web](#method-2-perform-inference-in-web)

##### Method 1: Perform inference in terminal

To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command.

> ##### Tips
>
> - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model.
>
> - By default, the model operates in completion mode.
>
> - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage.

```bash
make -j4 && ./main -m /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf -p "How do you feed your pet fox? Please answer this question in 6 simple steps:\nStep 1:" -n 384 -e

...

How do you feed your pet fox? Please answer this question in 6 simple steps:

Step 1: Select the appropriate food for your pet fox. You should choose high-quality, balanced prey items that are suitable for their unique dietary needs. These could include live or frozen mice, rats, pigeons, or other small mammals, as well as fresh fruits and vegetables.

Step 2: Feed your pet fox once or twice a day, depending on the species and its individual preferences. Always ensure that they have access to fresh water throughout the day.

Step 3: Provide an appropriate environment for your pet fox. Ensure it has a comfortable place to rest, plenty of space to move around, and opportunities to play and exercise.
Step 4: Socialize your pet with other animals if possible. Interactions with other creatures can help them develop social skills and prevent boredom or stress.

Step 5: Regularly check for signs of illness or discomfort in your fox. Be prepared to provide veterinary care as needed, especially for common issues such as parasites, dental health problems, or infections.

Step 6: Educate yourself about the needs of your pet fox and be aware of any potential risks or concerns that could affect their well-being. Regularly consult with a veterinarian to ensure you are providing the best care.

...
```

Now you have successfully asked a question to the Yi model and got an answer! 🥳

##### Method 2: Perform inference in web

1. To initialize a lightweight and swift chatbot, run the following command.

    ```bash
    cd llama.cpp
    ./server --ctx-size 2048 --host 0.0.0.0 --n-gpu-layers 64 --model /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf
    ```

    Then you can get an output like this:

    ```bash
    ...

    llama_new_context_with_model: n_ctx      = 2048
    llama_new_context_with_model: freq_base  = 5000000.0
    llama_new_context_with_model: freq_scale = 1
    ggml_metal_init: allocating
    ggml_metal_init: found device: Apple M2 Pro
    ggml_metal_init: picking default device: Apple M2 Pro
    ggml_metal_init: ggml.metallib not found, loading from source
    ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
    ggml_metal_init: loading '/Users/yu/llama.cpp/ggml-metal.metal'
    ggml_metal_init: GPU name:   Apple M2 Pro
    ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
    ggml_metal_init: hasUnifiedMemory              = true
    ggml_metal_init: recommendedMaxWorkingSetSize  = 11453.25 MB
    ggml_metal_init: maxTransferRate               = built-in GPU
    ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   128.00 MiB, ( 2629.44 / 10922.67)
    llama_new_context_with_model: KV self size  =  128.00 MiB, K (f16):   64.00 MiB, V (f16):   64.00 MiB
    ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =     0.02 MiB, ( 2629.45 / 10922.67)
    llama_build_graph: non-view tensors processed: 676/676
    llama_new_context_with_model: compute buffer total size = 159.19 MiB
    ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   156.02 MiB, ( 2785.45 / 10922.67)
    Available slots:
    -> Slot 0 - max context: 2048

    llama server listening at http://0.0.0.0:8080
    ```

2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar.

    ![Yi model chatbot interface - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp1.png?raw=true)

3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer.

    ![Ask a question to Yi model - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp2.png?raw=true)

</details>

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

### Web demo

You can build a web UI demo for Yi **chat** models (note that Yi base models are not supported in this scenario).

Step 1: [Prepare your environment](#step-1-prepare-your-environment).

Step 2: [Download the Yi model](#step-2-download-the-yi-model).

Step 3: To start a web service locally, run the following command.

```bash
python demo/web_demo.py -c <your-model-path>
```

You can access the web UI by entering the address provided in the console into your browser.
![Quick start - web demo](https://github.com/01-ai/Yi/blob/main/assets/img/yi_34b_chat_web_demo.gif?raw=true)

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

### Fine-tuning

```bash
bash finetune/scripts/run_sft_Yi_6b.sh
```

Once finished, you can compare the finetuned model and the base model with the following command:

```bash
bash finetune/scripts/run_eval.sh
```

<details style="display: inline;"><summary>For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ </summary> <ul>

### Finetune code for Yi 6B and 34B

#### Preparation

##### From Image

By default, we use a small dataset from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) to finetune the base model.
You can also prepare your customized dataset in the following `jsonl` format:

```json
{ "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." }
```

And then mount them in the container to replace the default ones:

```bash
docker run -it \
    -v /path/to/save/finetuned/model/:/finetuned-model \
    -v /path/to/train.jsonl:/yi/finetune/data/train.json \
    -v /path/to/eval.jsonl:/yi/finetune/data/eval.json \
    ghcr.io/01-ai/yi:latest \
    bash finetune/scripts/run_sft_Yi_6b.sh
```

##### From Local Server

Make sure you have conda installed. If not, install it with:

```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
source ~/.bashrc
```

Then, create a conda env:

```bash
conda create -n dev_env python=3.10 -y
conda activate dev_env
pip install torch==2.0.1 deepspeed==0.10 tensorboard transformers datasets sentencepiece accelerate ray==2.7
```

#### Hardware Setup

For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended.

For the Yi-34B model, because the zero-offload technique consumes a lot of CPU memory, be careful to limit the number of GPUs used for 34B finetune training. Use CUDA_VISIBLE_DEVICES to limit the number of GPUs (as shown in scripts/run_sft_Yi_34b.sh).

A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA_VISIBLE_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB.

#### Quick Start

Download an LLM base model (6B or 34B) to MODEL_PATH. A typical model folder looks like this:

```bash
|-- $MODEL_PATH
|   |-- config.json
|   |-- pytorch_model-00001-of-00002.bin
|   |-- pytorch_model-00002-of-00002.bin
|   |-- pytorch_model.bin.index.json
|   |-- tokenizer_config.json
|   |-- tokenizer.model
|   |-- ...
```

Download a dataset from Hugging Face to local storage DATA_PATH, e.g., Dahoas/rm-static (a conversion sketch to the `prompt`/`chosen` format is provided at the end of this Quick Start):

```bash
|-- $DATA_PATH
|   |-- data
|   |   |-- train-00000-of-00001-2a1df75c6bce91ab.parquet
|   |   |-- test-00000-of-00001-8c7c51afc6d45980.parquet
|   |-- dataset_infos.json
|   |-- README.md
```

`finetune/yi_example_dataset` has example datasets, which are modified from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG):

```bash
|-- $DATA_PATH
|   |-- data
|   |   |-- train.jsonl
|   |   |-- eval.jsonl
```

`cd` into the scripts folder, copy and paste the script, and run. For example:

```bash
cd finetune/scripts
bash run_sft_Yi_6b.sh
```

For the Yi-6B base model, setting training_debug_steps=20 and num_train_epochs=4 can output a chat model, which takes about 20 minutes.

For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient.
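For reference, below is a minimal sketch (not part of the official scripts) of converting a Hugging Face dataset into the `prompt`/`chosen` `jsonl` format shown above. It assumes the `Dahoas/rm-static` schema (`prompt` and `chosen` columns, `train`/`test` splits); adapt the field names for other datasets.

```python
# Minimal sketch: convert a Hugging Face dataset (e.g., Dahoas/rm-static)
# into the prompt/chosen jsonl format expected by the finetune scripts.
# Assumes the dataset exposes "prompt" and "chosen" columns and
# "train"/"test" splits; adapt the field names for other datasets.
import json

from datasets import load_dataset

dataset = load_dataset("Dahoas/rm-static")

for split, path in [("train", "train.jsonl"), ("test", "eval.jsonl")]:
    with open(path, "w", encoding="utf-8") as f:
        for row in dataset[split]:
            # Keep only the two fields the finetune scripts read.
            record = {"prompt": row["prompt"], "chosen": row["chosen"]}
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
```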
#### Evaluation

```bash
cd finetune/scripts
bash run_eval.sh
```

Then you'll see the answer from both the base model and the finetuned model.
</ul>
</details>

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

### Quantization

#### GPT-Q

```bash
python quantization/gptq/quant_autogptq.py \
  --model /base_model \
  --output_dir /quantized_model \
  --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/gptq/eval_quantized_model.py \
  --model /quantized_model \
  --trust_remote_code
```

<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>

#### GPT-Q quantization

[GPT-Q](https://github.com/IST-DASLab/gptq) is a PTQ (Post-Training Quantization) method. It saves memory and provides potential speedups while retaining the accuracy of the model.

Yi models can be GPT-Q quantized without much effort.
We provide a step-by-step tutorial below.

To run GPT-Q, we will use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and [exllama](https://github.com/turboderp/exllama).
Hugging Face Transformers has integrated optimum and auto-gptq to perform GPTQ quantization on language models.

##### Do Quantization

The `quant_autogptq.py` script is provided for you to perform GPT-Q quantization:

```bash
python quant_autogptq.py --model /base_model \
    --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```

##### Run Quantized Model

You can run a quantized model using `eval_quantized_model.py`:

```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```

</ul>
</details>

#### AWQ

```bash
python quantization/awq/quant_autoawq.py \
  --model /base_model \
  --output_dir /quantized_model \
  --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/awq/eval_quantized_model.py \
  --model /quantized_model \
  --trust_remote_code
```

<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>

#### AWQ quantization

[AWQ](https://github.com/mit-han-lab/llm-awq) is a PTQ (Post-Training Quantization) method. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs.

Yi models can be AWQ quantized without much effort.
We provide a step-by-step tutorial below.

To run AWQ, we will use [AutoAWQ](https://github.com/casper-hansen/AutoAWQ).

##### Do Quantization

The `quant_autoawq.py` script is provided for you to perform AWQ quantization:

```bash
python quant_autoawq.py --model /base_model \
    --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```

##### Run Quantized Model

You can run a quantized model using `eval_quantized_model.py`:

```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```

</ul>
</details>

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

### Deployment

If you want to deploy Yi models, make sure you meet the software and hardware requirements.

#### Software requirements

Before using Yi quantized models, make sure you've installed the correct software listed below.
| Model | Software
|---|---
Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi)
Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation)

#### Hardware requirements

Before deploying Yi in your environment, make sure your hardware meets the following requirements.

##### Chat models

| Model                 | Minimum VRAM | Recommended GPU Example |
|:----------------------|:--------------|:-------------------------------------:|
| Yi-6B-Chat            | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-Chat-4bits      | 4 GB | 1 x RTX 3060 (12 GB)<br> 1 x RTX 4060 (8 GB) |
| Yi-6B-Chat-8bits      | 8 GB | 1 x RTX 3070 (8 GB) <br> 1 x RTX 4060 (8 GB) |
| Yi-34B-Chat           | 72 GB | 4 x RTX 4090 (24 GB)<br> 1 x A800 (80 GB) |
| Yi-34B-Chat-4bits     | 20 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) <br> 1 x A100 (40 GB) |
| Yi-34B-Chat-8bits     | 38 GB | 2 x RTX 3090 (24 GB) <br> 2 x RTX 4090 (24 GB)<br> 1 x A800 (40 GB) |

Below are detailed minimum VRAM requirements under different batch use cases.

| Model                   | batch=1 | batch=4 | batch=16 | batch=32 |
| ----------------------- | ------- | ------- | -------- | -------- |
| Yi-6B-Chat              | 12 GB   | 13 GB   | 15 GB    | 18 GB    |
| Yi-6B-Chat-4bits        | 4 GB    | 5 GB    | 7 GB     | 10 GB    |
| Yi-6B-Chat-8bits        | 7 GB    | 8 GB    | 10 GB    | 14 GB    |
| Yi-34B-Chat             | 65 GB   | 68 GB   | 76 GB    | > 80 GB  |
| Yi-34B-Chat-4bits       | 19 GB   | 20 GB   | 30 GB    | 40 GB    |
| Yi-34B-Chat-8bits       | 35 GB   | 37 GB   | 46 GB    | 58 GB    |

##### Base models

| Model                | Minimum VRAM | Recommended GPU Example |
|----------------------|--------------|:-------------------------------------:|
| Yi-6B                | 15 GB        | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-200K           | 50 GB        | 1 x A800 (80 GB) |
| Yi-9B                | 20 GB        | 1 x RTX 4090 (24 GB) |
| Yi-34B               | 72 GB        | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) |
| Yi-34B-200K          | 200 GB       | 4 x A800 (80 GB) |

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

### FAQ

<details>
<summary> If you have any questions while using the Yi series models, the answers provided below could serve as a helpful reference for you. ⬇️</summary>
<br>

#### 💡Fine-tuning

- <strong>Base model or Chat model - which to fine-tune?</strong>
  <br>The choice of pre-trained language model for fine-tuning hinges on the computational resources you have at your disposal and the particular demands of your task.

  - If you are working with a substantial volume of fine-tuning data (say, over 10,000 samples), the Base model could be your go-to choice.

  - On the other hand, if your fine-tuning data is not quite as extensive, opting for the Chat model might be a more fitting choice.

  - It is generally advisable to fine-tune both the Base and Chat models, compare their performance, and then pick the model that best aligns with your specific requirements.

- <strong>Yi-34B versus Yi-34B-Chat for full-scale fine-tuning - what is the difference?</strong>
  <br>The key distinction between full-scale fine-tuning on `Yi-34B` and `Yi-34B-Chat` comes down to the fine-tuning approach and outcomes.

  - Yi-34B-Chat employs a Supervised Fine-Tuning (SFT) method, resulting in responses that mirror human conversation style more closely.

  - The Base model's fine-tuning is more versatile, with a relatively high performance potential.
  - If you are confident in the quality of your data, fine-tuning with `Yi-34B` could be your go-to.

  - If you are aiming for model-generated responses that better mimic human conversational style, or if you have doubts about your data quality, `Yi-34B-Chat` might be your best bet.

#### 💡Quantization

- <strong>Quantized model versus original model - what is the performance gap?</strong>

  - The performance variance is largely contingent on the quantization method employed and the specific use cases of these models. For instance, for the official AWQ models, from a benchmark standpoint, quantization might result in a minor performance drop of a few percentage points.

  - Subjectively speaking, in situations like logical reasoning, even a 1% performance shift could impact the accuracy of the output results.

#### 💡General

- <strong>Where can I source fine-tuning question answering datasets?</strong>

  - You can find fine-tuning question answering datasets on platforms like Hugging Face, with datasets like [m-a-p/COIG-CQIA](https://huggingface.co/datasets/m-a-p/COIG-CQIA) readily available.

  - Additionally, GitHub offers fine-tuning frameworks, such as [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), which integrates pre-made datasets.

- <strong>What is the GPU memory requirement for fine-tuning Yi-34B FP16?</strong>
  <br>
  The GPU memory needed for fine-tuning 34B FP16 hinges on the specific fine-tuning method employed. For full parameter fine-tuning, you'll need 8 GPUs each with 80 GB; however, more economical solutions like LoRA require less. For more details, check out [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). Also, consider using BF16 instead of FP16 for fine-tuning to optimize performance.

- <strong>Are there any third-party platforms that support chat functionality for the Yi-34b-200k model?</strong>
  <br>
  If you're looking for third-party chat platforms, options include [fireworks.ai](https://fireworks.ai/login?callbackURL=https://fireworks.ai/models/fireworks/yi-34b-chat).

</details>

### Learning hub

<details>
<summary> If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️</summary>
<br>

Welcome to the Yi learning hub!

Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more.

The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions!

At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below.

With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning!
🥳 #### Tutorials ##### Blog tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [使用 Dify、Meilisearch、零一万物模型实现最简单的 RAG 应用(三):AI 电影推荐](https://mp.weixin.qq.com/s/Ri2ap9_5EMzdfiBhSSL_MQ) | 2024-05-20 | [苏洋](https://github.com/soulteary) | | [使用autodl服务器,在A40显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度18 words-s](https://blog.csdn.net/freewebsys/article/details/134698597?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-17-134698597-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-20 | [fly-iot](https://gitee.com/fly-iot) | | [Yi-VL 最佳实践](https://modelscope.cn/docs/yi-vl最佳实践) | 2024-05-20 | [ModelScope](https://github.com/modelscope) | | [一键运行零一万物新鲜出炉Yi-1.5-9B-Chat大模型](https://mp.weixin.qq.com/s/ntMs2G_XdWeM3I6RUOBJrA) | 2024-05-13 | [Second State](https://github.com/second-state) | | [零一万物开源Yi-1.5系列大模型](https://mp.weixin.qq.com/s/d-ogq4hcFbsuL348ExJxpA) | 2024-05-13 | [刘聪](https://github.com/liucongg) | | [零一万物Yi-1.5系列模型发布并开源! 34B-9B-6B 多尺寸,魔搭社区推理微调最佳实践教程来啦!](https://mp.weixin.qq.com/s/3wD-0dCgXB646r720o8JAg) | 2024-05-13 | [ModelScope](https://github.com/modelscope) | | [Yi-34B 本地部署简单测试](https://blog.csdn.net/arkohut/article/details/135331469?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135331469-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [漆妮妮](https://space.bilibili.com/1262370256) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(上)](https://blog.csdn.net/weixin_53443275/article/details/136091398?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-5-136091398-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(下篇)](https://blog.csdn.net/weixin_53443275/article/details/136096309) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) | | [Ollama新增两个命令,开始支持零一万物Yi-1.5系列模型](https://mp.weixin.qq.com/s/bBgzGJvUqIohodcy9U-pFw) | 2024-05-13 | AI工程师笔记 | | [使用零一万物 200K 模型和 Dify 快速搭建模型应用](https://zhuanlan.zhihu.com/p/686774859) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [(持更) 零一万物模型折腾笔记:社区 Yi-34B 微调模型使用](https://zhuanlan.zhihu.com/p/671549900) | 2024-05-13 | [苏洋](https://github.com/soulteary) | | [Python+ERNIE-4.0-8K-Yi-34B-Chat大模型初探](https://mp.weixin.qq.com/s/WaygSfn5T8ZPB1mPdGADEQ) | 2024-05-11 | 江湖评谈 | | [技术布道 Vue及Python调用零一万物模型和Prompt模板(通过百度千帆大模型平台)](https://blog.csdn.net/ucloud2012/article/details/137187469) | 2024-05-11 | [MumuLab](https://blog.csdn.net/ucloud2012?type=blog) | | [多模态大模型Yi-VL-plus体验 效果很棒](https://zhuanlan.zhihu.com/p/694736111) | 2024-04-27 | [大家好我是爱因](https://www.zhihu.com/people/iamein) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度23 
words-s](https://blog.csdn.net/freewebsys/article/details/134725765?ops_request_misc=%7B%22request%5Fid%22%3A%22171636356716800211598950%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636356716800211598950&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-9-134725765-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-27 | [fly-iot](https://gitee.com/fly-iot) | | [Getting Started with Yi-1.5-9B-Chat](https://www.secondstate.io/articles/yi-1.5-9b-chat/) | 2024-04-27 | [Second State](https://github.com/second-state) | | [基于零一万物yi-vl-plus大模型简单几步就能批量生成Anki图片笔记](https://mp.weixin.qq.com/s/_ea6g0pzzeO4WyYtuWycWQ) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [【AI开发:语言】一、Yi-34B超大模型本地部署CPU和GPU版](https://blog.csdn.net/alarey/article/details/137769471?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-16-137769471-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-21 | [My的梦想已实现](https://blog.csdn.net/alarey?type=blog) | | [【Yi-34B-Chat-Int4】使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words-s,vllm要求算力在7以上的显卡就可以](https://blog.csdn.net/freewebsys/article/details/134754086) | 2024-03-22 | [fly-iot](https://gitee.com/fly-iot) | | [零一万物大模型部署+微调总结](https://blog.csdn.net/v_wus/article/details/135704126?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-18-135704126-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-22 | [v_wus](https://blog.csdn.net/v_wus?type=blog) | | [零一万物Yi大模型vllm推理时Yi-34B或Yi-6bchat重复输出的解决方案](https://blog.csdn.net/qq_39667443/article/details/136028776?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-6-136028776-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [郝铠锋](https://blog.csdn.net/qq_39667443?type=blog) | | [Yi-34B微调训练](https://blog.csdn.net/lsjlnd/article/details/135336984?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-12-135336984-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [lsjlnd](https://blog.csdn.net/lsjlnd?type=blog) | | [实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜”](https://mp.weixin.qq.com/s/fu4O9XvJ03JhimsEyI-SsQ) | 2024-02-02 | [苏洋](https://github.com/soulteary) | | [零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦!](https://zhuanlan.zhihu.com/p/680098411) | 2024-01-26 | [ModelScope](https://github.com/modelscope) | | [单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战](https://zhuanlan.zhihu.com/p/678989191) | 2024-01-22 | [郑耀威](https://github.com/hiyouga) | | [零一科技Yi-34B 
Chat大模型环境搭建&推理](https://blog.csdn.net/zzq1989_/article/details/135597181?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-8-135597181-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [要养家的程序员](https://blog.csdn.net/zzq1989_?type=blog) | | [基于LLaMA Factory,单卡3小时训练专属大模型 Agent](https://blog.csdn.net/m0_59596990/article/details/135760285?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135760285-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [机器学习社区](https://blog.csdn.net/m0_59596990?type=blog) | | [双卡 3080ti 部署 Yi-34B 大模型 - Gradio + vLLM 踩坑全记录](https://blog.csdn.net/arkohut/article/details/135321242?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135321242-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [漆妮妮](https://space.bilibili.com/1262370256) | | [【大模型部署实践-3】3个能在3090上跑起来的4bits量化Chat模型(baichuan2-13b、InternLM-20b、Yi-34b)](https://blog.csdn.net/qq_40302568/article/details/135040985?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-30-135040985-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [aq_Seabiscuit](https://blog.csdn.net/qq_40302568?type=blog) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://blog.csdn.net/arkohut/article/details/135274973) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [零一万物模型官方 Yi-34B 模型本地离线运行部署使用笔记(物理机和docker两种部署方式),200K 超长文本内容,34B 干翻一众 70B 模型,打榜分数那么高,这模型到底行不行?](https://blog.csdn.net/u014374009/article/details/136327696) | 2023-12-28 | [代码讲故事](https://blog.csdn.net/u014374009?type=blog) | | [LLM - 大模型速递之 Yi-34B 入门与 LoRA 微调](https://blog.csdn.net/BIT_666/article/details/134990402) | 2023-12-18 | [BIT_666](https://bitddd.blog.csdn.net/?type=blog) | | [通过vllm框架进行大模型推理](https://blog.csdn.net/weixin_45920955/article/details/135300561?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-13-135300561-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2023-12-18 | [土山炮](https://blog.csdn.net/weixin_45920955?type=blog) | | [CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案](https://zhuanlan.zhihu.com/p/671698216) | 2023-12-12 | [苏洋](https://github.com/soulteary) | | [零一万物模型折腾笔记:官方 Yi-34B 模型基础使用](https://zhuanlan.zhihu.com/p/671387298) | 2023-12-10 | [苏洋](https://github.com/soulteary) | | [Running Yi-34B-Chat locally using LlamaEdge](https://www.secondstate.io/articles/yi-34b/) | 2023-11-30 | [Second State](https://github.com/second-state) | | [本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 
显存](https://zhuanlan.zhihu.com/p/668921042) | 2023-11-26 | [苏洋](https://github.com/soulteary) | ##### GitHub Project | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------- | | [yi-openai-proxy](https://github.com/soulteary/yi-openai-proxy) | 2024-05-11 | [苏洋](https://github.com/soulteary) | | [基于零一万物 Yi 模型和 B 站构建大语言模型高质量训练数据集](https://github.com/zjrwtx/bilibiliQA_databuilder) | 2024-04-29 | [正经人王同学](https://github.com/zjrwtx) | | [基于视频网站和零一万物大模型构建大语言模型高质量训练数据集](https://github.com/zjrwtx/VideoQA_databuilder) | 2024-04-25 | [正经人王同学](https://github.com/zjrwtx) | | [基于零一万物yi-34b-chat-200k输入任意文章地址,点击按钮即可生成无广告或推广内容的简要笔记,并生成分享图给好友](https://github.com/zjrwtx/open_summary) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) | | [Food-GPT-Yi-model](https://github.com/ThisisHubert/FoodGPT-Yi-model) | 2024-04-21 | [Hubert S](https://github.com/ThisisHubert) | ##### Video tutorials | Deliverable | Date | Author | | ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ | | [Run dolphin-2.2-yi-34b on IoT Devices](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-30 | [Second State](https://github.com/second-state) | | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://www.bilibili.com/video/BV17t4y1f7Ee/) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [Install Yi 34B Locally - Chinese English Bilingual LLM](https://www.youtube.com/watch?v=CVQvj4Wrh4w&t=476s) | 2023-11-05 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Dolphin Yi 34b - Brand New Foundational Model TESTED](https://www.youtube.com/watch?v=On3Zuv27V3k&t=85s) | 2023-11-27 | [Matthew Berman](https://www.youtube.com/@matthew_berman) | | [Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来](https://www.bilibili.com/video/BV1Q5411y7AG/) | 2024-01-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | [4060Ti 16G显卡安装零一万物最新开源的Yi-1.5版大语言模型](https://www.bilibili.com/video/BV16i421X7Jx/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-14 | [titan909](https://space.bilibili.com/526393761) | | [Yi-1.5: True Apache 2.0 Competitor to LLAMA-3](https://www.youtube.com/watch?v=KCDYrfWeTRc) | 2024-05-13 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Install Yi-1.5 Model Locally - Beats Llama 3 in Various Benchmarks](https://www.youtube.com/watch?v=Ba-G7Il0UkA) | 2024-05-13 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [how to install Ollama and run Yi 6B](https://www.youtube.com/watch?v=4Jnar7OUHqQ) | 2024-05-13 | [Ridaa Davids](https://www.youtube.com/@quantanovabusiness) | | [地表最强混合智能AI助手:llama3_70B+Yi_34B+Qwen1.5_110B](https://www.bilibili.com/video/BV1Xm411C7V1/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-04 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [ChatDoc学术论文辅助--基于Yi-34B和langchain进行PDF知识库问答](https://www.bilibili.com/video/BV11i421C7B5/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-03 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [基于Yi-34B的领域知识问答项目演示](https://www.bilibili.com/video/BV1zZ42177ZA/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-02 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) | | [使用RTX4090+GaLore算法 
全参微调Yi-6B大模型](https://www.bilibili.com/video/BV1ax4y1U7Ep/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-24 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [无内容审查NSFW大语言模型Yi-34B-Chat蒸馏版测试,RolePlay,《天龙八部》马夫人康敏,本地GPU,CPU运行](https://www.youtube.com/watch?v=VL-W0TnLCns) | 2024-03-20 | [刘悦的技术博客](https://v3u.cn/) | | [无内容审查NSFW大语言模型整合包,Yi-34B-Chat,本地CPU运行,角色扮演潘金莲](https://www.youtube.com/watch?v=rBvbgwz3oHM) | 2024-03-16 | [刘悦的技术博客](https://v3u.cn/) | | [量化 Yi-34B-Chat 并在单卡 RTX 4090 使用 vLLM 部署](https://www.bilibili.com/video/BV1jx421y7xj/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-05 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [Yi-VL-34B(5):使用3个3090显卡24G版本,运行Yi-VL-34B模型,支持命令行和web界面方式,理解图片的内容转换成文字](https://www.bilibili.com/video/BV1BB421z7oA/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-27 | [fly-iot](https://gitee.com/fly-iot) | | [Win环境KoboldCpp本地部署大语言模型进行各种角色扮演游戏](https://www.bilibili.com/video/BV14J4m1e77f/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-25 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P2](https://www.bilibili.com/video/BV19v421677y/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-23 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [【wails】(2):使用go-llama.cpp 运行 yi-01-6b大模型,使用本地CPU运行,速度还可以,等待下一版本更新](https://www.bilibili.com/video/BV194421F7Fy/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-20 | [fly-iot](https://gitee.com/fly-iot) | | [【xinference】(6):在autodl上,使用xinference部署yi-vl-chat和qwen-vl-chat模型,可以使用openai调用成功](https://www.bilibili.com/video/BV19Z421z7cv/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-06 | [fly-iot](https://gitee.com/fly-iot) | | [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P1](https://www.bilibili.com/video/BV1tU421o7Co/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-05 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) | | [2080Ti部署YI-34B大模型 xinference-oneapi-fastGPT本地知识库使用指南](https://www.bilibili.com/video/BV1hC411z7xu/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-30 | [小饭护法要转码](https://space.bilibili.com/39486865?spm_id_from=333.788.0.0) | | [Best Story Writing AI Model - Install Yi 6B 200K Locally on Windows](https://www.youtube.com/watch?v=cZs2jRtl0bs) | 2024-01-22 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | [Mac 本地运行大语言模型方法与常见问题指南(Yi 34B 模型+32 GB 内存测试)](https://www.bilibili.com/video/BV1VT4y1b7Th/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [小吴苹果机器人](https://space.bilibili.com/1732749682?spm_id_from=333.788.0.0) | | [【Dify知识库】(11):Dify0.4.9改造支持MySQL,成功接入yi-6b 做对话,本地使用fastchat启动,占8G显存,完成知识库配置](https://www.bilibili.com/video/BV1ia4y1y7JH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [fly-iot](https://gitee.com/fly-iot) | | [这位LLM先生有点暴躁,用的是YI-6B的某个量化版,#LLM #大语言模型 #暴躁老哥](https://www.youtube.com/watch?v=eahXJrdtQuc) | 2024-01-20 | [晓漫吧](https://www.youtube.com/@xiaomanba) | | [大模型推理 NvLink 桥接器有用吗|双卡 A6000 
测试一下](https://www.bilibili.com/video/BV1AW4y1w7DC/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-17 | [漆妮妮](https://space.bilibili.com/1262370256) | | [大模型推理 A40 vs A6000 谁更强 - 对比 Yi-34B 的单、双卡推理性能](https://www.bilibili.com/video/BV1aK4y1z7GF/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-15 | [漆妮妮](https://space.bilibili.com/1262370256) | | [C-Eval 大语言模型评测基准- 用 LM Evaluation Harness + vLLM 跑起来](https://www.bilibili.com/video/BV1Yw411g7ZL/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-11 | [漆妮妮](https://space.bilibili.com/1262370256) | | [双显卡部署 Yi-34B 大模型 - vLLM + Gradio 踩坑记录](https://www.bilibili.com/video/BV1p94y1c7ak/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-01 | [漆妮妮](https://space.bilibili.com/1262370256) | | [手把手教学!使用 vLLM 快速部署 Yi-34B-Chat](https://www.bilibili.com/video/BV1ew41157Mk/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-26 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) | | [如何训练企业自己的大语言模型?Yi-6B LORA微调演示 #小工蚁](https://www.bilibili.com/video/BV1uc41117zz/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-21 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) | | [Yi-34B(4):使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words/s](https://www.bilibili.com/video/BV1nj41157L3/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-02 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,RTX 3090 * 3 显卡上运行, Yi-34B-Chat模型,显存占用60G](https://www.bilibili.com/video/BV1BM411R7ae/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,用vllm优化,增加 --num-gpu 2,速度23 words/s](https://www.bilibili.com/video/BV1Hu4y1L7BH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) | | [Yi大模型一键本地部署 技术小白玩转AI](https://www.bilibili.com/video/BV16H4y117md/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [技术小白玩转AI](https://space.bilibili.com/3546586137234288?spm_id_from=333.788.0.0) | | [01.AI's Yi-6B: Overview and Fine-Tuning](https://www.youtube.com/watch?v=mye-UOkAliQ) | 2023-11-28 | [AI Makerspace](https://www.youtube.com/@AI-Makerspace) | | [Yi 34B Chat LLM outperforms Llama 70B](https://www.youtube.com/watch?v=RYtrF-R5jDc) | 2023-11-27 | [DLExplorer](https://www.youtube.com/@DLExplorers-lg7dt) | | [How to run open source models on mac Yi 34b on m3 Max](https://www.youtube.com/watch?v=GAo-dopkgjI) | 2023-11-26 | [TECHNO PREMIUM](https://www.youtube.com/@technopremium91) | | [Yi-34B - 200K - The BEST & NEW CONTEXT WINDOW KING ](https://www.youtube.com/watch?v=7WBojwwv5Qo) | 2023-11-24 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) | | [Yi 34B : The Rise of Powerful Mid-Sized Models - Base,200k & Chat](https://www.youtube.com/watch?v=bWCjwtu_tHs) | 2023-11-24 | [Sam Witteveen](https://www.youtube.com/@samwitteveenai) | | [在IoT设备运行破解版李开复大模型dolphin-2.2-yi-34b(还可作为私有OpenAI API服务器)](https://www.bilibili.com/video/BV1SQ4y18744/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-11-15 | [Second 
State](https://github.com/second-state) |
| [Run dolphin-2.2-yi-34b on IoT Devices (Also works as a Private OpenAI API Server)](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-14 | [Second State](https://github.com/second-state) |
| [How to Install Yi 34B 200K Llamafied on Windows Laptop](https://www.youtube.com/watch?v=enoha4K4HkQ) | 2023-11-11 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) |

</details>

# Why Yi?

- [Ecosystem](#ecosystem)
  - [Upstream](#upstream)
  - [Downstream](#downstream)
    - [Serving](#serving)
    - [Quantization](#quantization-1)
    - [Fine-tuning](#fine-tuning-1)
    - [API](#api)
- [Benchmarks](#benchmarks)
  - [Chat model performance](#chat-model-performance)
  - [Base model performance](#base-model-performance)
    - [Yi-34B and Yi-34B-200K](#yi-34b-and-yi-34b-200k)
    - [Yi-9B](#yi-9b)

## Ecosystem

Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity.

- [Upstream](#upstream)
- [Downstream](#downstream)
  - [Serving](#serving)
  - [Quantization](#quantization-1)
  - [Fine-tuning](#fine-tuning-1)
  - [API](#api)

### Upstream

The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency.

For example, the Yi series models are saved in the format of the Llama model. You can directly use `LlamaForCausalLM` and `LlamaTokenizer` to load the model. For more information, see [Use the chat model](#31-use-the-chat-model).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The Auto classes resolve to LlamaTokenizer and LlamaForCausalLM
# because Yi uses the Llama architecture.
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34B", use_fast=False)

model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34B", device_map="auto")
```

<p align="right"> [
  <a href="#top">Back to top ⬆️ </a>  ]
</p>

### Downstream

> 💡 Tip
>
> - Feel free to create a PR and share the fantastic work you've built using the Yi series models.
>
> - To help others quickly understand your work, it is recommended to use the format of `<model-name>: <model-intro> + <model-highlights>`.

#### Serving

If you want to get up and running with Yi in a few minutes, you can use the following services built upon Yi.

- Yi-34B-Chat: you can chat with Yi using one of the following platforms:
  - [Yi-34B-Chat | Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat)
  - [Yi-34B-Chat | Yi Platform](https://platform.lingyiwanwu.com/): **Note** that currently it's available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)) and experience it firsthand!

- [Yi-6B-Chat (Replicate)](https://replicate.com/01-ai): you can use this model with more options by setting additional parameters and calling APIs.

- [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): you can use this service to run Yi models locally with added flexibility and customization.

#### Quantization

If you have limited computational capabilities, you can use Yi's quantized models as follows.

These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage.
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ) - [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF) - [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ) #### Fine-tuning If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below. - [TheBloke Models](https://huggingface.co/TheBloke): this site hosts numerous fine-tuned models derived from various LLMs including Yi. This is not an exhaustive list for Yi, but to name a few sorted on downloads: - [TheBloke/dolphin-2_2-yi-34b-AWQ](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ) - [TheBloke/Yi-34B-Chat-AWQ](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ) - [TheBloke/Yi-34B-Chat-GPTQ](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ) - [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). - [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama): this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the [OpenCompass LLM Leaderboard](https://opencompass.org.cn/leaderboard-llm). - [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B): this model is trained with 200K context length and 3 epochs on the Capybara dataset. #### API - [amazing-openai-api](https://github.com/soulteary/amazing-openai-api): this tool converts Yi model APIs into the OpenAI API format out of the box. - [LlamaEdge](https://www.secondstate.io/articles/yi-34b/#create-an-openai-compatible-api-service-for-the-yi-34b-chat-model): this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Tech report For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652). ### Citation ``` @misc{ai2024yi, title={Yi: Open Foundation Models by 01.AI}, author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai}, year={2024}, eprint={2403.04652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Benchmarks - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) ### Chat model performance Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more. ![Chat model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_chat.png?raw=true) <details> <summary> Evaluation methods and challenges. ⬇️ </summary> - **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA. - **Zero-shot vs. 
few-shot**: in chat models, the zero-shot approach is more commonly employed. - **Evaluation strategy**: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text. - **Challenges faced**: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results. <strong>*</strong>: C-Eval results are evaluated on the validation datasets </details> ### Base model performance #### Yi-34B and Yi-34B-200K The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more. ![Base model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_base.png?raw=true) <details> <summary> Evaluation methods. ⬇️</summary> - **Disparity in results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass. - **Investigation findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences. - **Uniform benchmarking process**: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content. - **Efforts to retrieve unreported scores**: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. - **Extensive model evaluation**: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. - **Special configurations**: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". - **Falcon-180B caveat**: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated. </details> #### Yi-9B Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. ![Yi-9B benchmark - details](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_details.png?raw=true) - In terms of **overall** ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - overall](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_overall.png?raw=true) - In terms of **coding** ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B. 
![Yi-9B benchmark - code](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_code.png?raw=true) - In terms of **math** ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B. ![Yi-9B benchmark - math](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_math.png?raw=true) - In terms of **common sense and reasoning** ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - text](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_text.png?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # Who can use Yi? Everyone! 🙌 ✅ The code and weights of the Yi series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE), which means the Yi series models are free for personal usage, academic purposes, and commercial use. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # Misc. ### Acknowledgments A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped Yi not just a project, but a vibrant, growing home for innovation. [![yi contributors](https://contrib.rocks/image?repo=01-ai/yi&max=2000&columns=15)](https://github.com/01-ai/yi/graphs/contributors) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Disclaimer We use data compliance checking algorithms during the training process, to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct, and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### License The code and weights of the Yi-1.5 series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE). If you create derivative works based on this model, please include the following attribution in your derivative works: This work is a derivative of [The Yi Series Model You Base On] by 01.AI, used under the Apache 2.0 License. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>
pjh64/Phi-3-mini-128K-Instruct.gguf
pjh64
"2024-04-24T08:46:24Z"
6,097
5
null
[ "gguf", "license:mit", "region:us" ]
null
"2024-04-23T22:58:54Z"
--- license: mit ---
bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12
bionlp
"2021-09-24T07:45:33Z"
6,095
5
transformers
[ "transformers", "pytorch", "bluebert", "en", "dataset:pubmed", "license:cc0-1.0", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
---
language:
- en
tags:
- bluebert
license: cc0-1.0
datasets:
- pubmed
---

# BlueBert-Base, Uncased, PubMed

## Model description

A BERT model pre-trained on PubMed abstracts.

## Intended uses & limitations

#### How to use

Please see https://github.com/ncbi-nlp/bluebert (a minimal loading sketch is also included at the end of this card).

## Training data

We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models. The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).

Pre-trained model: https://huggingface.co/bert-base-uncased

## Training procedure

* lowercasing the text
* removing characters outside the ASCII range `\x00`-`\x7F`
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)

Below is a code snippet with more details.

```python
import re
from nltk.tokenize import TreebankWordTokenizer

value = "An example sentence from a PubMed abstract."  # raw input text
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)        # collapse line breaks
value = re.sub(r'[^\x00-\x7F]+', ' ', value)  # drop non-ASCII characters
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)  # re-attach possessive 's
```

### BibTeX entry and citation info

```bibtex
@InProceedings{peng2019transfer,
  author    = {Yifan Peng and Shankai Yan and Zhiyong Lu},
  title     = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
  booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
  year      = {2019},
  pages     = {58--65},
}
```
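### Loading sketch

A minimal, hedged sketch of loading this checkpoint with 🤗 Transformers, assuming the weights expose a standard BERT configuration (the official repository linked above remains the authoritative usage reference):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode one preprocessed PubMed-style sentence and take the [CLS] vector
inputs = tokenizer("breast cancer risk is increased by brca1 mutations .", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # shape: (1, 768)
```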
bartowski/stable-code-instruct-3b-GGUF
bartowski
"2024-03-25T21:39:06Z"
6,087
15
transformers
[ "transformers", "gguf", "causal-lm", "code", "text-generation", "en", "license:other", "model-index", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-25T21:34:43Z"
--- license: other language: - en tags: - causal-lm - code metrics: - code_eval library_name: transformers model-index: - name: stabilityai/stable-code-instruct-3b results: - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Python) metrics: - name: pass@1 type: pass@1 value: 32.4 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (C++) metrics: - name: pass@1 type: pass@1 value: 30.9 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Java) metrics: - name: pass@1 type: pass@1 value: 32.1 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (JavaScript) metrics: - name: pass@1 type: pass@1 value: 32.1 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (PHP) metrics: - name: pass@1 type: pass@1 value: 24.2 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Rust) metrics: - name: pass@1 type: pass@1 value: 23.0 verified: false quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp Quantizations of stable-code-instruct-3b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2440">b2440</a> for quantization. Original model: https://huggingface.co/stabilityai/stable-code-instruct-3b Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [stable-code-instruct-3b-Q8_0.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q8_0.gguf) | Q8_0 | 2.97GB | Extremely high quality, generally unneeded but max available quant. | | [stable-code-instruct-3b-Q6_K.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q6_K.gguf) | Q6_K | 2.29GB | Very high quality, near perfect, *recommended*. | | [stable-code-instruct-3b-Q5_K_M.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q5_K_M.gguf) | Q5_K_M | 1.99GB | High quality, very usable. | | [stable-code-instruct-3b-Q5_K_S.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q5_K_S.gguf) | Q5_K_S | 1.94GB | High quality, very usable. | | [stable-code-instruct-3b-Q5_0.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q5_0.gguf) | Q5_0 | 1.94GB | High quality, older format, generally not recommended. | | [stable-code-instruct-3b-Q4_K_M.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q4_K_M.gguf) | Q4_K_M | 1.70GB | Good quality, similar to 4.25 bpw. | | [stable-code-instruct-3b-Q4_K_S.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q4_K_S.gguf) | Q4_K_S | 1.62GB | Slightly lower quality with small space savings. 
| | [stable-code-instruct-3b-IQ4_NL.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-IQ4_NL.gguf) | IQ4_NL | 1.61GB | Good quality, similar to Q4_K_S, new method of quanting. | | [stable-code-instruct-3b-IQ4_XS.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-IQ4_XS.gguf) | IQ4_XS | 1.53GB | Decent quality, new method with similar performance to Q4. | | [stable-code-instruct-3b-Q4_0.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q4_0.gguf) | Q4_0 | 1.60GB | Decent quality, older format, generally not recommended. | | [stable-code-instruct-3b-IQ3_M.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-IQ3_M.gguf) | IQ3_M | 1.31GB | Medium-low quality, new method with decent performance. | | [stable-code-instruct-3b-IQ3_S.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-IQ3_S.gguf) | IQ3_S | 1.25GB | Lower quality, new method with decent performance, recommended over Q3 quants. | | [stable-code-instruct-3b-Q3_K_L.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q3_K_L.gguf) | Q3_K_L | 1.50GB | Lower quality but usable, good for low RAM availability. | | [stable-code-instruct-3b-Q3_K_M.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q3_K_M.gguf) | Q3_K_M | 1.39GB | Even lower quality. | | [stable-code-instruct-3b-Q3_K_S.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q3_K_S.gguf) | Q3_K_S | 1.25GB | Low quality, not recommended. | | [stable-code-instruct-3b-Q2_K.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q2_K.gguf) | Q2_K | 1.08GB | Extremely low quality, *not* recommended. | Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
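If you prefer to script the download of a single file, here is a minimal sketch using `huggingface_hub` (the `Q4_K_M` filename is just one example from the table above):

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Fetch one quant file rather than the whole repo
path = hf_hub_download(
    repo_id="bartowski/stable-code-instruct-3b-GGUF",
    filename="stable-code-instruct-3b-Q4_K_M.gguf",
)
print(path)  # local path of the cached GGUF file
```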
TencentARC/t2i-adapter-depth-zoe-sdxl-1.0
TencentARC
"2023-09-08T02:04:56Z"
6,086
22
diffusers
[ "diffusers", "safetensors", "art", "t2i-adapter", "image-to-image", "stable-diffusion-xl-diffusers", "stable-diffusion-xl", "arxiv:2302.08453", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:apache-2.0", "region:us" ]
image-to-image
"2023-09-03T14:40:32Z"
--- license: apache-2.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 tags: - art - t2i-adapter - image-to-image - stable-diffusion-xl-diffusers - stable-diffusion-xl --- # T2I-Adapter-SDXL - Depth-Zoe T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint. This checkpoint provides conditioning on depth for the StableDiffusionXL checkpoint. This was a collaboration between **Tencent ARC** and [**Hugging Face**](https://huggingface.co/). ## Model Details - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** Apache 2.0 - **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453). - **Model complexity:** | | SD-V1.4/1.5 | SD-XL | T2I-Adapter | T2I-Adapter-SDXL | | --- | --- |--- |--- |--- | | Parameters | 860M | 2.6B |77 M | 77/79 M | | - **Cite as:** @misc{ title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models}, author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie}, year={2023}, eprint={2302.08453}, archivePrefix={arXiv}, primaryClass={cs.CV} } ### Checkpoints | Model Name | Control Image Overview| Control Image Example | Generated Image Example | |---|---|---|---| |[TencentARC/t2i-adapter-canny-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"/></a>| |[TencentARC/t2i-adapter-sketch-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-sketch-sdxl-1.0)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"/></a>| |[TencentARC/t2i-adapter-lineart-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0)<br/> *Trained with lineart edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"><img width="64" 
src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"/></a>| |[TencentARC/t2i-adapter-depth-midas-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>| |[TencentARC/t2i-adapter-depth-zoe-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0)<br/> *Trained with Zoe depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"/></a>| |[TencentARC/t2i-adapter-openpose-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-openpose-sdxl-1.0)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"/></a>| ## Example To get started, first install the required dependencies: ```bash pip install -U git+https://github.com/huggingface/diffusers.git pip install -U controlnet_aux==0.0.7 timm==0.6.12 # for conditioning models and detectors pip install transformers accelerate safetensors ``` 1. Images are first downloaded into the appropriate *control image* format. 2. The *control image* and *prompt* are passed to the [`StableDiffusionXLAdapterPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L125). Let's have a look at a simple example using the [Depth-zoe Adapter](https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0). 
- Dependency

```py
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler, AutoencoderKL
from diffusers.utils import load_image
from controlnet_aux import ZoeDetector
import torch

# load adapter
adapter = T2IAdapter.from_pretrained(
  "TencentARC/t2i-adapter-depth-zoe-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# load euler_a scheduler
model_id = 'stabilityai/stable-diffusion-xl-base-1.0'
euler_a = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
  model_id, vae=vae, adapter=adapter, scheduler=euler_a, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()

# load the Zoe depth detector used to build the condition image
zoe_depth = ZoeDetector.from_pretrained(
  "valhalla/t2iadapter-aux-models", filename="zoed_nk.pth", model_type="zoedepth_nk"
).to("cuda")
```

- Condition Image

```py
url = "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_zeo.jpg"
image = load_image(url)
image = zoe_depth(image, gamma_corrected=True, detect_resolution=512, image_resolution=1024)
```

<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"/></a>

- Generation

```py
prompt = "A photo of an orchid, 4k photo, highly detailed"
negative_prompt = "anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured"
gen_images = pipe(
  prompt=prompt,
  negative_prompt=negative_prompt,
  image=image,
  num_inference_steps=30,
  adapter_conditioning_scale=1,
  guidance_scale=7.5,
).images[0]
gen_images.save('out_zoe.png')
```

<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"/></a>

### Training

Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/t2i_adapter/README_sdxl.md).

The model is trained on 3M high-resolution image-text pairs from LAION-Aesthetics V2 with:

- Training steps: 25000
- Batch size: Data parallel with a single GPU batch size of `16` for a total batch size of `256`.
- Learning rate: Constant learning rate of `1e-5`.
- Mixed precision: fp16
sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base
sentence-transformers
"2024-05-07T15:47:28Z"
6,085
0
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---

# sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base

This is a port of the [DPR Model](https://github.com/facebookresearch/DPR) to a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


def cls_pooling(model_output, attention_mask):
    # DPR uses the [CLS] token embedding as the sentence representation
    return model_output[0][:, 0]


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base')
model = AutoModel.from_pretrained('sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 509, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

Have a look at: [DPR Model](https://github.com/facebookresearch/DPR)
shanchen/llama3-8B-slerp-med-chinese
shanchen
"2024-04-30T20:01:48Z"
6,085
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "winninghealth/WiNGPT2-Llama-3-8B-Base", "johnsnowlabs/JSL-MedLlama-3-8B-v1.0", "zh", "en", "fr", "base_model:winninghealth/WiNGPT2-Llama-3-8B-Base", "base_model:johnsnowlabs/JSL-MedLlama-3-8B-v1.0", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-30T19:48:40Z"
---
tags:
- merge
- mergekit
- lazymergekit
- winninghealth/WiNGPT2-Llama-3-8B-Base
- johnsnowlabs/JSL-MedLlama-3-8B-v1.0
base_model:
- winninghealth/WiNGPT2-Llama-3-8B-Base
- johnsnowlabs/JSL-MedLlama-3-8B-v1.0
license: llama3
language:
- zh
- en
- fr
---

# llama3-8B-slerp-med-chinese

llama3-8B-slerp-med-chinese is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [winninghealth/WiNGPT2-Llama-3-8B-Base](https://huggingface.co/winninghealth/WiNGPT2-Llama-3-8B-Base)
* [johnsnowlabs/JSL-MedLlama-3-8B-v1.0](https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-8B-v1.0)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: winninghealth/WiNGPT2-Llama-3-8B-Base
        layer_range: [0,32]
      - model: johnsnowlabs/JSL-MedLlama-3-8B-v1.0
        layer_range: [0,32]
merge_method: slerp
base_model: winninghealth/WiNGPT2-Llama-3-8B-Base
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.5, 0.5, 1]
    - filter: mlp
      value: [1, 0.5, 0.5, 0.5, 0]
    - value: 0.5
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "shanchen/llama3-8B-slerp-med-chinese"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
yaniseuranova/setfit-rag-hybrid-search-query-router
yaniseuranova
"2024-06-25T14:28:07Z"
6,078
1
setfit
[ "setfit", "safetensors", "xlm-roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:BAAI/bge-m3", "model-index", "region:us" ]
text-classification
"2024-05-28T13:03:44Z"
--- base_model: BAAI/bge-m3 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: How does technology impact our daily lives and what benefits can it bring to various activities? - text: How do organizations effectively deploy and manage machine learning algorithms to drive business value? - text: What are the key considerations for organizing and managing computer lab resources and tracking their status? - text: How can batch processing improve the efficiency of data lake operations? - text: What is the purpose of setting up a CUPS on a server? inference: true model-index: - name: SetFit with BAAI/bge-m3 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.8947368421052632 name: Accuracy --- # SetFit with BAAI/bge-m3 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 8192 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | lexical | <ul><li>"How does Happeo's search AI work to provide answers to user queries?"</li><li>'What are the primary areas of focus in the domain of Data Science and Analysis?'</li><li>'How can one organize a running event in Belgium?'</li></ul> | | semantic | <ul><li>'What changes can be made to a channel header?'</li><li>'How can hardware capabilities impact the accuracy of motion and object detections?'</li><li>'Who is responsible for managing guarantees and prolongations?'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8947 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("yaniseuranova/setfit-rag-hybrid-search-query-router") # Run inference preds = model("What is the purpose of setting up a CUPS on a server?") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 4 | 13.7407 | 28 | | Label | Training Sample Count | |:---------|:----------------------| | lexical | 44 | | semantic | 118 | ### Training Hyperparameters - batch_size: (8, 8) - num_epochs: (3, 3) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:--------:|:-------------:|:---------------:| | 0.0005 | 1 | 0.257 | - | | 0.0250 | 50 | 0.1944 | - | | 0.0499 | 100 | 0.2383 | - | | 0.0749 | 150 | 0.1279 | - | | 0.0999 | 200 | 0.0033 | - | | 0.1248 | 250 | 0.0021 | - | | 0.1498 | 300 | 0.0012 | - | | 0.1747 | 350 | 0.0008 | - | | 0.1997 | 400 | 0.0004 | - | | 0.2247 | 450 | 0.0006 | - | | 0.2496 | 500 | 0.0005 | - | | 0.2746 | 550 | 0.0003 | - | | 0.2996 | 600 | 0.0003 | - | | 0.3245 | 650 | 0.0003 | - | | 0.3495 | 700 | 0.0004 | - | | 0.3744 | 750 | 0.0005 | - | | 0.3994 | 800 | 0.0003 | - | | 0.4244 | 850 | 0.0002 | - | | 0.4493 | 900 | 0.0002 | - | | 0.4743 | 950 | 0.0002 | - | | 0.4993 | 1000 | 0.0001 | - | | 0.5242 | 1050 | 0.0001 | - | | 0.5492 | 1100 | 0.0001 | - | | 0.5741 | 1150 | 0.0002 | - | | 0.5991 | 1200 | 0.0001 | - | | 0.6241 | 1250 | 0.0003 | - | | 0.6490 | 1300 | 0.0002 | - | | 0.6740 | 1350 | 0.0001 | - | | 0.6990 | 1400 | 0.0003 | - | | 0.7239 | 1450 | 0.0001 | - | | 0.7489 | 1500 | 0.0002 | - | | 0.7738 | 1550 | 0.0001 | - | | 0.7988 | 1600 | 0.0002 | - | | 0.8238 | 1650 | 0.0002 | - | | 0.8487 | 1700 | 0.0002 | - | | 0.8737 | 1750 | 0.0002 | - | | 0.8987 | 1800 | 0.0003 | - | | 0.9236 | 1850 | 0.0001 | - | | 0.9486 | 1900 | 0.0001 | - | | 0.9735 | 1950 | 0.0001 | - | | 0.9985 | 2000 | 0.0001 | - | | **1.0** | **2003** | **-** | **0.1735** | | 1.0235 | 2050 | 0.0001 | - | | 1.0484 | 2100 | 0.0001 | - | | 1.0734 | 2150 | 0.0001 | - | | 1.0984 | 2200 | 0.0 | - | | 1.1233 | 2250 | 0.0001 | - | | 1.1483 | 2300 | 0.0001 | - | | 1.1732 | 2350 | 0.0001 | - | | 1.1982 | 2400 | 0.0002 | - | | 1.2232 | 2450 | 0.0001 | - | | 1.2481 | 2500 | 0.0 | - | | 1.2731 | 2550 | 0.0001 | - | | 1.2981 | 2600 | 0.0001 | - | | 1.3230 | 2650 | 0.0 | - | | 1.3480 | 2700 | 0.0001 | - | | 1.3729 | 2750 | 0.0001 | - | | 1.3979 | 2800 | 0.0001 | - | | 1.4229 | 2850 | 0.0 | - | | 1.4478 | 2900 | 0.0001 | - | | 1.4728 | 2950 | 0.0001 | - | | 1.4978 | 3000 | 0.0001 | - | | 1.5227 | 3050 | 0.0001 | - | | 1.5477 | 3100 | 0.0 | - | | 1.5726 | 3150 | 
0.0 | - | | 1.5976 | 3200 | 0.0001 | - | | 1.6226 | 3250 | 0.0001 | - | | 1.6475 | 3300 | 0.0001 | - | | 1.6725 | 3350 | 0.0001 | - | | 1.6975 | 3400 | 0.0001 | - | | 1.7224 | 3450 | 0.0 | - | | 1.7474 | 3500 | 0.0002 | - | | 1.7723 | 3550 | 0.0001 | - | | 1.7973 | 3600 | 0.0 | - | | 1.8223 | 3650 | 0.0 | - | | 1.8472 | 3700 | 0.0001 | - | | 1.8722 | 3750 | 0.0 | - | | 1.8972 | 3800 | 0.0001 | - | | 1.9221 | 3850 | 0.0 | - | | 1.9471 | 3900 | 0.0 | - | | 1.9720 | 3950 | 0.0001 | - | | 1.9970 | 4000 | 0.0 | - | | 2.0 | 4006 | - | 0.2593 | | 2.0220 | 4050 | 0.0001 | - | | 2.0469 | 4100 | 0.0001 | - | | 2.0719 | 4150 | 0.0 | - | | 2.0969 | 4200 | 0.0001 | - | | 2.1218 | 4250 | 0.0 | - | | 2.1468 | 4300 | 0.0001 | - | | 2.1717 | 4350 | 0.0001 | - | | 2.1967 | 4400 | 0.0001 | - | | 2.2217 | 4450 | 0.0001 | - | | 2.2466 | 4500 | 0.0001 | - | | 2.2716 | 4550 | 0.0 | - | | 2.2966 | 4600 | 0.0 | - | | 2.3215 | 4650 | 0.0 | - | | 2.3465 | 4700 | 0.0001 | - | | 2.3714 | 4750 | 0.0001 | - | | 2.3964 | 4800 | 0.0002 | - | | 2.4214 | 4850 | 0.0001 | - | | 2.4463 | 4900 | 0.0001 | - | | 2.4713 | 4950 | 0.0 | - | | 2.4963 | 5000 | 0.0001 | - | | 2.5212 | 5050 | 0.0001 | - | | 2.5462 | 5100 | 0.0 | - | | 2.5711 | 5150 | 0.0001 | - | | 2.5961 | 5200 | 0.0 | - | | 2.6211 | 5250 | 0.0 | - | | 2.6460 | 5300 | 0.0 | - | | 2.6710 | 5350 | 0.0 | - | | 2.6960 | 5400 | 0.0 | - | | 2.7209 | 5450 | 0.0 | - | | 2.7459 | 5500 | 0.0 | - | | 2.7708 | 5550 | 0.0 | - | | 2.7958 | 5600 | 0.0001 | - | | 2.8208 | 5650 | 0.0 | - | | 2.8457 | 5700 | 0.0 | - | | 2.8707 | 5750 | 0.0 | - | | 2.8957 | 5800 | 0.0 | - | | 2.9206 | 5850 | 0.0 | - | | 2.9456 | 5900 | 0.0001 | - | | 2.9705 | 5950 | 0.0 | - | | 2.9955 | 6000 | 0.0 | - | | 3.0 | 6009 | - | 0.2738 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.6.1 - Transformers: 4.39.0 - PyTorch: 2.3.1+cu121 - Datasets: 2.18.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
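As a companion to the hyperparameters listed above, here is a hedged sketch of how they map onto the SetFit training API (the two-row dataset is a stand-in for the real lexical/semantic routing data; any `text`/`label` dataset works):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Stand-in data: replace with the actual query-routing examples
train_ds = Dataset.from_dict({
    "text": ["Who is responsible for managing guarantees and prolongations?",
             "How can one organize a running event in Belgium?"],
    "label": ["semantic", "lexical"],
})

model = SetFitModel.from_pretrained("BAAI/bge-m3")

# Mirrors the hyperparameters reported in this card
args = TrainingArguments(
    batch_size=(8, 8),
    num_epochs=(3, 3),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    load_best_model_at_end=True,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  eval_dataset=train_ds, metric="accuracy")  # eval set is a stand-in too
trainer.train()
print(trainer.evaluate())
```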
RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf
RichardErkhov
"2024-06-25T06:04:51Z"
6,073
0
null
[ "gguf", "region:us" ]
null
"2024-06-25T01:57:07Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gemma-2b-coder - GGUF - Model creator: https://huggingface.co/MAISAAI/ - Original model: https://huggingface.co/MAISAAI/gemma-2b-coder/ | Name | Quant method | Size | | ---- | ---- | ---- | | [gemma-2b-coder.Q2_K.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q2_K.gguf) | Q2_K | 1.08GB | | [gemma-2b-coder.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.IQ3_XS.gguf) | IQ3_XS | 1.16GB | | [gemma-2b-coder.IQ3_S.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.IQ3_S.gguf) | IQ3_S | 1.2GB | | [gemma-2b-coder.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q3_K_S.gguf) | Q3_K_S | 1.2GB | | [gemma-2b-coder.IQ3_M.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.IQ3_M.gguf) | IQ3_M | 1.22GB | | [gemma-2b-coder.Q3_K.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q3_K.gguf) | Q3_K | 1.29GB | | [gemma-2b-coder.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q3_K_M.gguf) | Q3_K_M | 1.29GB | | [gemma-2b-coder.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q3_K_L.gguf) | Q3_K_L | 1.36GB | | [gemma-2b-coder.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.IQ4_XS.gguf) | IQ4_XS | 1.4GB | | [gemma-2b-coder.Q4_0.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q4_0.gguf) | Q4_0 | 1.44GB | | [gemma-2b-coder.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.IQ4_NL.gguf) | IQ4_NL | 1.45GB | | [gemma-2b-coder.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q4_K_S.gguf) | Q4_K_S | 1.45GB | | [gemma-2b-coder.Q4_K.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q4_K.gguf) | Q4_K | 1.52GB | | [gemma-2b-coder.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q4_K_M.gguf) | Q4_K_M | 1.52GB | | [gemma-2b-coder.Q4_1.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q4_1.gguf) | Q4_1 | 1.56GB | | [gemma-2b-coder.Q5_0.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q5_0.gguf) | Q5_0 | 1.68GB | | [gemma-2b-coder.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q5_K_S.gguf) | Q5_K_S | 1.68GB | | [gemma-2b-coder.Q5_K.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q5_K.gguf) | Q5_K | 1.71GB | | [gemma-2b-coder.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q5_K_M.gguf) | Q5_K_M | 1.71GB | | [gemma-2b-coder.Q5_1.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q5_1.gguf) | Q5_1 | 1.79GB | | 
[gemma-2b-coder.Q6_K.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q6_K.gguf) | Q6_K | 1.92GB | | [gemma-2b-coder.Q8_0.gguf](https://huggingface.co/RichardErkhov/MAISAAI_-_gemma-2b-coder-gguf/blob/main/gemma-2b-coder.Q8_0.gguf) | Q8_0 | 2.49GB | Original model description: --- tags: - generated_from_trainer - code - coding - gemma model-index: - name: gemma-2b-coder results: [] license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms language: - code thumbnail: https://huggingface.co/mrm8488/gemma-2b-coder/resolve/main/logo.png datasets: - HuggingFaceH4/CodeAlpaca_20K pipeline_tag: text-generation --- <div style="text-align:center;width:250px;height:250px;"> <img src="https://huggingface.co/mrm8488/gemma-2b-coder/resolve/main/logo.png" alt="gemma coder logo""> </div> # Gemma Coder 👩‍💻 **Gemma 2B** fine-tuned on the **CodeAlpaca 20k instructions dataset** by using the method **QLoRA** with [PEFT](https://github.com/huggingface/peft) library. ## Model description 🧠 [Gemma-2b](https://huggingface.co/google/gemma-2b) Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ## Training and evaluation data 📚 [CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following data used for fine-tuning the Code Alpaca model. 
### Training hyperparameters ⚙

Training took 1h 40 min on a free Colab T4 GPU (16GB VRAM) with the following params:

```py
num_train_epochs=2,
per_device_train_batch_size=2,
per_device_eval_batch_size=1,
gradient_accumulation_steps=32,
learning_rate=2.5e-5,
optim="paged_adamw_8bit",
logging_steps=5,
seed=66,
load_best_model_at_end=True,
save_strategy="steps",
save_steps=50,
evaluation_strategy="steps",
eval_steps=50,
save_total_limit=2,
remove_unused_columns=True,
fp16=True,
bf16=False
```

### Training results 🗒️

| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 50   | 1.467800      | 1.450770        |
| 100  | 1.060000      | 1.064840        |
| 150  | 0.900200      | 0.922290        |
| 200  | 0.848400      | 0.879911        |
| 250  | 0.838100      | 0.867354        |

### Eval results 📊

WIP

### Example of usage 👩‍💻

I recommend installing the following version of `torch`:

```sh
pip install "torch>=2.1.1" -U
```

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "MAISAAI/gemma-2b-coder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")


def generate(
    instruction,
    max_new_tokens=256,
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=2,
    **kwargs,
):
    system = "<bos><|system|>\nYou are a helpful coding assistant.<eos>\n"
    prompt = f"{system}<|user|>\n{instruction}<eos>\n<|assistant|>\n"
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            generation_config=generation_config,
            return_dict_in_generate=True,
            max_new_tokens=max_new_tokens,
            early_stopping=True
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s, skip_special_tokens=True)
    return output.split("<|assistant|>")[1]


instruction = """
Edit the following XML code to add a navigation bar to the top of a web page
<html>
<head>
    <title>Maisa</title>
</head>
"""
print(generate(instruction))
```

### Citation

```bibtex
@misc {maisa_ai_2024,
    author       = { {MAISA AI} },
    title        = { gemma-2b-coder (Revision e5e4e5b) },
    year         = 2024,
    url          = { https://huggingface.co/MAISAAI/gemma-2b-coder },
    doi          = { 10.57967/hf/2208 },
    publisher    = { Hugging Face }
}
```
RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf
RichardErkhov
"2024-06-26T13:45:39Z"
6,071
0
null
[ "gguf", "region:us" ]
null
"2024-06-26T11:56:14Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) L3-8B-Stheno-v3.1 - GGUF - Model creator: https://huggingface.co/Sao10K/ - Original model: https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [L3-8B-Stheno-v3.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q2_K.gguf) | Q2_K | 2.96GB | | [L3-8B-Stheno-v3.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.IQ3_XS.gguf) | IQ3_XS | 1.21GB | | [L3-8B-Stheno-v3.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.IQ3_S.gguf) | IQ3_S | 3.43GB | | [L3-8B-Stheno-v3.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q3_K_S.gguf) | Q3_K_S | 0.57GB | | [L3-8B-Stheno-v3.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.IQ3_M.gguf) | IQ3_M | 3.52GB | | [L3-8B-Stheno-v3.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q3_K.gguf) | Q3_K | 3.74GB | | [L3-8B-Stheno-v3.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [L3-8B-Stheno-v3.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [L3-8B-Stheno-v3.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [L3-8B-Stheno-v3.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q4_0.gguf) | Q4_0 | 4.34GB | | [L3-8B-Stheno-v3.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [L3-8B-Stheno-v3.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q4_K_S.gguf) | Q4_K_S | 3.03GB | | [L3-8B-Stheno-v3.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q4_K.gguf) | Q4_K | 2.55GB | | [L3-8B-Stheno-v3.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q4_K_M.gguf) | Q4_K_M | 0.26GB | | [L3-8B-Stheno-v3.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q4_1.gguf) | Q4_1 | 0.13GB | | [L3-8B-Stheno-v3.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q5_0.gguf) | Q5_0 | 0.01GB | | [L3-8B-Stheno-v3.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q5_K_S.gguf) | Q5_K_S | 0.03GB | | [L3-8B-Stheno-v3.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q5_K.gguf) | Q5_K | 0.0GB | | [L3-8B-Stheno-v3.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q5_K_M.gguf) | Q5_K_M | 0.0GB | | 
[L3-8B-Stheno-v3.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q5_1.gguf) | Q5_1 | 0.0GB | | [L3-8B-Stheno-v3.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q6_K.gguf) | Q6_K | 0.0GB | | [L3-8B-Stheno-v3.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.1-gguf/blob/main/L3-8B-Stheno-v3.1.Q8_0.gguf) | Q8_0 | 0.0GB | Original model description: --- language: - en license: cc-by-nc-4.0 --- <img src="https://w.forfun.com/fetch/cb/cba2205390e517bea1ea60ca0b491af4.jpeg" style="width: 80%; min-width: 400px; display: block; margin: auto;"> **Model: Llama-3-8B-Stheno-v3.1** > **NEWER VERSION IS OUT** <br> > > **Fixed common issues, [Try the updated v3.2 Instead!](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)** > > Really, do try version 3.2 This has been an experimental model I've been working on for a bit. Llama-3 was kind of difficult to work with. <br>I also had been hired to create a model for an Organisation, and I used the lessons I learnt from fine-tuning that one for this specific model. Unable to share that one though, unfortunately. <br>Made from outputs generated by Claude-3-Opus along with Human-Generated Data. Stheno-v3.1 \- A model made for 1-on-1 Roleplay ideally, but one that is able to handle scenarios, RPGs and storywriting fine. <br>\- Uncensored during actual roleplay scenarios. # I do not care for zero-shot prompting like what some people do. It is uncensored enough in actual usecases. <br>\- I quite like the prose and style for this model. #### Testing Notes <br>\- Known as L3-RP-v2.1 on Chaiverse, it did decently there [>1200 Elo] <br>\- Handles character personalities well. Great for 1 on 1 Roleplay sessions. <br>\- May need further token context & few-shot examples if using it as a Narrator / RPG Roleplaying session. It is able to handle them though. <br>\- A model leaning towards NSFW, mention explicitly in prompts if you want to steer away. [Avoid Negative Reinforcement] <br>\- Occasionally spits out leaking XML and nonsense. A regen / swipe instantly fixes that. <br>\- Unique / Varied Answers when Regenerating answers. Pretty cool? <br>\- Works best with *some* token context in the character card itself. A chef needs ingredients to cook, no? *** **Recommended Samplers:** ``` Temperature - 1.12 to 1.32 Min-P - 0.075 Top-K - 40 Repetition Penalty - 1.1 ``` **Stopping Strings:** ``` \n{{User}} # Or Equivalent, depending on Frontend <|eot_id|> <|end_of_text|> \n< # If there is leakage of XML tags in response. May happen Occasionally, Regenerate Answer as Needed. Happens rarely. ``` **Prompting Template - Llama-3-Instruct** ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {output}<|eot_id|> ``` **Basic Roleplay System Prompt** ``` You are an expert actor that can fully immerse yourself into any role given. You do not break character for any reason, even if someone tries addressing you as an AI or language model. Currently your role is {{char}}, which is described in detail below. As {{char}}, continue the exchange with {{user}}. ``` *** Support me here if you're interested. [Ko-Fi](https://ko-fi.com/sao10k) If not, that's fine too. Feedback would be nice. ``` Art by wada_kazu / わだかず (pixiv page private?) ``` ***
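Since this repo ships GGUF files, here is a minimal sketch of applying the recommended samplers and stopping strings above with `llama-cpp-python` (one possible runtime among many, not an endorsement; the filename is an example from the quant table, and the parameter names follow llama-cpp-python's completion API):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumed local path to one of the quant files listed above
llm = Llama(model_path="L3-8B-Stheno-v3.1.Q4_K_M.gguf", n_ctx=8192)

# Llama-3-Instruct template from the card, with a placeholder system/user turn
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You are an expert actor that can fully immerse yourself into any role given.<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\nHello!<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

out = llm(
    prompt,
    max_tokens=256,
    temperature=1.12,   # recommended range: 1.12 to 1.32
    min_p=0.075,
    top_k=40,
    repeat_penalty=1.1,
    stop=["<|eot_id|>", "<|end_of_text|>"],  # stopping strings from the card
)
print(out["choices"][0]["text"])
```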
allenai/OLMo-1B
allenai
"2024-06-25T18:54:36Z"
6,070
100
transformers
[ "transformers", "pytorch", "safetensors", "hf_olmo", "text-generation", "custom_code", "en", "dataset:allenai/dolma", "arxiv:2402.00838", "arxiv:2302.13971", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
"2024-01-26T06:18:45Z"
---
license: apache-2.0
datasets:
- allenai/dolma
language:
- en
---

<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# Model Card for OLMo 1B

<!-- Provide a quick summary of what the model is/does. -->

**For transformers versions v4.40.0 or newer, we suggest using [OLMo 1B HF](https://huggingface.co/allenai/OLMo-1B-hf) instead.**

OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models. The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset. We release all code, checkpoints, logs (coming soon), and details involved in training these models.

## Model Details

The core models released in this batch are the following:

| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|--------|---------|-------------|-----------------|----------------|
| [OLMo 1B](https://huggingface.co/allenai/OLMo-1B) | 3 Trillion | 16 | 2048 | 16 | 2048 |
| [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) | 2.5 Trillion | 32 | 4096 | 32 | 2048 |
| [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T) | 2 Trillion | 32 | 4096 | 32 | 2048 |

We are releasing many checkpoints for these models, for every 1000 training steps. The naming convention is `step1000-tokens4B`.

In particular, we focus on four revisions of the 7B models:

| Name | HF Repo | Model Revision | Tokens | Note |
|------------|---------|----------------|-------------------|------|
| OLMo 7B | [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B) | `main` | 2.5T | The base OLMo 7B model |
| OLMo 7B (not annealed) | [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B) | step556000-tokens2460B | 2.5T | learning rate not annealed to 0 |
| OLMo 7B-2T | [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B) | step452000-tokens2000B | 2T | OLMo checkpoint at 2T tokens |
| OLMo-7B-Twin-2T | [allenai/OLMo-7B-Twin-2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T) | `main` | 2T | Twin version on different hardware |

To load a specific model revision with HuggingFace, simply add the argument `revision`:

```python
from hf_olmo import OLMoForCausalLM  # pip install ai2-olmo

olmo = OLMoForCausalLM.from_pretrained("allenai/OLMo-1B", revision="step20000-tokens84B")
```

All revisions/branches are listed in the file `revisions.txt`. Or, you can access all the revisions for the models via the following code snippet:

```python
from huggingface_hub import list_repo_refs

out = list_repo_refs("allenai/OLMo-1B")
branches = [b.name for b in out.branches]
```

A few revisions were lost due to an error, but the vast majority are present.

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Allen Institute for AI (AI2)
- **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
- **Model type:** a Transformer-style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org`
- **Date cutoff:** Feb./March 2023 based on Dolma dataset version.

### Model Sources

<!-- Provide the basic links for the model.
-->

- **Project Page:** https://allenai.org/olmo
- **Repositories:**
    - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
    - Evaluation code: https://github.com/allenai/OLMo-Eval
    - Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** [Link](https://arxiv.org/abs/2402.00838)
- **Technical blog post:** https://blog.allenai.org/olmo-open-language-model-87ccfc95f580
- **W&B Logs:** https://wandb.ai/ai2-llm/OLMo-1B/reports/OLMo-1B--Vmlldzo2NzY1Njk1
<!-- - **Press release:** TODO -->

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Inference

Quickly get inference running with the following required installation:

```bash
pip install ai2-olmo
```

Now, proceed as usual with HuggingFace:

```python
from hf_olmo import OLMoForCausalLM, OLMoTokenizerFast

olmo = OLMoForCausalLM.from_pretrained("allenai/OLMo-1B")
tokenizer = OLMoTokenizerFast.from_pretrained("allenai/OLMo-1B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move inputs and model to CUDA
# inputs = {k: v.to('cuda') for k, v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is the first step to build natural language generation...'
```

You can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`). The quantized model is more sensitive to input types and CUDA handling, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.

Note that you may see the following error if `ai2-olmo` is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer.

```bash
    raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: hf_olmo. Run `pip install hf_olmo`
```

### Fine-tuning

Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.

1. Fine-tune with the OLMo repository:

```bash
torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
    --data.paths=[{path_to_data}/input_ids.npy] \
    --data.label_mask_paths=[{path_to_data}/label_mask.npy] \
    --load_path={path_to_checkpoint} \
    --reset_trainer_state
```

For more documentation, see the [GitHub readme](https://github.com/allenai/OLMo?tab=readme-ov-file#fine-tuning).

2. Further fine-tuning support is being developed in AI2's Open Instruct repository. Details are [here](https://github.com/allenai/open-instruct).

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

Core model results for the 7B model are found below.
| | [Llama 7B](https://arxiv.org/abs/2302.13971) | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | [MPT 7B](https://huggingface.co/mosaicml/mpt-7b) | **OLMo 7B** (ours) | | --------------------------------- | -------- | ---------- | --------- | ------ | ------- | | arc_challenge | 44.5 | 39.8 | 47.5 | 46.5 | 48.5 | | arc_easy | 57.0 | 57.7 | 70.4 | 70.5 | 65.4 | | boolq | 73.1 | 73.5 | 74.6 | 74.2 | 73.4 | | copa | 85.0 | 87.0 | 86.0 | 85.0 | 90 | | hellaswag | 74.5 | 74.5 | 75.9 | 77.6 | 76.4 | | openbookqa | 49.8 | 48.4 | 53.0 | 48.6 | 50.2 | | piqa | 76.3 | 76.4 | 78.5 | 77.3 | 78.4 | | sciq | 89.5 | 90.8 | 93.9 | 93.7 | 93.8 | | winogrande | 68.2 | 67.3 | 68.9 | 69.9 | 67.9 | | **Core tasks average** | 68.7 | 68.4 | 72.1 | 71.5 | 71.6 | | truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33 | 36.0 | | MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 | | GSM8k (mixed eval.) | 10.0 (8shot CoT) | 12.0 (8shot CoT) | 4.0 (5 shot) | 4.5 (5 shot) | 8.5 (8shot CoT) | | **Full average** | 57.8 | 59.3 | 59.2 | 59.3 | 59.8 | And for the 1B model: | task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | **OLMo 1B** (ours) | | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ | ----------------- | --------- | -------------------------------------- | ------- | | arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 | | arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 | | boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 | | copa | 50 | 84 | 72 | 78 | 79 | | hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 | | openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 | | piqa | 50 | 74 | 69.1 | 71.1 | 73.7 | | sciq | 25 | 94.7 | 86 | 90.5 | 88.1 | | winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 | | Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 | \*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging. ## Model Details ### Data For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation. ### Architecture OLMo 7B architecture with peer models for comparison. 
| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B | |------------------------|-------------------|---------------------|--------------------|--------------------|------------------| | d_model | 4096 | 4096 | 4096 | 4544 | 4096 | | num heads | 32 | 32 | 32 | 71 | 16 | | num layers | 32 | 32 | 32 | 32 | 32 | | MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 | | LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN | | pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE | | attention variant | full | GQA | full | MQA | MQA | | biases | none | none | in LN only | in LN only | none | | block type | sequential | sequential | sequential | parallel | parallel | | activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU | | sequence length | 2048 | 4096 | 2048 | 2048 | 2048 | | batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 | | batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M | | weight tying | no | no | no | no | yes | ### Hyperparameters AdamW optimizer parameters are shown below. | Size | Peak LR | Betas | Epsilon | Weight Decay | |------|------------|-----------------|-------------|--------------| | 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 | | 7B | 3.0E-4 | (0.9, 0.99) | 1.0E-5 | 0.1 | Optimizer settings comparison with peer models. | | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | |-----------------------|------------------|---------------------|--------------------|--------------------| | warmup steps | 5000 | 2000 | 2000 | 1000 | | peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 | | minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 | | weight decay | 0.1 | 0.1 | 0.1 | 0.1 | | beta1 | 0.9 | 0.9 | 0.9 | 0.99 | | beta2 | 0.95 | 0.95 | 0.95 | 0.999 | | epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 | | LR schedule | linear | cosine | cosine | cosine | | gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 | | gradient reduce dtype | FP32 | FP32 | FP32 | BF16 | | optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 | ## Environmental Impact OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML. A summary of the environmental impact. Further details are available in the paper. | | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg CO₂e/KWh) | Carbon Emissions (tCO₂eq) | |-----------|------------|-----------------------------|--------------------------------|---------------------------| | OLMo 7B Twin | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu)) | 135 MWh | 0* | 0* | | OLMo 7B | A100-40GB ([MosaicML](https://www.mosaicml.com)) | 104 MWh | 0.656 | 75.05 | ## Bias, Risks, and Limitations Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content. Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology. Otherwise, many facts from OLMo or any LLM will often not be true, so they should be checked. 
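To make the optimizer table concrete, below is a minimal sketch (not taken from the OLMo codebase) of configuring PyTorch's AdamW with the 1B values above; the model is a stand-in module, and the warmup and schedule logic used in actual training are omitted:

```python
import torch

# Hedged sketch: AdamW configured with the OLMo 1B hyperparameters from
# the table above. `model` is a placeholder for any torch.nn.Module.
model = torch.nn.Linear(2048, 2048)  # stand-in module

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=4.0e-4,          # peak LR for the 1B model
    betas=(0.9, 0.95),
    eps=1.0e-5,
    weight_decay=0.1,
)
```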
## Citation **BibTeX:** ``` @article{Groeneveld2023OLMo, title={OLMo: Accelerating the Science of Language Models}, author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh}, journal={Preprint}, year={2024} } ``` **APA:** Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint. ## Model Card Contact For errors in this model card, contact Nathan or Akshita, `{nathanl, akshitab} at allenai dot org`.
parler-tts/parler-tts-mini-expresso
parler-tts
"2024-05-21T17:17:04Z"
6,068
65
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "text-to-speech", "annotation", "en", "dataset:ylacombe/expresso", "dataset:reach-vb/jenny_tts_dataset", "dataset:blabble-io/libritts_r", "arxiv:2402.01912", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-to-speech
"2024-05-15T13:50:19Z"
---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
inference: false
datasets:
- ylacombe/expresso
- reach-vb/jenny_tts_dataset
- blabble-io/libritts_r
---

<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# Parler-TTS Mini: Expresso

<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler-tts-expresso">
  <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>

**Parler-TTS Mini: Expresso** is a fine-tuned version of [Parler-TTS Mini v0.1](https://huggingface.co/parler-tts/parler_tts_mini_v0.1) on the [Expresso](https://huggingface.co/datasets/ylacombe/expresso) dataset. It is a lightweight text-to-speech (TTS) model that can generate high-quality, natural-sounding speech. Compared to the original model, Parler-TTS Expresso provides superior control over **emotions** (happy, confused, laughing, sad) and **consistent voices** (Jerry, Thomas, Elisabeth, Talia).

It is part of the first release from the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code. Details for reproducing this entire training run are provided in the section [Training Procedure](#training-procedure).

## Usage

Using Expresso is as simple as "bonjour". Simply install the library from source:

```sh
pip install git+https://github.com/huggingface/parler-tts.git
```

You can then use the model with the following inference snippet:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer, set_seed
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-expresso").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-expresso")

prompt = "Why do you make me do these examples? They're *so* generic."
description = "Thomas speaks moderately slowly in a sad tone with emphasis and high quality audio."

input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

set_seed(42)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()

sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```

**Tips**:
* Specify the name of a male speaker (Jerry, Thomas) or female speaker (Talia, Elisabeth) for consistent voices
* The model can generate in a range of emotions, including: "happy", "confused", "default" (meaning no particular emotion conveyed), "laughing", "sad", "whisper", "emphasis"
* Include the term "high quality audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* To emphasise particular words, wrap them in asterisks (e.g. `*you*` in the example above) and include "emphasis" in the prompt

## Training Procedure

Expresso is a high-quality, expressive speech dataset that includes samples from four speakers (two male, two female).
By fine-tuning Parler-TTS Mini v0.1 on this dataset, we can train the model to follow emotion and speaker prompts.

To reproduce this fine-tuning run, we need to perform two steps:

1. Create text descriptions from the audio samples in the Expresso dataset
2. Train the model on the (text, audio) pairs

Step 1 is performed using the [DataSpeech](https://github.com/huggingface/dataspeech) library, and step 2 using [Parler-TTS](https://github.com/huggingface/parler-tts). Should you wish to use the pre-annotated dataset from our experiments, you can jump straight to [step 2](#step-2--fine-tune-the-model). For both, you can follow step 0 for getting set up.

### Step 0: Set-Up

We'll start by creating a fresh Python environment:

```sh
python3 -m venv parler-env
source parler-env/bin/activate
```

Next, install PyTorch according to the [official instructions](https://pytorch.org/get-started/locally/). We can then install DataSpeech and Parler-TTS sequentially:

```sh
git clone [email protected]:huggingface/dataspeech.git && cd dataspeech && pip install -r requirements.txt
cd ..
git clone https://github.com/huggingface/parler-tts.git && cd parler-tts && pip install -e ."[train]"
cd ..
```

You can link your Hugging Face account so that you can push model repositories to the Hub. This will allow you to save your trained models on the Hub so that you can share them with the community. Simply run the command:

```sh
git config --global credential.helper store
huggingface-cli login
```

And then enter an authentication token from https://huggingface.co/settings/tokens. Create a new token if you do not have one already. You should make sure that this token has "write" privileges.

You also have the option to configure Accelerate by running the following command. Note that you should set the number of GPUs you wish to use for training/inference, and also the data type (dtype) based on your device (e.g. bfloat16 on A100 GPUs, float16 on V100 GPUs, etc.):

```sh
accelerate config
```

Optionally, you can also login to Weights and Biases for automatic logging:

```sh
wandb login
```

### Step 1: Create Text Descriptions

Creating text descriptions for the dataset comprises three sub-stages from DataSpeech, which we'll cover below.

#### 1.A. Annotate the Expresso dataset

We'll use the [`main.py`](https://github.com/huggingface/dataspeech/blob/main/main.py) file from DataSpeech to label the following continuous variables:
- Speaking rate
- Signal-to-noise ratio (SNR)
- Reverberation
- Speech monotony

This can be done with the following command:

```sh
python ./dataspeech/main.py "ylacombe/expresso" \
  --configuration "default" \
  --text_column_name "text" \
  --audio_column_name "audio" \
  --cpu_num_workers 8 \
  --rename_column \
  --repo_id "expresso-tags"
```

Note that the script will be faster if you have GPUs at your disposal. It will automatically scale up to every GPU available in your environment. To control which GPUs the script runs on, set the `CUDA_VISIBLE_DEVICES` environment variable.

The resulting dataset will be pushed to the Hugging Face Hub under your Hugging Face handle. Mine was pushed to [reach-vb/expresso-tags](https://huggingface.co/datasets/reach-vb/expresso-tags). We can see that the dataset is annotated with continuous features like "speaking_rate" and "snr".

#### 1.B. Map annotations to text bins

The next step involves mapping the continuous variables to discrete ones. This is achieved by binning the continuous variables into buckets, and assigning each one a text label.
Since the ultimate goal here is to fine-tune the [Parler-TTS v0.1 checkpoint](https://huggingface.co/parler-tts/parler_tts_mini_v0.1) on the Expresso dataset, we want to stay consistent with the text bins of the dataset on which the original model was trained. To do this, we'll pass [`v01_bin_edges.json`](https://github.com/huggingface/dataspeech/blob/main/examples/tags_to_annotations/v01_bin_edges.json) as an input argument to our script, which holds the bin edges from the original dataset:

```sh
python ./dataspeech/scripts/metadata_to_text.py \
    "reach-vb/expresso-tags" \
    --repo_id "expresso-tags" \
    --configuration "default" \
    --cpu_num_workers "8" \
    --path_to_bin_edges "./examples/tags_to_annotations/v01_bin_edges.json" \
    --avoid_pitch_computation
```

Since we leverage the bins from the original dataset, the above script only takes a few seconds. The resulting dataset will be pushed to the Hugging Face Hub under your Hugging Face handle. Mine was pushed to [reach-vb/expresso-tags](https://huggingface.co/datasets/reach-vb/expresso-tags). You will notice that text bins such as "slightly noisy" and "quite monotone" have been added to the samples.

#### 1.C. Create natural language descriptions from those text bins

Now that we have text bins associated to the Expresso dataset, the next step is to create natural language descriptions. This involves passing the text bins to a large-language model (LLM), and having it generate corresponding descriptions.

There is a template [prompt creation script](https://github.com/huggingface/dataspeech/blob/main/scripts/run_prompt_creation.py) in Parler-TTS which can be used to generate descriptions from the features tagged in [step 1.A](#1a-annotate-the-expresso-dataset) (reverberation, noise, speaking rate, etc). However, not all of these features are relevant for the Expresso dataset. For instance, Expresso was recorded in a professional recording studio, so all the samples are high quality. Thus, we chose to create prompts with the following subset of features:

1. Name: we mapped the speaker ids (ex1, ex2, ex3, ex4) to unique speaker names (Jerry, Elisabeth, Thomas, Talia). This encourages the model to learn specific speakers from the training data
2. Emotion: we include the emotion provided in the Expresso dataset
3. Speaking rate: we use the pre-computed text bins from the previous step
4. In addition, we also hard-coded the quality of the audio to be "very high-quality", given the studio recording conditions.

As an example, if we passed:
1. Speaker: Jerry
2. Emotion: confused
3. Speaking rate: moderate speed

We would expect to generate a sample along the lines of: "Jerry speaks with a confused tone and at a moderate speed with high quality audio."

The modified prompt creation script can be found in this repository.
You can download this script with the following Python command:

```python
import os
from huggingface_hub import hf_hub_download

# `local_dir` must be a directory; download there, then rename the file so it
# matches the launch command below
path = hf_hub_download(repo_id="parler-tts/parler-tts-mini-expresso", filename="run_prompt_creation.py", local_dir="./dataspeech")
os.rename(path, "./dataspeech/run_prompt_creation_expresso.py")
```

You can then launch prompt creation using the [Mistral Instruct 7B](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model with the following command:

```sh
accelerate launch ./dataspeech/run_prompt_creation_expresso.py \
  --dataset_name "reach-vb/expresso-tags" \
  --dataset_config_name "default" \
  --model_name_or_path "mistralai/Mistral-7B-Instruct-v0.2" \
  --per_device_eval_batch_size 32 \
  --attn_implementation "sdpa" \
  --dataloader_num_workers 8 \
  --output_dir "./tmp_expresso" \
  --load_in_4bit \
  --push_to_hub \
  --hub_dataset_id "expresso-tagged-w-speech-mistral" \
  --preprocessing_num_workers 16
```

Note that the Mistral model is gated, so you should ensure you have accepted the terms-of-use from the [model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).

You can find the annotated dataset under [reach-vb/expresso-tagged-w-speech-mistral](https://huggingface.co/datasets/reach-vb/expresso-tagged-w-speech-mistral), where you'll find sensible descriptions from the features that we passed.

This step generally demands more resources and time and should use one or many GPUs. Scaling to multiple GPUs using [distributed data parallelism (DDP)](https://pytorch.org/tutorials/beginner/ddp_series_theory.html) is trivial: simply run `accelerate config` and select the multi-GPU option, specifying the IDs of the GPUs you wish to use. The above script can then be run using DDP with no code changes.

If you are resource constrained and need to use a smaller model, [Gemma 2B](https://huggingface.co/google/gemma-2b-it) is an excellent choice.

### Step 2: Fine-Tune the Model

Fine-tuning is performed using the Parler-TTS training script [run_parler_tts_training.py](https://github.com/huggingface/parler-tts/blob/main/training/run_parler_tts_training.py). It is the same script used to pre-train the model, and can be used for fine-tuning without any code changes.

To preserve the model's ability to generate speech with generic voice descriptions, such as in the style of [Parler-TTS Mini v0.1](https://huggingface.co/parler-tts/parler_tts_mini_v0.1), we fine-tuned the model on a combination of three datasets, including the test split of LibriTTS-R:

1. [Expresso](https://huggingface.co/datasets/ylacombe/expresso)
2. [Jenny](https://huggingface.co/datasets/reach-vb/jenny_tts_dataset)
3. [LibriTTS-R](https://huggingface.co/datasets/blabble-io/libritts_r)

This was achieved through the following command:

```sh
accelerate launch ./training/run_parler_tts_training.py \
    --model_name_or_path "parler-tts/parler_tts_mini_v0.1" \
    --feature_extractor_name "parler-tts/dac_44khZ_8kbps" \
    --description_tokenizer_name "parler-tts/parler_tts_mini_v0.1" \
    --prompt_tokenizer_name "parler-tts/parler_tts_mini_v0.1" \
    --report_to "wandb" \
    --overwrite_output_dir true \
    --train_dataset_name "ylacombe/expresso+reach-vb/jenny_tts_dataset+blabble-io/libritts_r+blabble-io/libritts_r" \
    --train_metadata_dataset_name "reach-vb/expresso-tagged-w-speech-mistral-v3+ylacombe/jenny-tts-10k-tagged+parler-tts/libritts_r_tags_tagged_10k_generated+parler-tts/libritts_r_tags_tagged_10k_generated" \
    --train_dataset_config_name "read+default+clean+other" \
    --train_split_name "train+train[:20%]+test.clean+test.other" \
    --eval_dataset_name "ylacombe/expresso+reach-vb/jenny_tts_dataset+blabble-io/libritts_r+blabble-io/libritts_r" \
    --eval_metadata_dataset_name "reach-vb/expresso-tagged-w-speech-mistral-v3+ylacombe/jenny-tts-10k-tagged+parler-tts/libritts_r_tags_tagged_10k_generated+parler-tts/libritts_r_tags_tagged_10k_generated" \
    --eval_dataset_config_name "read+default+clean+other" \
    --eval_split_name "train+train[:20%]+test.clean+test.other" \
    --max_eval_samples 8 \
    --per_device_eval_batch_size 16 \
    --target_audio_column_name "audio" \
    --description_column_name "text_description" \
    --prompt_column_name "text" \
    --max_duration_in_seconds 30.0 \
    --min_duration_in_seconds 2.0 \
    --max_text_length 400 \
    --preprocessing_num_workers 2 \
    --do_train true \
    --num_train_epochs 8 \
    --gradient_accumulation_steps 8 \
    --gradient_checkpointing true \
    --per_device_train_batch_size 16 \
    --learning_rate 0.00008 \
    --adam_beta1 0.9 \
    --adam_beta2 0.99 \
    --weight_decay 0.01 \
    --lr_scheduler_type "cosine" \
    --warmup_steps 250 \
    --logging_steps 2 \
    --freeze_text_encoder true \
    --audio_encoder_per_device_batch_size 4 \
    --dtype "bfloat16" \
    --seed 456 \
    --output_dir "./parler-tts-mini-expresso" \
    --temporary_save_to_disk "./audio_code_tmp" \
    --save_to_disk "./tmp_dataset_audio" \
    --dataloader_num_workers 4 \
    --do_eval \
    --predict_with_generate \
    --include_inputs_for_metrics \
    --group_by_length true
```

On a single 80GB A100 GPU, training took approximately 1.5 hours and returned a final evaluation loss of 4.0. Again, the script can be configured for multiple GPUs by running `accelerate config` from the command line; no further code changes are required.

Training performance is quite sensitive to the learning rate and number of epochs: you should tune these according to your task and the size of your dataset. In our experiments, we found the best performance to occur after 8 epochs of training with a learning rate of 8e-5.

If you followed these steps to the end: congratulations! You should now have a fine-tuned model you can use for your downstream applications using the [inference code-example](#usage) above. You can try substituting your own dataset, or run training using a single-speaker dataset, like the [Jenny example](https://colab.research.google.com/github/ylacombe/scripts_and_notebooks/blob/main/Finetuning_Parler_TTS_on_a_single_speaker_dataset.ipynb).

## Motivation

Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
Contrary to other TTS models, Parler-TTS is a **fully open-source** release. All datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.

Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.

## Citation

If you found this repository useful, please consider citing this work and also the original Stability AI paper:

```
@misc{lacombe-etal-2024-parler-tts,
  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
  title = {Parler-TTS},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```

```
@misc{lyth2024natural,
  title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
  author={Dan Lyth and Simon King},
  year={2024},
  eprint={2402.01912},
  archivePrefix={arXiv},
  primaryClass={cs.SD}
}
```

## License

This model is permissively licensed under the Apache 2.0 license.
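As a closing illustration, here is a hedged sketch that sweeps the documented emotions with a fixed speaker, reusing the same API as the Usage snippet above; the prompt text and output file names are arbitrary choices on my part:

```python
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer, set_seed
import soundfile as sf

# Hedged sketch: generate the same line in several of the documented emotion
# styles with the speaker Talia. The description strings simply recombine the
# speaker/emotion vocabulary listed in the Tips section.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-expresso").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-expresso")

prompt = "Remember to save your work before the battery runs out."
for emotion in ["happy", "confused", "laughing", "sad"]:
    description = f"Talia speaks in a {emotion} tone with high quality audio."
    input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
    prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    set_seed(42)
    generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
    sf.write(f"expresso_{emotion}.wav", generation.cpu().numpy().squeeze(), model.config.sampling_rate)
```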
mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF
mradermacher
"2024-06-15T08:29:06Z"
6,068
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "ja", "en", "base_model:nitky/Oumuamua-7b-instruct-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-15T05:51:24Z"
--- base_model: nitky/Oumuamua-7b-instruct-v2 language: - ja - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/nitky/Oumuamua-7b-instruct-v2 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 1.8 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-Q2_K.gguf) | i1-Q2_K | 2.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.7 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-Q4_0.gguf) | i1-Q4_0 | 4.3 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.3 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF/resolve/main/Oumuamua-7b-instruct-v2.i1-Q6_K.gguf) | i1-Q6_K | 6.1 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
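As a quick illustration, the sketch below fetches the quant marked "fast, recommended" above with `huggingface_hub`; where you point the resulting path (llama.cpp, llama-cpp-python, etc.) depends on your setup:

```python
from huggingface_hub import hf_hub_download

# Hedged sketch: download the Q4_K_M imatrix quant from this repo. The
# returned local path can then be passed to any GGUF runtime.
path = hf_hub_download(
    repo_id="mradermacher/Oumuamua-7b-instruct-v2-i1-GGUF",
    filename="Oumuamua-7b-instruct-v2.i1-Q4_K_M.gguf",
)
print(path)
```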
yiyanghkust/finbert-fls
yiyanghkust
"2022-06-10T23:20:05Z"
6,057
21
transformers
[ "transformers", "pytorch", "bert", "text-classification", "financial-text-analysis", "forward-looking-statement", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-05-12T01:33:03Z"
--- language: "en" tags: - financial-text-analysis - forward-looking-statement widget: - text: "We expect the age of our fleet to enhance availability and reliability due to reduced downtime for repairs. " --- Forward-looking statements (FLS) inform investors of managers’ beliefs and opinions about firm's future events or results. Identifying forward-looking statements from corporate reports can assist investors in financial analysis. FinBERT-FLS is a FinBERT model fine-tuned on 3,500 manually annotated sentences from Management Discussion and Analysis section of annual reports of Russell 3000 firms. **Input**: A financial text. **Output**: Specific-FLS , Non-specific FLS, or Not-FLS. # How to use You can use this model with Transformers pipeline for forward-looking statement classification. ```python # tested in transformers==4.18.0 from transformers import BertTokenizer, BertForSequenceClassification, pipeline finbert = BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-fls',num_labels=3) tokenizer = BertTokenizer.from_pretrained('yiyanghkust/finbert-fls') nlp = pipeline("text-classification", model=finbert, tokenizer=tokenizer) results = nlp('We expect the age of our fleet to enhance availability and reliability due to reduced downtime for repairs.') print(results) # [{'label': 'Specific FLS', 'score': 0.77278733253479}] ``` Visit [FinBERT.AI](https://finbert.ai/) for more details on the recent development of FinBERT.
Efficient-Large-Model/VILA1.5-3b-s2
Efficient-Large-Model
"2024-05-03T14:31:50Z"
6,057
0
transformers
[ "transformers", "safetensors", "llava_llama", "VILA", "VLM", "text-generation", "arxiv:2312.07533", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-30T17:33:13Z"
---
license: cc-by-nc-4.0
library_name: transformers
pipeline_tag: text-generation
tags:
- VILA
- VLM
---

# VILA Model Card

## Model details

**Model type:**
VILA is a visual language model (VLM) pretrained with interleaved image-text data at scale, enabling multi-image VLM. VILA is deployable on the edge, including Jetson Orin and laptops, via AWQ 4-bit quantization through the TinyChat framework. We find: (1) image-text pairs are not enough, interleaved image-text is essential; (2) unfreezing the LLM during interleaved image-text pre-training enables in-context learning; (3) re-blending text-only instruction data is crucial to boost both VLM and text-only performance. VILA unveils appealing capabilities, including: multi-image reasoning, in-context learning, visual chain-of-thought, and better world knowledge.

**Model date:**
VILA1.5-3b-s2 was trained in May 2024.

**Paper or resources for more information:**
https://github.com/Efficient-Large-Model/VILA

```
@misc{lin2023vila,
      title={VILA: On Pre-training for Visual Language Models},
      author={Ji Lin and Hongxu Yin and Wei Ping and Yao Lu and Pavlo Molchanov and Andrew Tao and Huizi Mao and Jan Kautz and Mohammad Shoeybi and Song Han},
      year={2023},
      eprint={2312.07533},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## License

- The code is released under the Apache 2.0 license as found in the [LICENSE](./LICENSE) file.
- The pretrained weights are released under the [CC-BY-NC-SA-4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).
- The service is a research preview intended for non-commercial use only, and is subject to the following licenses and terms:
    - [Model License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA
    - [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI
    - [Dataset Licenses](https://github.com/Efficient-Large-Model/VILA/blob/main/data_prepare/LICENSE) for each one used during training.

**Where to send questions or comments about the model:**
https://github.com/Efficient-Large-Model/VILA/issues

## Intended use

**Primary intended uses:**
The primary use of VILA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

See [Dataset Preparation](https://github.com/Efficient-Large-Model/VILA/blob/main/data_prepare/README.md) for more details.

## Evaluation dataset

A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
Qwen/Qwen1.5-MoE-A2.7B-Chat
Qwen
"2024-04-30T07:45:42Z"
6,044
108
transformers
[ "transformers", "safetensors", "qwen2_moe", "text-generation", "chat", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-14T03:26:42Z"
---
license: other
license_name: tongyi-qianwen
license_link: >-
  https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# Qwen1.5-MoE-A2.7B-Chat

## Introduction

Qwen1.5-MoE is a transformer-based MoE decoder-only language model pretrained on a large amount of data. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen-moe/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).

## Model Details

Qwen1.5-MoE employs a Mixture of Experts (MoE) architecture, where the models are upcycled from dense language models. For instance, `Qwen1.5-MoE-A2.7B` is upcycled from `Qwen-1.8B`. It has 14.3B parameters in total and 2.7B activated parameters during runtime. While achieving comparable performance to `Qwen1.5-7B`, it requires only 25% of the training resources. We also observed that the inference speed is 1.74 times that of `Qwen1.5-7B`.

## Training details

We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.

## Requirements

The code of Qwen1.5-MoE has been merged into the latest Hugging Face `transformers`, and we advise you to build from source with the command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error:

```
KeyError: 'qwen2_moe'.
```

## Quickstart

Here is a code snippet with `apply_chat_template` that shows you how to load the tokenizer and model and how to generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B-Chat",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B-Chat")

prompt = "Give me a short introduction to large language model."

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

For quantized models, we advise you to use the GPTQ correspondents, namely `Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4`.

## Tips

* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.
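For the GPTQ checkpoint mentioned above, loading goes through the same API; a hedged sketch, assuming the quantized repo lives under the same Qwen namespace and that `auto-gptq` and `optimum` are installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: the Int4 GPTQ variant is loaded exactly like the full model;
# transformers dispatches to the GPTQ kernels via auto-gptq/optimum.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4")
```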
af1tang/personaGPT
af1tang
"2023-08-20T20:10:05Z"
6,043
116
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "arxiv:1801.07243", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
---
tags:
- conversational
license: gpl-3.0
---

## A conversational agent with many personalities (PersonaGPT)

PersonaGPT is an open-domain conversational agent designed to do 2 tasks:

1. decoding _personalized_ responses based on input personality facts (the "persona" profile of the bot).
2. incorporating _turn-level goals_ into its responses through "action codes" (e.g., "talk about work", "ask about favorite music").

It builds on the [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) pretrained model based on the [GPT-2](https://github.com/openai/gpt-2) architecture. This model is trained on the [Persona-Chat](https://arxiv.org/pdf/1801.07243) dataset, with added special tokens to better distinguish between conversational history and personality traits for dyadic conversations. Furthermore, some active learning was used to train the model to do _controlled_ decoding using turn-level goals.

## Full Repo

Preprocessing, training and implementation details can be found in the [personaGPT repo](https://github.com/af1tang/personaGPT).

### How to Use

1. Load the model and define some helper functions.

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch

tokenizer = GPT2Tokenizer.from_pretrained("af1tang/personaGPT")
model = GPT2LMHeadModel.from_pretrained("af1tang/personaGPT")
if torch.cuda.is_available():
    model = model.cuda()

## utility functions ##
flatten = lambda l: [item for sublist in l for item in sublist]

def to_data(x):
    if torch.cuda.is_available():
        x = x.cpu()
    return x.data.numpy()

def to_var(x):
    if not torch.is_tensor(x):
        x = torch.Tensor(x)
    if torch.cuda.is_available():
        x = x.cuda()
    return x

def display_dialog_history(dialog_hx):
    for j, line in enumerate(dialog_hx):
        msg = tokenizer.decode(line)
        if j % 2 == 0:
            print(">> User: " + msg)
        else:
            print("Bot: " + msg)
    print()

def generate_next(bot_input_ids, do_sample=True, top_k=10, top_p=.92,
                  max_length=1000, pad_token=tokenizer.eos_token_id):
    # use the function arguments rather than hard-coded values
    full_msg = model.generate(bot_input_ids, do_sample=do_sample,
                              top_k=top_k, top_p=top_p,
                              max_length=max_length, pad_token_id=pad_token)
    msg = to_data(full_msg.detach()[0])[bot_input_ids.shape[-1]:]
    return msg
```

2. Give your chatbot partner a set of personalities.

```python
# get personality facts for conversation
personas = []
for i in range(3):
    response = input(">> Fact %d: " % (i + 1)) + tokenizer.eos_token
    personas.append(response)
personas = tokenizer.encode(''.join(['<|p2|>'] + personas + ['<|sep|>'] + ['<|start|>']))
```

3. The first use of PersonaGPT is to do _personalized_ dialog generation. Use the following loop to interact with the model.

```python
# converse for 8 turns
dialog_hx = []
for step in range(8):
    # encode the user input
    user_inp = tokenizer.encode(input(">> User: ") + tokenizer.eos_token)
    # append to the chat history
    dialog_hx.append(user_inp)

    # generate a response while limiting the total chat history to 1000 tokens
    bot_input_ids = to_var([personas + flatten(dialog_hx)]).long()
    msg = generate_next(bot_input_ids)
    dialog_hx.append(msg)
    print("Bot: {}".format(tokenizer.decode(msg, skip_special_tokens=True)))
```

Example of personalized decoding:

| | Persona Facts |
|---|---|
| 1 | i am 64 years old but feel quite young. |
| 2 | i try to eat healthy but limit mcdonalds to once a week. |
| 3 | i regret working as a doctor for the last 20 years. |
| 4 | my secret hobby is making self-help youtube videos. |
| 5 | i've been to spain many times and wish i could go again. |

| Role | Response |
|---|---|
| Human | yo whats good fam? |
| Bot | hello, how are you doing? |
| Human | not bad what you doing today? |
| Bot | eating a lot healthier now that i'm in my 60s |
| Human | i'm more of a fast food guy |
| Bot | me too but i'm very healthy so i try to avoid it |
| Human | what do you do for fun? |
| Bot | i make videos for fun and eat very healthy |
| Human | nice what kind of videos? |
| Bot | i love making videos about self help stuff |
| Human | i think i need some help |
| Bot | what kind of help would you like to get |
| Human | i'm 21 and feel lost in life |
| Bot | i can relate, i'm only 65 |

4. The second use of PersonaGPT is to do _controlled response generation_. Use the following program to interact with the model.

```python
## available actions ##
action_space = ['ask about kids.', "ask about pets.", 'talk about work.',
                'ask about marital status.', 'talk about travel.',
                'ask about age and gender.', 'ask about hobbies.',
                'ask about favorite food.', 'talk about movies.',
                'talk about music.', 'talk about politics.']

# converse for 8 turns
dialog_hx = []
for step in range(8):
    # choose an action
    act = None
    while act not in action_space:
        display_dialog_history(dialog_hx)
        print()
        print(" actions: ")
        for k, v in enumerate(action_space):
            print(k, v)
        try:
            act = action_space[int(input(" input [0-10]: "))]
        except:
            act = None
    print()

    # format into prefix code
    action_prefix = tokenizer.encode(''.join(['<|act|> '] + [act] + ['<|p1|>'] + [] + ['<|sep|>'] + ['<|start|>']))
    bot_input_ids = to_var([action_prefix + flatten(dialog_hx)]).long()

    # generate query conditioned on action (using generate_next's defaults)
    msg = generate_next(bot_input_ids)
    dialog_hx.append(msg)

    # generate bot response
    bot_input_ids = to_var([personas + flatten(dialog_hx)]).long()
    msg = generate_next(bot_input_ids)
    dialog_hx.append(msg)

display_dialog_history(dialog_hx)
```

Example of controlled response generation:

| | Bot 2's Persona Facts |
|---|---|
| 1 | i am a quiet engineer. |
| 2 | i'm single and am looking for love. |
| 3 | sadly, i don't have any relatable hobbies. |
| 4 | luckily, however, i am tall and athletic. |
| 5 | on friday nights, i watch re-runs of the simpsons alone. |

| Role | Action | Response |
|---|---|---|
| Bot 1 | talk about movies | hello do you like movies? |
| Bot 2 | - | i love the simpsons. how about you? |
| Bot 1 | talk about music | i like the walking dead. do you play an instrument? |
| Bot 2 | - | yes i play the violin. |
| Bot 1 | ask about marital status | are you married or single? |
| Bot 2 | - | i am single but i am looking for love. |
| Bot 1 | talk about work | my job is working in a factory. what's your occupation? |
| Bot 2 | - | engineer. i'm very quiet so no one hears me. |
| Bot 1 | talk about hobbies | do you have any hobbies? |
| Bot 2 | - | i watch reruns of the simpsons. |
| Bot 1 | ask about favorite food | what's your favorite food? |
| Bot 2 | - | i love pizza. how about yourself? |
| Bot 1 | ask about pets | i also love pizza. do you like animals? |
| Bot 2 | - | i have two dogs. what is your occupation? |
| Bot 1 | talk about work | i'm a factory worker. what's your dream job? |
| Bot 2 | - | i'd love to be a writer one day. |
SolidSnacke/L3-8B-Lunaris-v1-i-GGUF
SolidSnacke
"2024-07-01T21:58:20Z"
6,043
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "text-generation", "en", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-01T18:19:20Z"
---
license: llama3
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- text-generation-inference
---

Links to the original model and quantization script:
- Sao10K/L3-8B-Lunaris-v1: https://huggingface.co/Sao10K/L3-8B-Lunaris-v1
- FantasiaFoundry/GGUF-Quantization-Script: https://huggingface.co/FantasiaFoundry/GGUF-Quantization-Script
h2oai/h2o-danube-1.8b-base
h2oai
"2024-05-02T18:34:51Z"
6,041
43
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "gpt", "llm", "large language model", "en", "arxiv:2401.16818", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-23T11:25:25Z"
---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- gpt
- llm
- large language model
---

## Summary

h2o-danube-1.8b-base is a foundation model trained by H2O.ai with 1.8 billion parameters. For details, please refer to our [Technical Report](https://arxiv.org/abs/2401.16818). We release three versions of this model:

| Model Name | Description |
|:-----------------------------------------------------------------------------------|:----------------|
| [h2oai/h2o-danube-1.8b-base](https://huggingface.co/h2oai/h2o-danube-1.8b-base) | Base model |
| [h2oai/h2o-danube-1.8b-sft](https://huggingface.co/h2oai/h2o-danube-1.8b-sft) | SFT tuned |
| [h2oai/h2o-danube-1.8b-chat](https://huggingface.co/h2oai/h2o-danube-1.8b-chat) | SFT + DPO tuned |

## Model Architecture

We adjust the Llama 2 architecture for a total of around 1.8b parameters. We use the original Llama 2 tokenizer with a vocabulary size of 32,000 and train our model up to a context length of 16,384. We incorporate the sliding window attention from Mistral with a size of 4,096.

The details of the model architecture are:

| Hyperparameter | Value |
|:----------------|:-------|
| n_layers | 24 |
| n_heads | 32 |
| n_query_groups | 8 |
| n_embd | 2560 |
| vocab size | 32000 |
| sequence length | 16384 |

## Usage

This is a pre-trained foundation model. For your task, you will likely want to perform application-specific fine-tuning. We also offer a chat fine-tuned version: [h2oai/h2o-danube-1.8b-chat](https://huggingface.co/h2oai/h2o-danube-1.8b-chat).

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.

```python
# pip install transformers==4.37.0
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("h2oai/h2o-danube-1.8b-base")
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2o-danube-1.8b-base",
    torch_dtype=torch.bfloat16,
)
model.cuda()

inputs = tokenizer("The Danube is the second longest river in Europe", return_tensors="pt").to(model.device)
res = model.generate(
    **inputs,
    max_new_tokens=38,
    do_sample=False,
)
print(tokenizer.decode(res[0], skip_special_tokens=True))
```

## Benchmarks

Commonsense, world-knowledge and reading comprehension tested in 0-shot:

| Benchmark | acc_n |
|:--------------|:--------:|
| ARC-easy | 62.29 |
| ARC-challenge | 35.84 |
| BoolQ | 65.81 |
| Hellaswag | 68.20 |
| OpenBookQA | 37.60 |
| PiQA | 76.93 |
| TriviaQA | 38.99 |
| Winogrande | 61.96 |

## Disclaimer

Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.