Dataset schema (column, type, observed range or cardinality):

| Column | Type | Range / Cardinality |
|---|---|---|
| modelId | string | lengths 5–122 |
| author | string | lengths 2–42 |
| last_modified | unknown | n/a |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | stringclasses | 245 values |
| tags | sequence | lengths 1–4.05k |
| pipeline_tag | stringclasses | 48 values |
| createdAt | unknown | n/a |
| card | string | lengths 1–901k |
Niggendar/sakurasakumaMixPony_v10
Niggendar
"2024-06-18T15:41:42Z"
1,562
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-18T11:54:49Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
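The quick-start section of the card above is empty. Since the repo tags indicate a `StableDiffusionXLPipeline` checkpoint in safetensors format, here is a minimal sketch assuming the standard diffusers SDXL text-to-image API applies to this checkpoint; the prompt is only a placeholder:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the checkpoint; the repo tags indicate an SDXL pipeline in safetensors format.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/sakurasakumaMixPony_v10",
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder prompt; Pony-derived checkpoints typically respond best to tag-style prompts.
image = pipe(prompt="cherry blossoms, scenic landscape, highly detailed").images[0]
image.save("output.png")
```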
NikolayKozloff/Viking-7B-Q5_K_M-GGUF
NikolayKozloff
"2024-06-29T18:53:19Z"
1,562
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "fi", "en", "da", "sv", "no", "nn", "is", "dataset:cerebras/SlimPajama-627B", "dataset:bigcode/starcoderdata", "dataset:mc4", "base_model:LumiOpen/Viking-7B", "license:apache-2.0", "region:us" ]
null
"2024-06-29T18:52:52Z"
--- base_model: LumiOpen/Viking-7B datasets: - cerebras/SlimPajama-627B - bigcode/starcoderdata - mc4 language: - fi - en - da - sv - 'no' - nn - is license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # NikolayKozloff/Viking-7B-Q5_K_M-GGUF This model was converted to GGUF format from [`LumiOpen/Viking-7B`](https://huggingface.co/LumiOpen/Viking-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/LumiOpen/Viking-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/Viking-7B-Q5_K_M-GGUF --hf-file viking-7b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/Viking-7B-Q5_K_M-GGUF --hf-file viking-7b-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/Viking-7B-Q5_K_M-GGUF --hf-file viking-7b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/Viking-7B-Q5_K_M-GGUF --hf-file viking-7b-q5_k_m.gguf -c 2048 ```
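For Python use, the same quantized file can be loaded through the `llama-cpp-python` bindings. A minimal sketch, assuming `llama-cpp-python` and `huggingface-hub` are installed:

```python
from llama_cpp import Llama

# Downloads viking-7b-q5_k_m.gguf from the Hub on first use, then loads it.
llm = Llama.from_pretrained(
    repo_id="NikolayKozloff/Viking-7B-Q5_K_M-GGUF",
    filename="viking-7b-q5_k_m.gguf",
    n_ctx=2048,
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```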
raquelsilveira/legalbertpt_fp
raquelsilveira
"2024-03-19T13:06:48Z"
1,560
1
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-05-02T20:39:50Z"
--- license: openrail --- # LegalBert-pt ## Introduction LegalBert-pt is a language model for the legal domain in the Portuguese language. The model was pre-trained to acquire domain specialization and can later be fine-tuned for specific tasks. Two versions of the model were created: one as a complement to the BERTimbau model, and the other from scratch. The effectiveness of the BERTimbau-based model was evident when analyzing the perplexity of the models. Experiments were also carried out on the tasks of identifying legal entities and classifying legal petitions. The results show that the domain-specific language models outperform the generic language model on all tasks, suggesting that specializing the language model for the legal domain is an important factor for improving the accuracy of learning algorithms. Keywords: Language model, Legal Bert pt br, Legal domain, Portuguese Language Model ## Available models |Model|Initial model|#Layers|#Params| |-|-|-|-| |LegalBert-pt SC| |12|110M| |LegalBert-pt FP| neuralmind/bert-base-portuguese-cased | 12 | 110M | ## Dataset To pretrain the various versions of the LegalBert-pt language model, we collected a total of 1.5 million legal documents in Portuguese from ten Brazilian courts. These documents consisted of four types: initial petitions, petitions, decisions, and sentences. The table below shows the distribution of these documents. The data were obtained from the Codex system of the Brazilian National Council of Justice (CNJ), which maintains the largest and most diverse set of legal texts in Brazilian Portuguese. As part of an agreement established with the researchers who authored this article, the CNJ provided these data for our research. |Data source|Number of documents|%| |-|-|-| |Court of Justice of the State of Ceará|80,504|5.37| |Court of Justice of the State of Piauí|90,514|6.03| |Court of Justice of the State of Rio de Janeiro|33,320|2.22| |Court of Justice of the State of Rondônia|971,615|64.77| |Federal Regional Court of the 3rd Region|70,196|4.68| |Federal Regional Court of the 5th Region|6,767|0.45| |Regional Labor Court of the 9th Region|16,133|1.08| |Regional Labor Court of the 11th Region|5,351|0.36| |Regional Labor Court of the 13th Region|155,567|10.37| |Regional Labor Court of the 23rd Region|70,033|4.67| |Total|1,500,000|100.00| ## Usage ```python from transformers import AutoTokenizer # or BertTokenizer from transformers import AutoModelForPreTraining # or BertForPreTraining, for loading pretraining heads from transformers import AutoModel # or BertModel, for BERT without pretraining heads model = AutoModelForPreTraining.from_pretrained('raquelsilveira/legalbertpt_fp') tokenizer = AutoTokenizer.from_pretrained('raquelsilveira/legalbertpt_fp') ``` ## Cite as Raquel Silveira, Caio Ponte, Vitor Almeida, Vládia Pinheiro, and Vasco Furtado. 2023. LegalBert-pt: A Pretrained Language Model for the Brazilian Portuguese Legal Domain. In Intelligent Systems: 12th Brazilian Conference, BRACIS 2023, Belo Horizonte, Brazil, September 25–29, 2023, Proceedings, Part III. Springer-Verlag, Berlin, Heidelberg, 268–282. https://doi.org/10.1007/978-3-031-45392-2_18
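Since the pipeline tag is `fill-mask`, the model can also be exercised directly through the transformers pipeline. A minimal sketch; the Portuguese sentence is only an illustrative example:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="raquelsilveira/legalbertpt_fp")

# [MASK] is the standard BERT mask token; the sentence is a made-up legal example.
for pred in unmasker("O réu foi condenado ao pagamento de [MASK] por danos morais."):
    print(f"{pred['token_str']}: {pred['score']:.3f}")
```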
facebook/convnext-base-224
facebook
"2023-06-13T19:40:09Z"
1,559
7
transformers
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # ConvNeXT (base-sized model) ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextImageProcessor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-base-224") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-224") inputs = processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
line-corporation/japanese-large-lm-3.6b
line-corporation
"2023-08-17T01:06:17Z"
1,559
75
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "ja", "dataset:wikipedia", "dataset:mc4", "dataset:cc100", "dataset:oscar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-21T00:48:05Z"
--- license: apache-2.0 datasets: - wikipedia - mc4 - cc100 - oscar language: - ja --- # japanese-large-lm-3.6b This repository provides a 3.6B-parameter Japanese language model trained by [LINE Corporation](https://linecorp.com/ja/). The [Tech Blog](https://engineering.linecorp.com/ja/blog/3.6-billion-parameter-japanese-language-model) explains the details. ## How to use ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed model = AutoModelForCausalLM.from_pretrained("line-corporation/japanese-large-lm-3.6b", torch_dtype=torch.float16) tokenizer = AutoTokenizer.from_pretrained("line-corporation/japanese-large-lm-3.6b", use_fast=False) generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0) set_seed(101) text = generator( "おはようございます、今日の天気は", max_length=30, do_sample=True, pad_token_id=tokenizer.pad_token_id, num_return_sequences=5, ) for t in text: print(t) # Example generated outputs: # [{'generated_text': 'おはようございます、今日の天気は雨模様ですね。梅雨のこの時期の 朝は洗濯物が乾きにくいなど、主婦にとっては悩みどころですね。 では、'}, # {'generated_text': 'おはようございます、今日の天気は晴れ。 気温は8°C位です。 朝晩は結構冷え込むようになりました。 寒くなってくると、...'}, # {'generated_text': 'おはようございます、今日の天気は曇りです。 朝起きたら雪が軽く積もっていた。 寒さもそれほどでもありません。 日中は晴れるみたいですね。'}, # {'generated_text': 'おはようございます、今日の天気は☁のち☀です。 朝の気温5°C、日中も21°Cと 暖かい予報です'}, # {'generated_text': 'おはようございます、今日の天気は晴天ですが涼しい1日です、気温は午後になり低くなり25°Cくらい、風も強いようですので、'}] ``` ## Model architecture | Model | Vocab size | Architecture | Position type | Layers | Hidden dim | Attention heads | | :---: | :--------: | :----------- | :-----------: | :----: | :--------: | :-------------: | | 1.7B | 51200 | GPT2 | Absolute | 24 | 2304 | 24 | | 3.6B | 51200 | GPTNeoX | RoPE | 30 | 3072 | 32 | ## Training Corpus Our training corpus consists of the Japanese portions of publicly available corpora such as C4, CC-100, and OSCAR. We also incorporated web texts crawled by our in-house system. The total size of our training corpus is about 650 GB. The trained model achieves a perplexity of 7.50 on the internal validation set of Japanese C4. ## Tokenization We use a SentencePiece tokenizer with a unigram language model and byte fallback. We **do not** apply pre-tokenization with a Japanese tokenizer, so users may directly feed raw sentences into the tokenizer. ## License [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
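Because no Japanese pre-tokenization is applied, raw text can be fed straight to the tokenizer. A minimal sketch of that point:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "line-corporation/japanese-large-lm-3.6b", use_fast=False
)

# Raw Japanese goes in directly; no external morphological analyzer is required.
encoded = tokenizer("おはようございます、今日の天気は")
print(encoded["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```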
kroonen/phi-2-GGUF
kroonen
"2024-01-06T15:20:51Z"
1,559
23
null
[ "gguf", "nlp", "code", "text-generation", "en", "license:mit", "region:us" ]
text-generation
"2023-12-16T02:06:58Z"
--- inference: false license: mit license_name: microsoft-research-license license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE language: - en pipeline_tag: text-generation tags: - nlp - code --- This is Microsoft Phi-2 in GGUF format: 4-bit (Q4_0) and 8-bit (Q8_0) quantizations plus the converted FP16 model. (Link to the original model: https://huggingface.co/microsoft/phi-2) *Disclaimer*: make sure you have a llama.cpp build from commit b9e74f9bca5fdf7d0a22ed25e7a9626335fdfa48 or later.
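The card names the quantization levels but gives no invocation. A minimal sketch with the llama.cpp CLI, assuming a recent build with Hugging Face download support; the file name `phi-2.Q4_0.gguf` is an assumption, so check the repo's file list for the exact names:

```bash
# Hypothetical file name; substitute the actual quant file from the repo.
llama-cli --hf-repo kroonen/phi-2-GGUF --hf-file phi-2.Q4_0.gguf -p "Write a function that reverses a string."
```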
qwp4w3hyb/Phi-3-medium-128k-instruct-iMat-GGUF
qwp4w3hyb
"2024-05-22T09:27:39Z"
1,558
4
null
[ "gguf", "nlp", "code", "microsoft", "phi", "instruct", "finetune", "imatrix", "importance matrix", "text-generation", "multilingual", "base_model:microsoft/Phi-3-medium-128k-instruct", "license:mit", "region:us" ]
text-generation
"2024-05-21T18:05:39Z"
--- license: mit license_link: >- https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation base_model: microsoft/Phi-3-medium-128k-instruct tags: - nlp - code - microsoft - phi - instruct - finetune - gguf - imatrix - importance matrix --- # Quant Infos - Requires the latest llama.cpp master - Quants were done with an importance matrix to reduce quantization loss - GGUF & imatrix generated from bf16 for "optimal" accuracy loss (some say this is snake oil, but it can't hurt) - Wide coverage of different GGUF quant types, from Q\_8\_0 down to IQ1\_S - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [201cc11afa0a1950e1f632390b2ac6c937a0d8f0](https://github.com/ggerganov/llama.cpp/commit/201cc11afa0a1950e1f632390b2ac6c937a0d8f0) - Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) multi-purpose dataset. ``` ./imatrix -c 512 -m $model_name-bf16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-bf16-gmerged.dat ``` # Original Model Card: ## Model Summary Phi-3-Medium-128K-Instruct is a 14B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family; the Medium version comes in two variants, [4k](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct), which is the context length (in tokens) that it can support. The model underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased robust, state-of-the-art performance among models of the same size and the next size up.
Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)| ## Intended Uses **Primary use cases** The model is intended for broad commercial and research use in English. The model provides uses for general-purpose AI systems and applications which require: 1) Memory/compute constrained environments 2) Latency-bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3-Medium-128k-Instruct has been integrated into the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3-Medium-128k-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai). ### Tokenizer Phi-3-Medium-128k-Instruct supports a vocabulary size of up to `32064` tokens.
The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3-Medium-128k-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follows: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>`. For a few-shot prompt, it can be formatted as follows: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippet shows how to quickly get started running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_id = "microsoft/Phi-3-medium-128k-instruct" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_id) messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: The Phi models are trained primarily on English text. Languages other than English will experience worse performance.
English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (e.g. housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (e.g. legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case-specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3-Medium-128k-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and direct preference optimization (DPO) to ensure alignment with human preferences and safety guidelines. * Inputs: Text. It is best suited for prompts using chat format.
* Context length: 128k tokens * GPUs: 512 H100-80G * Training time: 42 days * Training data: 4.8T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. * Release dates: The model weights were released on May 21, 2024. ### Datasets Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of 1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) high-quality chat-format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty and helpfulness. We focus on the quality of data that could potentially improve the reasoning ability of the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a Premier League game on a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning in smaller models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report). ## Benchmarks We report the results for Phi-3-Medium-128k-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106 (Chat). All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k-shot examples is listed per benchmark.
|Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------| |AGI Eval<br>5-shot|49.7|50.1|54.0|56.9|48.4|49.0|59.6| |MMLU<br>5-shot|76.6|73.8|76.2|80.2|71.4|66.7|84.0| |BigBench Hard<br>3-shot|77.9|74.1|81.8|80.4|68.3|75.6|87.7| |ANLI<br>7-shot|57.3|63.4|65.2|68.3|58.1|64.2|71.7| |HellaSwag<br>5-shot|81.6|78.0|79.0|82.6|78.8|76.2|88.3| |ARC Challenge<br>10-shot|91.0|86.9|91.3|93.0|87.4|88.3|95.6| |ARC Easy<br>10-shot|97.6|95.7|96.9|98.2|96.3|96.1|98.8| |BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3| |CommonsenseQA<br>10-shot|82.2|82.0|82.0|84.4|79.6|81.8|86.7| |MedQA<br>2-shot|67.6|59.2|67.9|78.5|63.4|58.2|83.7| |OpenBookQA<br>10-shot|87.2|86.8|88.6|91.8|86.0|86.4|93.4| |PIQA<br>5-shot|87.8|86.4|85.0|85.3|86.6|86.2|90.1| |Social IQA<br>5-shot|79.0|75.3|78.2|81.1|68.3|75.4|81.7| |TruthfulQA (MC2)<br>10-shot|74.3|57.8|67.4|81.9|67.7|72.6|85.2| |WinoGrande<br>5-shot|78.9|77.0|75.3|83.3|68.8|72.2|86.7| |TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3| |GSM8K Chain of Thought<br>8-shot|87.5|78.3|83.8|93.5|78.1|80.4|94.2| |HumanEval<br>0-shot|58.5|61.6|39.6|78.7|62.2|64.4|79.9| |MBPP<br>3-shot|73.8|68.9|70.7|81.3|77.8|73.2|86.7| |Average|77.3|75.0|76.3|82.5|74.3|75.4|85.2| We take a closer look at different categories across 80 public benchmark datasets at the table below: |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------| | Popular aggregated benchmark | 72.3 | 69.9 | 73.4 | 76.3 | 67.0 | 67.5 | 80.5 | | Reasoning | 83.2 | 79.3 | 81.5 | 86.7 | 78.3 | 80.4 | 89.3 | | Language understanding | 75.3 | 75.7 | 78.7 | 77.9 | 70.4 | 75.3 | 81.6 | | Code generation | 64.2 | 68.6 | 60.0 | 69.3 | 70.4 | 66.7 | 76.1 | | Math | 52.9 | 45.3 | 52.5 | 59.7 | 52.8 | 50.9 | 67.1 | | Factual knowledge | 47.5 | 60.3 | 60.6 | 52.4 | 63.4 | 54.6 | 45.9 | | Multilingual | 62.2 | 67.8 | 69.8 | 62.0 | 67.0 | 73.4 | 78.2 | | Robustness | 70.2 | 57.9 | 65.5 | 78.7 | 69.3 | 69.7 | 84.6 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128k](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) ## Cross Platform Support ONNX runtime ecosystem now supports Phi3 Medium models across platforms and hardware. Optimized phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktops GPUs (AMD, Intel, and NVIDIA). 
Along with DML, ONNX Runtime provides cross-platform support for Phi-3 Medium across a range of device types: CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
monologg/biobert_v1.1_pubmed
monologg
"2023-06-12T12:30:46Z"
1,557
7
transformers
[ "transformers", "pytorch", "jax", "safetensors", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
Entry not found
SakuraLLM/Sakura-7B-LNovel-v0.9-GGUF
SakuraLLM
"2024-06-26T14:35:42Z"
1,557
3
null
[ "gguf", "license:cc-by-nc-sa-4.0", "region:us" ]
null
"2024-01-15T08:55:33Z"
--- license: cc-by-nc-sa-4.0 ---
StanfordAIMI/GREEN-RadLlama2-7b
StanfordAIMI
"2024-05-16T23:52:45Z"
1,557
5
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:StanfordAIMI/RadLLaMA-7b", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-08T18:38:30Z"
--- license: llama2 base_model: StanfordAIMI/RadLLaMA-7b tags: - generated_from_trainer model-index: - name: GREEN results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # StanfordAIMI/GREEN This model is a fine-tuned version of [StanfordAIMI/RadLLaMA-7b](https://huggingface.co/StanfordAIMI/RadLLaMA-7b). It achieves the following results on the evaluation set: - Loss: 0.0644 ## Model description and Training procedure Please see the project website at https://stanford-aimi.github.io/green.html. ## Intended uses & limitations This model is fine-tuned to evaluate the difference between reference and candidate radiology reports for chest X-rays. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 32 - total_train_batch_size: 2048 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2634 | 0.64 | 25 | 0.2924 | | 0.1216 | 1.28 | 50 | 0.0898 | | 0.0833 | 1.92 | 75 | 0.0718 | | 0.062 | 2.56 | 100 | 0.0644 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.2.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
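The card documents training but not inference. A minimal loading sketch with transformers; the prompt below is a placeholder, since the exact evaluation prompt format is documented on the project website:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "StanfordAIMI/GREEN-RadLlama2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Placeholder prompt: GREEN scores a candidate report against a reference report.
prompt = "Reference report: ...\nCandidate report: ...\nEvaluate the candidate against the reference."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```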
CompendiumLabs/bge-small-en-v1.5-gguf
CompendiumLabs
"2024-02-17T21:48:37Z"
1,557
0
null
[ "gguf", "license:mit", "region:us" ]
null
"2024-02-17T21:43:14Z"
--- license: mit --- <img src="https://raw.githubusercontent.com/CompendiumLabs/compendiumlabs.ai/main/images/logo_text_crop.png" alt="Compendium Labs" style="width: 500px;"> # bge-small-en-v1.5-gguf Source model: https://huggingface.co/BAAI/bge-small-en-v1.5 Quantized and unquantized embedding models in GGUF format for use with `llama.cpp`. A large benefit over `transformers` is almost guaranteed and the benefit over ONNX will vary based on the application, but this seems to provide a large speedup on CPU and a modest speedup on GPU for larger models. Due to the relatively small size of these models, quantization will not provide huge benefits, but it does generate up to a 30% speedup on CPU with minimal loss in accuracy. <br/> # Files Available <div style="width: 500px; margin: 0;"> | Filename | Quantization | Size | |:-------- | ------------ | ---- | | [bge-small-en-v1.5-f32.gguf](https://huggingface.co/CompendiumLabs/bge-small-en-v1.5-gguf/blob/main/bge-small-en-v1.5-f32.gguf) | F32 | 128 MB | | [bge-small-en-v1.5-f16.gguf](https://huggingface.co/CompendiumLabs/bge-small-en-v1.5-gguf/blob/main/bge-small-en-v1.5-f16.gguf) | F16 | 65 MB | | [bge-small-en-v1.5-q8_0.gguf](https://huggingface.co/CompendiumLabs/bge-small-en-v1.5-gguf/blob/main/bge-small-en-v1.5-q8_0.gguf) | Q8_0 | 36 MB | | [bge-small-en-v1.5-q4_k_m.gguf](https://huggingface.co/CompendiumLabs/bge-small-en-v1.5-gguf/blob/main/bge-small-en-v1.5-q4_k_m.gguf) | Q4_K_M | 24 MB | </div> <br/> # Usage These model files can be used with pure `llama.cpp` or with the `llama-cpp-python` Python bindings ```python from llama_cpp import Llama model = Llama(gguf_path, embedding=True) embed = model.embed(texts) ``` Here `texts` can either be a string or a list of strings, and the return value is a list of embedding vectors. The inputs are grouped into batches automatically for efficient execution. There is also LangChain integration through `langchain_community.embeddings.LlamaCppEmbeddings`.
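A minimal sketch of the LangChain integration mentioned at the end of the card, assuming `langchain-community` and `llama-cpp-python` are installed and one of the GGUF files has been downloaded locally:

```python
from langchain_community.embeddings import LlamaCppEmbeddings

# Path to a locally downloaded file from this repo (e.g., the F16 variant).
embedder = LlamaCppEmbeddings(model_path="bge-small-en-v1.5-f16.gguf")

query_vec = embedder.embed_query("What is a GGUF file?")
doc_vecs = embedder.embed_documents(
    ["GGUF is a binary model file format.", "BGE is an embedding model family."]
)
print(len(query_vec), len(doc_vecs))
```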
NikolayKozloff/h2o-Llama-3-8B-Japanese-Instruct-Q8_0-GGUF
NikolayKozloff
"2024-06-24T13:28:33Z"
1,557
1
transformers
[ "transformers", "gguf", "gpt", "llm", "large language model", "h2o-llmstudio", "llama-cpp", "gguf-my-repo", "text-generation", "en", "ja", "dataset:fujiki/japanese_hh-rlhf-49k", "base_model:haqishen/h2o-Llama-3-8B-Japanese-Instruct", "license:llama3", "region:us" ]
text-generation
"2024-06-24T13:27:54Z"
--- base_model: haqishen/h2o-Llama-3-8B-Japanese-Instruct datasets: - fujiki/japanese_hh-rlhf-49k language: - en - ja library_name: transformers license: llama3 pipeline_tag: text-generation tags: - gpt - llm - large language model - h2o-llmstudio - llama-cpp - gguf-my-repo inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # NikolayKozloff/h2o-Llama-3-8B-Japanese-Instruct-Q8_0-GGUF This model was converted to GGUF format from [`haqishen/h2o-Llama-3-8B-Japanese-Instruct`](https://huggingface.co/haqishen/h2o-Llama-3-8B-Japanese-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/haqishen/h2o-Llama-3-8B-Japanese-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/h2o-Llama-3-8B-Japanese-Instruct-Q8_0-GGUF --hf-file h2o-llama-3-8b-japanese-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/h2o-Llama-3-8B-Japanese-Instruct-Q8_0-GGUF --hf-file h2o-llama-3-8b-japanese-instruct-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/h2o-Llama-3-8B-Japanese-Instruct-Q8_0-GGUF --hf-file h2o-llama-3-8b-japanese-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/h2o-Llama-3-8B-Japanese-Instruct-Q8_0-GGUF --hf-file h2o-llama-3-8b-japanese-instruct-q8_0.gguf -c 2048 ```
mehdisebai/text-to-rule_Mistral_2_merged-GGUF
mehdisebai
"2024-06-28T09:30:01Z"
1,557
0
null
[ "gguf", "region:us" ]
null
"2024-06-28T09:23:12Z"
Entry not found
timm/ViT-L-16-SigLIP-384
timm
"2023-10-25T21:54:17Z"
1,556
7
open_clip
[ "open_clip", "safetensors", "clip", "siglip", "zero-shot-image-classification", "dataset:webli", "arxiv:2303.15343", "license:apache-2.0", "region:us" ]
zero-shot-image-classification
"2023-10-16T23:32:50Z"
--- tags: - clip - siglip library_name: open_clip pipeline_tag: zero-shot-image-classification license: apache-2.0 datasets: - webli --- # Model card for ViT-L-16-SigLIP-384 A SigLIP (Sigmoid loss for Language-Image Pre-training) model trained on WebLI. This model has been converted to PyTorch from the original JAX checkpoints in [Big Vision](https://github.com/google-research/big_vision). These weights are usable in both OpenCLIP (image + text) and timm (image only). ## Model Details - **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification. - **Original:** https://github.com/google-research/big_vision - **Dataset:** WebLI - **Papers:** - Sigmoid loss for language image pre-training: https://arxiv.org/abs/2303.15343 ## Model Usage ### With OpenCLIP ``` import torch import torch.nn.functional as F from urllib.request import urlopen from PIL import Image from open_clip import create_model_from_pretrained, get_tokenizer # works on open-clip-torch>=2.23.0, timm>=0.9.8 model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-L-16-SigLIP-384') tokenizer = get_tokenizer('hf-hub:timm/ViT-L-16-SigLIP-384') image = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) image = preprocess(image).unsqueeze(0) labels_list = ["a dog", "a cat", "a donut", "a beignet"] text = tokenizer(labels_list, context_length=model.context_length) with torch.no_grad(), torch.cuda.amp.autocast(): image_features = model.encode_image(image) text_features = model.encode_text(text) image_features = F.normalize(image_features, dim=-1) text_features = F.normalize(text_features, dim=-1) text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias) zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]])) print("Label probabilities: ", zipped_list) ``` ### With `timm` (for image embeddings) ```python from urllib.request import urlopen from PIL import Image import timm image = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_large_patch16_siglip_384', pretrained=True, num_classes=0, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(image).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor ``` ## Citation ```bibtex @article{zhai2023sigmoid, title={Sigmoid loss for language image pre-training}, author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas}, journal={arXiv preprint arXiv:2303.15343}, year={2023} } ``` ```bibtex @misc{big_vision, author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander}, title = {Big Vision}, year = {2022}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/google-research/big_vision}} } ```
stanford-oval/Llama-2-7b-WikiChat-fused
stanford-oval
"2024-01-14T06:45:06Z"
1,556
6
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:2305.14292", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-09T02:03:54Z"
--- license: llama2 language: - en --- This model is a fine-tuned LLaMA-2 (7B) model. Please accept the [LLaMA-2 license agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) before downloading this model. Refer to the following for more information: GitHub repository: https://github.com/stanford-oval/WikiChat Paper: https://aclanthology.org/2023.findings-emnlp.157/ <p align="center"> <img src="./images/wikipedia.png" width="100px" alt="Wikipedia" /> <h1 align="center"> <b>WikiChat</b> <br> <a href="https://arxiv.org/abs/2305.14292"> <img src="https://img.shields.io/badge/cs.CL-2305.14292-b31b1b" alt="arXiv"> </a> <a href="https://github.com/stanford-oval/WikiChat/stargazers"> <img src="https://img.shields.io/github/stars/stanford-oval/WikiChat?style=social" alt="Github Stars"> </a> </h1> </p> <p align="center"> Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia </p> <p align="center"> Online demo: <a href="https://wikichat.genie.stanford.edu" target="_blank"> https://wikichat.genie.stanford.edu </a> <br> </p> <p align="center"> <img src="./images/pipeline.svg" width="700px" alt="WikiChat Pipeline" /> </p>
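The card links to the WikiChat pipeline but includes no loading code. A minimal sketch assuming the standard transformers causal-LM API; the full retrieval-grounded pipeline this model was fine-tuned for lives in the GitHub repository:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stanford-oval/Llama-2-7b-WikiChat-fused"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
```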
Salesforce/codegen2-7B_P
Salesforce
"2023-07-06T10:48:47Z"
1,555
25
transformers
[ "transformers", "pytorch", "codegen", "text-generation", "custom_code", "arxiv:2305.02309", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-04-26T16:04:49Z"
--- license: apache-2.0 --- # CodeGen2 (CodeGen2-7B) ## Model description [CodeGen2](https://github.com/salesforce/CodeGen2) is a family of autoregressive language models for **program synthesis**, introduced in the paper: [CodeGen2: Lessons for Training LLMs on Programming and Natural Languages](https://arxiv.org/abs/2305.02309) by Erik Nijkamp\*, Hiroaki Hayashi\*, Caiming Xiong, Silvio Savarese, Yingbo Zhou. Unlike the original CodeGen model family (i.e., CodeGen1), CodeGen2 is capable of infilling and supports more programming languages. Four model sizes are released: `1B`, `3.7B`, `7B`, `16B`. ## How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality. ### Causal sampling For regular causal sampling, simply generate completions given the context: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-7B") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-7B", trust_remote_code=True, revision="main") text = "def hello_world():" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` ### Infill sampling For **infill** sampling, we introduce three new special token types: * `<mask_N>`: N-th span to be masked. In practice, use `<mask_1>` where you want to sample infill. * `<sep>`: Separator token between the suffix and the infilled sample. See below. * `<eom>`: "End-Of-Mask" token that the model will output at the end of infilling. You may use this token to truncate the output. For example, if we want to generate infill for the following cursor position of a function: ```python def hello_world(): | return name ``` we construct an input to the model by 1. inserting the `<mask_1>` token in place of the cursor position, 2. appending the `<sep>` token to indicate the boundary, and 3. inserting another `<mask_1>` to indicate which mask we want to infill. The final snippet looks as follows: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-7B") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-7B", trust_remote_code=True, revision="main") def format(prefix, suffix): return prefix + "<mask_1>" + suffix + "<|endoftext|>" + "<sep>" + "<mask_1>" prefix = "def hello_world():\n " suffix = " return name" text = format(prefix, suffix) input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]) ``` You might want to truncate the model output with `<eom>`; see the sketch after this card. ## Training data This checkpoint is trained on the stricter permissive subset of [the deduplicated version of the Stack dataset (v1.1)](https://huggingface.co/datasets/bigcode/the-stack-dedup). Supported languages (and frameworks) are as follows: `c`, `c++`, `c-sharp`, `dart`, `go`, `java`, `javascript`, `kotlin`, `lua`, `php`, `python`, `ruby`, `rust`, `scala`, `shell`, `sql`, `swift`, `typescript`, `vue`. ## Training procedure CodeGen2 was trained using cross-entropy loss to maximize the likelihood of sequential inputs. The input sequences are formatted in two ways: (1) causal language modeling and (2) file-level span corruption. Please refer to the paper for more details. ## Evaluation results We evaluate our models on HumanEval and HumanEval-Infill.
Please refer to the [paper](https://arxiv.org/abs/2305.02309) for more details. ## Intended use and limitations As an autoregressive language model, CodeGen2 is capable of extracting features from given natural and programming language texts and of calculating their likelihood. However, the model is intended for, and best at, **program synthesis**: generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well. ## BibTeX entry and citation info ```bibtex @article{Nijkamp2023codegen2, title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages}, author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo}, journal={arXiv preprint}, year={2023} } ```
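Continuing from the infill snippet in the card above, a minimal sketch of the suggested `<eom>` truncation:

```python
# Keep only the infilled span: everything before the first <eom> marker.
raw = tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]
infill = raw.split("<eom>")[0]
print(infill)
```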
OzzyGT/sdxl-ip-adapter
OzzyGT
"2024-03-21T17:12:02Z"
1,555
2
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:apache-2.0", "region:us" ]
text-to-image
"2024-01-04T19:10:16Z"
---
tags:
- text-to-image
- stable-diffusion
license: apache-2.0
library_name: diffusers
---

# IP-Adapter for SDXL

This is a clone of the [original repository](https://huggingface.co/h94/IP-Adapter) containing only the SDXL vit-h models and the corresponding image encoder.

Update: Added the IP Adapter for Composition so that multiple IP adapters can be used together (they need to be in the same repository). Original model: https://huggingface.co/ostris/ip-composition-adapter

Made to be used with [Image Artisan XL](https://github.com/ZCode-opensource/image-artisan-xl).

Here are some experiments:

prompt: `cinematic portrait photo of a woman, against a white background, 4k, highly detailed`

### ip-adapter_sdxl_vit-h

|source|0% noise|25% noise|50% noise|85% noise|100% noise|
|---|---|---|---|---|---|
|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/ULDAldMoh5g8QqVjJ8wD3.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/ywPA2CVKl3K9v7Q-FZmsM.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/8kAfWliJdkHcVGQ2W41aj.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/upoQFe-sVNc8l-chpoaBe.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/_l1HBwFKkwPsba8P7K3wp.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/HvachL6ycm9ZFycWlAUEn.png)|

### ip-adapter-plus_sdxl_vit-h

|source|0% noise|25% noise|50% noise|85% noise|100% noise|
|---|---|---|---|---|---|
|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/ULDAldMoh5g8QqVjJ8wD3.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/a2AnuqOKHHTKT0w07Iypd.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/hqC11AXCmaRPYM6dEeHUu.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/ql2MArHyYFAOG32T53WBK.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/glvJnfnX3pshwnCIYXRAB.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/z1TItYCQavClCmmImqZ5x.png)|

### ip-adapter-plus-face_sdxl_vit-h

prompt: `cinematic portrait photo of a woman, against a white background, half body shot, closeup, 4k, highly detailed`

|source|0% noise|25% noise|50% noise|85% noise|100% noise|
|---|---|---|---|---|---|
|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/ULDAldMoh5g8QqVjJ8wD3.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/umdUi6Al90-7y_NaYb2yJ.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/IdGNZqAzRXbET9mV4ppgf.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/EJAOi-fyrDNLPXc9LEURd.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/vEgdBZdNfxreuzPn6nSSc.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/y39krJCkd2rGRCbJvWWkM.png)|

## IP Adapter with multiple images - Instant Lora

### ip-adapter_sdxl_vit-h

|w1|w2|w3|w4|w5|bg|
|---|---|---|---|---|---|
|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/QftBOflSljQGlV3FXwo07.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/8e1Z0mWXYb0ocCARQpfDd.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/c-nRn6nl4hh0yMNjNJdEd.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/rQkByw6yU9ofij2TH0heR.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/GLBNsW1cd8JezM-jKShN4.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/M9oshb8powkDDEpRa8bbY.png)|

|w1(1.0) + w2(1.0)|w1(0.5) + w2(1.0)|w1(1.0) + w2(0.5)|w1(0.5) + w2(0.5)|w1(1.0) + w2(1.0) + bg(1.0)|w1(0.5) + w2(0.5) + bg(1.0)|w1(0.5) + w2(0.5) + bg(0.5)|
|---|---|---|---|---|---|---|
|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/1cX65n7hwXnhMfpwaKceU.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/nCV7zBmDY66XZDM4aVS0U.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/gsDVMMa3k1JbHVzzep4Jv.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/UC9hwmUPzcB9LNT_DIFyw.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/7k-R8jvNdYyEvU4WiVURB.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/R6k15uacJJXuD0-Mk9LO5.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/wZjvlCg00RLjUJ7ZXfbB-.png)|

|w1+w2+w3|w1+w2+w3+w4|w1+w2+w3+w4+w5|w1+w2+w3+w4+w5+bg|
|---|---|---|---|
|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/GkF_oBRiC7yDvIa7Ggsqo.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/Hr_1oWzxBA9PwK1CxZgS5.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/OcLRB-kIi5A4q1SjXWoOK.png)|![image/png](https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/ph33zaC17sLRpsAgcpyMT.png)|
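## Loading with 🧨 Diffusers (sketch)

The card itself gives no loading snippet. Below is a minimal, hedged sketch of multi IP-Adapter use with diffusers; it assumes a recent diffusers release with multi-adapter support, and uses the upstream `h94/IP-Adapter` repo id (this repository mirrors its layout, but its own id is not stated in the card). The reference image paths and scale values are illustrative only, echoing the w1/w2 weights in the tables above.

```python
# Minimal sketch (untested): multi IP-Adapter generation with SDXL in diffusers.
import torch
from transformers import CLIPVisionModelWithProjection
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

# The vit-h adapters need the ViT-H image encoder shipped alongside them.
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter", subfolder="models/image_encoder", torch_dtype=torch.float16
)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    image_encoder=image_encoder,
    torch_dtype=torch.float16,
).to("cuda")

# Load two vit-h adapters from the same repository (required for multi-adapter use).
pipeline.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="sdxl_models",
    weight_name=[
        "ip-adapter_sdxl_vit-h.safetensors",
        "ip-adapter-plus_sdxl_vit-h.safetensors",
    ],
)
# Per-adapter scales, analogous to the w1/w2 weights in the tables above.
pipeline.set_ip_adapter_scale([1.0, 0.5])

w1 = load_image("w1.png")  # hypothetical local reference images
w2 = load_image("w2.png")

image = pipeline(
    prompt="cinematic portrait photo of a woman, against a white background, 4k, highly detailed",
    ip_adapter_image=[w1, w2],
    num_inference_steps=30,
).images[0]
image.save("result.png")
```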
Deniskin/gpt3_medium
Deniskin
"2021-05-21T09:41:39Z"
1,554
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-03-02T23:29:04Z"
Entry not found
llmware/industry-bert-sec-v0.1
llmware
"2024-05-14T20:56:14Z"
1,554
8
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2104.06979", "license:apache-2.0", "region:us" ]
feature-extraction
"2023-09-29T21:44:06Z"
---
license: apache-2.0
inference: false
---

# industry-bert-sec-v0.1

<!-- Provide a quick summary of what the model is/does. -->

industry-bert-sec-v0.1 is part of a series of industry-fine-tuned sentence_transformer embedding models.

### Model Description

<!-- Provide a longer summary of what this model is. -->

industry-bert-sec-v0.1 is a domain fine-tuned, BERT-based Sentence Transformer model producing 768-dimensional embeddings, intended as a "drop-in" substitute for embeddings in financial and regulatory domains. This model was trained on a wide range of publicly available U.S. Securities and Exchange Commission (SEC) regulatory filings and related documents.

- **Developed by:** llmware
- **Model type:** BERT-based industry domain fine-tuned Sentence Transformer architecture
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** BERT-based model, fine-tuning methodology described below.

## Model Use

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("llmware/industry-bert-sec-v0.1")
model = AutoModel.from_pretrained("llmware/industry-bert-sec-v0.1")
```

(A pooled-embedding sketch appears after the citations below.)

## Bias, Risks, and Limitations

This is a semantic embedding model, fine-tuned on public domain SEC filings and regulatory documents. Results may vary if it is used outside of this domain, and, as with any embedding model, there is always the potential for anomalies in the vector embedding space. No specific safeguards have been put in place for safety or to mitigate potential bias in the dataset.

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

This model was fine-tuned using a custom self-supervised procedure and a custom dataset that combined contrastive techniques with stochastic injections of distortions in the samples. The methodology was derived from, adapted from, and inspired primarily by three research papers cited below: TSDAE (Reimers), DeCLUTR (Giorgi), and Contrastive Tension (Carlsson).

## Citation [optional]

The custom self-supervised training protocol used to train the model was derived from and inspired by the following papers:

```bibtex
@article{wang-2021-TSDAE,
    title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
    author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
    journal = "arXiv preprint arXiv:2104.06979",
    month = "4",
    year = "2021",
    url = "https://arxiv.org/abs/2104.06979",
}

@inproceedings{giorgi-etal-2021-declutr,
    title = {{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations},
    author = {Giorgi, John and Nitski, Osvald and Wang, Bo and Bader, Gary},
    year = 2021,
    month = aug,
    booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
    publisher = {Association for Computational Linguistics},
    address = {Online},
    pages = {879--895},
    doi = {10.18653/v1/2021.acl-long.72},
    url = {https://aclanthology.org/2021.acl-long.72}
}

@article{Carlsson-2021-CT,
    title = {Semantic Re-tuning with Contrastive Tension},
    author = {Fredrik Carlsson and Amaru Cuba Gyllensten and Evangelia Gogoulou and Erik Ylipää Hellqvist and Magnus Sahlgren},
    year = {2021},
    month = {January},
    note = {Published: 12 Jan 2021, Last Modified: 05 May 2023}
}
```

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
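As promised above, here is a minimal usage sketch (not part of the original card) that turns model outputs into sentence vectors. Mean pooling is assumed here because it is the common sentence-transformers default; the card does not state which pooling the model was trained with, and the sample sentences are illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("llmware/industry-bert-sec-v0.1")
model = AutoModel.from_pretrained("llmware/industry-bert-sec-v0.1")

sentences = [
    "The registrant filed its annual report on Form 10-K.",
    "Quarterly results are disclosed in the Form 10-Q filing.",
]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings, ignoring padding positions (assumed pooling strategy).
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)

# Cosine similarity between the two sentence embeddings.
sim = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"cosine similarity: {sim.item():.3f}")
```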
## Model Card Contact

Darren Oberst @ llmware
universitytehran/PersianMind-v1.0
universitytehran
"2024-05-09T11:57:45Z"
1,554
30
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "multilingual", "fa", "en", "arxiv:2401.06466", "license:cc-by-nc-sa-4.0", "co2_eq_emissions", "autotrain_compatible", "region:us" ]
text-generation
"2024-01-03T05:27:59Z"
---
license: cc-by-nc-sa-4.0
language:
- multilingual
- fa
- en
library_name: transformers
tags:
- text-generation-inference
inference: false
metrics:
- bleu
- comet
- accuracy
- perplexity
- spearmanr
pipeline_tag: text-generation
co2_eq_emissions:
  emissions: 232380
---

<p align="center">
  <img src="PersianMind.jpg" alt="PersianMind logo" width=200/>
</p>

# <span style="font-variant:small-caps;">PersianMind</span>

<span style="font-variant:small-caps;">PersianMind</span> is a cross-lingual Persian-English large language model. The model achieves state-of-the-art results on the Persian subset of the [<span style="font-variant:small-caps;">Belebele</span>](https://github.com/facebookresearch/belebele) benchmark and the [ParsiNLU multiple-choice QA](https://github.com/persiannlp/parsinlu) task. It also attains performance comparable to GPT-3.5-turbo on a Persian reading comprehension task.

## Model Description

- **Developed by:** [Pedram Rostami](mailto:[email protected]), [Ali Salemi](mailto:[email protected]), and [Mohammad Javad Dousti](mailto:[email protected])
- **Model type:** Language model
- **Languages:** English and Persian
- **License:** [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) (non-commercial use only)

## How to Get Started with the Model

Use the code below to get started with the model. Note that you need to install the <code><b>sentencepiece</b></code> and <code><b>accelerate</b></code> libraries along with <code><b>PyTorch</b></code> and <code><b>🤗Transformers</b></code> to run this code.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
    "universitytehran/PersianMind-v1.0",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    device_map={"": device},
)
tokenizer = AutoTokenizer.from_pretrained(
    "universitytehran/PersianMind-v1.0",
)

TEMPLATE = "{context}\nYou: {prompt}\nPersianMind: "
CONTEXT = "This is a conversation with PersianMind. It is an artificial intelligence model designed by a team of " \
    "NLP experts at the University of Tehran to help you with various tasks such as answering questions, " \
    "providing recommendations, and helping with decision making. You can ask it anything you want and " \
    "it will do its best to give you accurate and relevant information."
PROMPT = "در مورد هوش مصنوعی توضیح بده."  # "Explain artificial intelligence."

model_input = TEMPLATE.format(context=CONTEXT, prompt=PROMPT)
input_tokens = tokenizer(model_input, return_tensors="pt")
input_tokens = input_tokens.to(device)
generate_ids = model.generate(**input_tokens, max_new_tokens=512, do_sample=False, repetition_penalty=1.1)
model_output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

print(model_output[len(model_input):])
```

### How to Quantize the Model

Quantized models can be run on resource-constrained devices. To quantize the model, you should install the <code><b>bitsandbytes</b></code> library. To quantize the model in 8-bit (`INT8`), use the code below.

```python
model = AutoModelForCausalLM.from_pretrained(
    "universitytehran/PersianMind-v1.0",
    device_map="auto",
    low_cpu_mem_usage=True,
    load_in_8bit=True
)
```

Alternatively, you can quantize the model in 4-bit (`NormalFloat4`) with the following code.

```python
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)
model = AutoModelForCausalLM.from_pretrained(
    "universitytehran/PersianMind-v1.0",
    quantization_config=quantization_config,
    device_map="auto"
)
```

### Evaluating Quantized Models

| Model | <span style="font-variant:small-caps;">Belebele</span> (Persian) | Fa→En Translation<br>(<span style="font-variant:small-caps;">Comet</span>) | En→Fa Translation<br>(<span style="font-variant:small-caps;">Comet</span>) | Model Size | Tokens/sec |
| :---: | :---: | :---: | :---: | :---: | :---: |
| <span style="font-variant:small-caps;">PersianMind</span> (`BF16`) | 73.9 | 83.61 | 79.44 | 13.7G | 25.35 |
| <span style="font-variant:small-caps;">PersianMind</span> (`INT8`) | 73.7 | 82.32 | 78.61 | 7.2G | 11.36 |
| <span style="font-variant:small-caps;">PersianMind</span> (`NF4`) | 70.2 | 82.07 | 80.36 | 3.9G | 24.36 |

We evaluated the quantized models on various tasks against the original model. Specifically, we evaluated all models using the reading comprehension multiple-choice question-answering benchmark of [<span style="font-variant:small-caps;">Belebele</span>](https://github.com/facebookresearch/belebele) (Persian subset) and report the accuracy of each model. Additionally, we evaluated our models on Persian-to-English and English-to-Persian translation tasks. For this, we used the Persian-English subset of the [<span style="font-variant:small-caps;">Flores</span>-200](https://github.com/facebookresearch/flores/tree/main/flores200) dataset and report our results using the <span style="font-variant:small-caps;">Comet</span> metric. Furthermore, we calculated the average number of tokens generated per second by each model while running the translation tasks. To understand resource efficiency, we measured the memory usage of each model with the `get_memory_footprint()` function.

## License

<span style="font-variant:small-caps;">PersianMind</span> is subject to Meta's [LLaMa2 Community License](https://raw.githubusercontent.com/facebookresearch/llama/main/LICENSE). It is further licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/), which allows non-commercial use of the model. Commercial use of this model requires a written agreement, which must be obtained from the copyright holders listed as developers on this page. If you suspect any violations, please reach out to us.

## Citation

If you find this model helpful, please cite the following paper.

**BibTeX:**

```bibtex
@misc{persianmind,
  title={{PersianMind: A Cross-Lingual Persian-English Large Language Model}},
  author={Rostami, Pedram and Salemi, Ali and Dousti, Mohammad Javad},
  year={2024},
  eprint={2401.06466},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
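As a small companion to the resource-efficiency note above, here is a minimal sketch (not from the original card) of measuring a quantized variant's footprint with `get_memory_footprint()`; the reported number should roughly match the "Model Size" column, though exact values will vary by setup.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Sketch: load the NF4-quantized model (config copied from the card above)
# and report its memory footprint.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)
model = AutoModelForCausalLM.from_pretrained(
    "universitytehran/PersianMind-v1.0",
    quantization_config=quantization_config,
    device_map="auto",
)
print(f"NF4 footprint: {model.get_memory_footprint() / 1024**3:.1f} GiB")
```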
wisdomik/Quilt-Llava-v1.5-7b
wisdomik
"2024-02-26T23:38:48Z"
1,554
3
transformers
[ "transformers", "pytorch", "llava", "text-generation", "medical", "histopathology", "arxiv:2312.04746", "dataset:wisdomik/QUILT-LLaVA-Instruct-107K", "dataset:wisdomik/Quilt_VQA", "dataset:wisdomik/QuiltVQA_RED", "license:cc-by-nc-3.0", "autotrain_compatible", "region:us" ]
text-generation
"2024-02-02T18:24:51Z"
---
license: cc-by-nc-3.0
inference: false
datasets:
- wisdomik/QUILT-LLaVA-Instruct-107K
- wisdomik/Quilt_VQA
- wisdomik/QuiltVQA_RED
pipeline_tag: text-generation
tags:
- medical
- histopathology
- arxiv:2312.04746
extra_gated_prompt: >-
  Please read and agree to the following terms: 1. The requester details provided are not faked.
  2. The model will not be used for commercial/clinical purposes and will be used for the purpose of scientific research only.
  3. The data will not be re-distributed, published, copied, or further disseminated in any way or form whatsoever, whether for profit or not.
  4. The relevant papers (Quilt-1M (https://quilt1m.github.io/) and Quilt-LLaVA (https://quilt-llava.github.io)) will be cited in any publication(s) that uses this model/data.
extra_gated_fields:
  Email: text
  First and last name: text
  Affiliation: text
  Type of Affiliation:
    type: select
    options:
    - Academia
    - Industry
    - Other
  I want to use this model for:
    type: select
    options:
    - Research
    - Education
    - label: Other
      value: other
  I agree to the aforementioned terms of use: checkbox
---

<br>
<br>

<p align="center">
  <img src="https://quilt-llava.github.io/static/images/teaser.png" alt="fig2" width="70%"/>
</p>

# Quilt-LlaVA Model Card

## Model details

**Model type:**
[Quilt-LLaVA](https://quilt-llava.github.io/) is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on images sourced from educational histopathology videos and on GPT-generated multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.

**Citation**
```bibtex
@article{seyfioglu2023quilt,
  title={Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos},
  author={Seyfioglu, Mehmet Saygin and Ikezogwo, Wisdom O and Ghezloo, Fatemeh and Krishna, Ranjay and Shapiro, Linda},
  journal={arXiv preprint arXiv:2312.04746},
  year={2023}
}
```

**Model date:**
Quilt-LlaVA-v1.5-7B was trained in November 2023.

**Paper or resources for more information:**
https://quilt-llava.github.io/

## License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/quilt-llava/quilt-llava.github.io/issues

## Intended use

**Primary intended uses:**
The primary use of Quilt-LlaVA is research on medical large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of these models are AI researchers. We primarily imagine the model being used by researchers to better understand the robustness, generalization, and other capabilities, biases, and constraints of large vision-language generative histopathology models.

## Training dataset

- 723K filtered image-text pairs from QUILT-1M (https://quilt1m.github.io/).
- 107K GPT-generated multimodal instruction-following examples from QUILT-Instruct (https://huggingface.co/datasets/wisdomik/QUILT-LLaVA-Instruct-107K).

## Evaluation dataset

A collection of 4 academic VQA histopathology benchmarks.
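The card does not show how to load the checkpoint. Below is a minimal sketch assuming the `llava` package from the LLaVA/Quilt-LLaVA codebase; since Quilt-LLaVA builds on LLaVA, the builder utility, its signature, and the `model_name` string here are assumptions carried over from that repository rather than documented usage for this repo.

```python
# Sketch (untested): load Quilt-LLaVA via the LLaVA codebase's builder utility.
# `load_pretrained_model` and its return values are assumed from the upstream
# LLaVA repository that Quilt-LLaVA is built on.
from llava.model.builder import load_pretrained_model

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path="wisdomik/Quilt-Llava-v1.5-7b",
    model_base=None,
    model_name="Quilt-Llava-v1.5-7b",
)
```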
beademiguelperez/sentence-transformers-multilingual-e5-small
beademiguelperez
"2024-03-25T14:56:59Z"
1,554
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "mteb", "Sentence Transformers", "sentence-similarity", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:2402.05672", "arxiv:2108.08787", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2024-03-25T14:48:23Z"
--- tags: - mteb - Sentence Transformers - sentence-similarity - sentence-transformers model-index: - name: multilingual-e5-small results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.79104477611939 - type: ap value: 36.9996434842022 - type: f1 value: 67.95453679103099 - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (de) config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.64882226980728 - type: ap value: 82.11942130026586 - type: f1 value: 69.87963421606715 - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en-ext) config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.8095952023988 - type: ap value: 24.46869495579561 - type: f1 value: 63.00108480037597 - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (ja) config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 64.186295503212 - type: ap value: 15.496804690197042 - type: f1 value: 52.07153895475031 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 88.699325 - type: ap value: 85.27039559917269 - type: f1 value: 88.65556295032513 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 44.69799999999999 - type: f1 value: 43.73187348654165 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (de) config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.245999999999995 - type: f1 value: 39.3863530637684 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (es) config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.394 - type: f1 value: 39.301223469483446 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (fr) config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.864 - type: f1 value: 37.97974261868003 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (ja) config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 37.682 - type: f1 value: 37.07399369768313 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (zh) config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 37.504 - type: f1 value: 36.62317273874278 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 19.061 - type: map_at_10 value: 31.703 - type: map_at_100 value: 32.967 - type: 
map_at_1000 value: 33.001000000000005 - type: map_at_3 value: 27.466 - type: map_at_5 value: 29.564 - type: mrr_at_1 value: 19.559 - type: mrr_at_10 value: 31.874999999999996 - type: mrr_at_100 value: 33.146 - type: mrr_at_1000 value: 33.18 - type: mrr_at_3 value: 27.667 - type: mrr_at_5 value: 29.74 - type: ndcg_at_1 value: 19.061 - type: ndcg_at_10 value: 39.062999999999995 - type: ndcg_at_100 value: 45.184000000000005 - type: ndcg_at_1000 value: 46.115 - type: ndcg_at_3 value: 30.203000000000003 - type: ndcg_at_5 value: 33.953 - type: precision_at_1 value: 19.061 - type: precision_at_10 value: 6.279999999999999 - type: precision_at_100 value: 0.9129999999999999 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 12.706999999999999 - type: precision_at_5 value: 9.431000000000001 - type: recall_at_1 value: 19.061 - type: recall_at_10 value: 62.802 - type: recall_at_100 value: 91.323 - type: recall_at_1000 value: 98.72 - type: recall_at_3 value: 38.122 - type: recall_at_5 value: 47.155 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 39.22266660528253 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 30.79980849482483 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 57.8790068352054 - type: mrr value: 71.78791276436706 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 82.36328364043163 - type: cos_sim_spearman value: 82.26211536195868 - type: euclidean_pearson value: 80.3183865039173 - type: euclidean_spearman value: 79.88495276296132 - type: manhattan_pearson value: 80.14484480692127 - type: manhattan_spearman value: 80.39279565980743 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (de-en) config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.0375782881002 - type: f1 value: 97.86012526096033 - type: precision value: 97.77139874739039 - type: recall value: 98.0375782881002 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (fr-en) config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 93.35241030156286 - type: f1 value: 92.66050333846944 - type: precision value: 92.3306919069631 - type: recall value: 93.35241030156286 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (ru-en) config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 94.0699688257707 - type: f1 value: 93.50236693222492 - type: precision value: 93.22791825424315 - type: recall value: 94.0699688257707 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (zh-en) config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 89.25750394944708 - type: f1 value: 88.79234684921889 - type: precision value: 88.57293312269616 - type: recall value: 89.25750394944708 - task: 
type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 79.41558441558442 - type: f1 value: 79.25886487487219 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.747820820329736 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 27.045143830596146 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.252999999999997 - type: map_at_10 value: 31.655916666666666 - type: map_at_100 value: 32.680749999999996 - type: map_at_1000 value: 32.79483333333334 - type: map_at_3 value: 29.43691666666666 - type: map_at_5 value: 30.717416666666665 - type: mrr_at_1 value: 28.602750000000004 - type: mrr_at_10 value: 35.56875 - type: mrr_at_100 value: 36.3595 - type: mrr_at_1000 value: 36.427749999999996 - type: mrr_at_3 value: 33.586166666666664 - type: mrr_at_5 value: 34.73641666666666 - type: ndcg_at_1 value: 28.602750000000004 - type: ndcg_at_10 value: 36.06933333333334 - type: ndcg_at_100 value: 40.70141666666667 - type: ndcg_at_1000 value: 43.24341666666667 - type: ndcg_at_3 value: 32.307916666666664 - type: ndcg_at_5 value: 34.129999999999995 - type: precision_at_1 value: 28.602750000000004 - type: precision_at_10 value: 6.097666666666667 - type: precision_at_100 value: 0.9809166666666668 - type: precision_at_1000 value: 0.13766666666666663 - type: precision_at_3 value: 14.628166666666667 - type: precision_at_5 value: 10.266916666666667 - type: recall_at_1 value: 24.252999999999997 - type: recall_at_10 value: 45.31916666666667 - type: recall_at_100 value: 66.03575000000001 - type: recall_at_1000 value: 83.94708333333334 - type: recall_at_3 value: 34.71941666666666 - type: recall_at_5 value: 39.46358333333333 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 9.024000000000001 - type: map_at_10 value: 15.644 - type: map_at_100 value: 17.154 - type: map_at_1000 value: 17.345 - type: map_at_3 value: 13.028 - type: map_at_5 value: 14.251 - type: mrr_at_1 value: 19.674 - type: mrr_at_10 value: 29.826999999999998 - type: mrr_at_100 value: 30.935000000000002 - type: mrr_at_1000 value: 30.987 - type: mrr_at_3 value: 26.645000000000003 - type: mrr_at_5 value: 28.29 - type: ndcg_at_1 value: 19.674 - type: ndcg_at_10 value: 22.545 - type: ndcg_at_100 value: 29.207 - type: ndcg_at_1000 value: 32.912 - type: ndcg_at_3 value: 17.952 - type: ndcg_at_5 value: 19.363 - type: precision_at_1 value: 19.674 - type: precision_at_10 value: 7.212000000000001 - type: precision_at_100 value: 1.435 - type: precision_at_1000 value: 0.212 - type: precision_at_3 value: 13.507 - type: precision_at_5 value: 10.397 - type: recall_at_1 value: 9.024000000000001 - type: recall_at_10 value: 28.077999999999996 - type: recall_at_100 value: 51.403 - type: recall_at_1000 value: 72.406 - type: recall_at_3 value: 16.768 - type: recall_at_5 value: 20.737 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None 
metrics: - type: map_at_1 value: 8.012 - type: map_at_10 value: 17.138 - type: map_at_100 value: 24.146 - type: map_at_1000 value: 25.622 - type: map_at_3 value: 12.552 - type: map_at_5 value: 14.435 - type: mrr_at_1 value: 62.25000000000001 - type: mrr_at_10 value: 71.186 - type: mrr_at_100 value: 71.504 - type: mrr_at_1000 value: 71.514 - type: mrr_at_3 value: 69.333 - type: mrr_at_5 value: 70.408 - type: ndcg_at_1 value: 49.75 - type: ndcg_at_10 value: 37.76 - type: ndcg_at_100 value: 42.071 - type: ndcg_at_1000 value: 49.309 - type: ndcg_at_3 value: 41.644 - type: ndcg_at_5 value: 39.812999999999995 - type: precision_at_1 value: 62.25000000000001 - type: precision_at_10 value: 30.15 - type: precision_at_100 value: 9.753 - type: precision_at_1000 value: 1.9189999999999998 - type: precision_at_3 value: 45.667 - type: precision_at_5 value: 39.15 - type: recall_at_1 value: 8.012 - type: recall_at_10 value: 22.599 - type: recall_at_100 value: 48.068 - type: recall_at_1000 value: 71.328 - type: recall_at_3 value: 14.043 - type: recall_at_5 value: 17.124 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 42.455 - type: f1 value: 37.59462649781862 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 58.092 - type: map_at_10 value: 69.586 - type: map_at_100 value: 69.968 - type: map_at_1000 value: 69.982 - type: map_at_3 value: 67.48100000000001 - type: map_at_5 value: 68.915 - type: mrr_at_1 value: 62.166 - type: mrr_at_10 value: 73.588 - type: mrr_at_100 value: 73.86399999999999 - type: mrr_at_1000 value: 73.868 - type: mrr_at_3 value: 71.6 - type: mrr_at_5 value: 72.99 - type: ndcg_at_1 value: 62.166 - type: ndcg_at_10 value: 75.27199999999999 - type: ndcg_at_100 value: 76.816 - type: ndcg_at_1000 value: 77.09700000000001 - type: ndcg_at_3 value: 71.36 - type: ndcg_at_5 value: 73.785 - type: precision_at_1 value: 62.166 - type: precision_at_10 value: 9.716 - type: precision_at_100 value: 1.065 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 28.278 - type: precision_at_5 value: 18.343999999999998 - type: recall_at_1 value: 58.092 - type: recall_at_10 value: 88.73400000000001 - type: recall_at_100 value: 95.195 - type: recall_at_1000 value: 97.04599999999999 - type: recall_at_3 value: 78.45 - type: recall_at_5 value: 84.316 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 16.649 - type: map_at_10 value: 26.457000000000004 - type: map_at_100 value: 28.169 - type: map_at_1000 value: 28.352 - type: map_at_3 value: 23.305 - type: map_at_5 value: 25.169000000000004 - type: mrr_at_1 value: 32.407000000000004 - type: mrr_at_10 value: 40.922 - type: mrr_at_100 value: 41.931000000000004 - type: mrr_at_1000 value: 41.983 - type: mrr_at_3 value: 38.786 - type: mrr_at_5 value: 40.205999999999996 - type: ndcg_at_1 value: 32.407000000000004 - type: ndcg_at_10 value: 33.314 - type: ndcg_at_100 value: 40.312 - type: ndcg_at_1000 value: 43.685 - type: ndcg_at_3 value: 30.391000000000002 - type: ndcg_at_5 value: 31.525 - type: precision_at_1 value: 32.407000000000004 - type: precision_at_10 value: 8.966000000000001 - type: precision_at_100 value: 1.6019999999999999 - type: precision_at_1000 value: 0.22200000000000003 - type: precision_at_3 value: 20.165 - type: 
precision_at_5 value: 14.722 - type: recall_at_1 value: 16.649 - type: recall_at_10 value: 39.117000000000004 - type: recall_at_100 value: 65.726 - type: recall_at_1000 value: 85.784 - type: recall_at_3 value: 27.914 - type: recall_at_5 value: 33.289 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 36.253 - type: map_at_10 value: 56.16799999999999 - type: map_at_100 value: 57.06099999999999 - type: map_at_1000 value: 57.126 - type: map_at_3 value: 52.644999999999996 - type: map_at_5 value: 54.909 - type: mrr_at_1 value: 72.505 - type: mrr_at_10 value: 79.66 - type: mrr_at_100 value: 79.869 - type: mrr_at_1000 value: 79.88 - type: mrr_at_3 value: 78.411 - type: mrr_at_5 value: 79.19800000000001 - type: ndcg_at_1 value: 72.505 - type: ndcg_at_10 value: 65.094 - type: ndcg_at_100 value: 68.219 - type: ndcg_at_1000 value: 69.515 - type: ndcg_at_3 value: 59.99 - type: ndcg_at_5 value: 62.909000000000006 - type: precision_at_1 value: 72.505 - type: precision_at_10 value: 13.749 - type: precision_at_100 value: 1.619 - type: precision_at_1000 value: 0.179 - type: precision_at_3 value: 38.357 - type: precision_at_5 value: 25.313000000000002 - type: recall_at_1 value: 36.253 - type: recall_at_10 value: 68.744 - type: recall_at_100 value: 80.925 - type: recall_at_1000 value: 89.534 - type: recall_at_3 value: 57.535000000000004 - type: recall_at_5 value: 63.282000000000004 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 80.82239999999999 - type: ap value: 75.65895781725314 - type: f1 value: 80.75880969095746 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 21.624 - type: map_at_10 value: 34.075 - type: map_at_100 value: 35.229 - type: map_at_1000 value: 35.276999999999994 - type: map_at_3 value: 30.245 - type: map_at_5 value: 32.42 - type: mrr_at_1 value: 22.264 - type: mrr_at_10 value: 34.638000000000005 - type: mrr_at_100 value: 35.744 - type: mrr_at_1000 value: 35.787 - type: mrr_at_3 value: 30.891000000000002 - type: mrr_at_5 value: 33.042 - type: ndcg_at_1 value: 22.264 - type: ndcg_at_10 value: 40.991 - type: ndcg_at_100 value: 46.563 - type: ndcg_at_1000 value: 47.743 - type: ndcg_at_3 value: 33.198 - type: ndcg_at_5 value: 37.069 - type: precision_at_1 value: 22.264 - type: precision_at_10 value: 6.5089999999999995 - type: precision_at_100 value: 0.9299999999999999 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 14.216999999999999 - type: precision_at_5 value: 10.487 - type: recall_at_1 value: 21.624 - type: recall_at_10 value: 62.303 - type: recall_at_100 value: 88.124 - type: recall_at_1000 value: 97.08 - type: recall_at_3 value: 41.099999999999994 - type: recall_at_5 value: 50.381 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.06703146374831 - type: f1 value: 90.86867815863172 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (de) config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 87.46970977740209 - type: f1 value: 86.36832872036588 - task: type: Classification 
dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (es) config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.26951300867245 - type: f1 value: 88.93561193959502 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (fr) config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 84.22799874725963 - type: f1 value: 84.30490069236556 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (hi) config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 86.02007888131948 - type: f1 value: 85.39376041027991 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (th) config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 85.34900542495481 - type: f1 value: 85.39859673336713 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 71.078431372549 - type: f1 value: 53.45071102002276 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (de) config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 65.85798816568047 - type: f1 value: 46.53112748993529 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (es) config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 67.96864576384256 - type: f1 value: 45.966703022829506 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (fr) config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 61.31537738803633 - type: f1 value: 45.52601712835461 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (hi) config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 66.29616349946218 - type: f1 value: 47.24166485726613 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (th) config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 67.51537070524412 - type: f1 value: 49.463476319014276 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (af) config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.06792199058508 - type: f1 value: 54.094921857502285 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (am) config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.960322797579025 - type: f1 value: 48.547371223370945 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ar) config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.425016812373904 - type: f1 value: 50.47069202054312 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (az) 
config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.798251513113655 - type: f1 value: 57.05013069086648 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (bn) config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.37794216543376 - type: f1 value: 56.3607992649805 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (cy) config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 46.56018829858777 - type: f1 value: 43.87319715715134 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (da) config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.9724277067922 - type: f1 value: 59.36480066245562 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (de) config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.72696704774715 - type: f1 value: 59.143595966615855 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (el) config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.5971755211836 - type: f1 value: 59.169445724946726 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.29589778076665 - type: f1 value: 67.7577001808977 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (es) config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.31136516476126 - type: f1 value: 64.52032955983242 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fa) config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.54472091459314 - type: f1 value: 61.47903120066317 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fi) config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.45595158036314 - type: f1 value: 58.0891846024637 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fr) config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.47074646940149 - type: f1 value: 62.84830858877575 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (he) config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.046402151983855 - type: f1 value: 55.269074430533195 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (hi) config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.06523201075991 - type: f1 value: 61.35339643021369 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB 
MassiveIntentClassification (hu) config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.954942837928726 - type: f1 value: 57.07035922704846 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (hy) config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.404169468728995 - type: f1 value: 53.94259011839138 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (id) config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.16610625420309 - type: f1 value: 61.337103431499365 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (is) config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 52.262945527908535 - type: f1 value: 49.7610691598921 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (it) config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.54472091459314 - type: f1 value: 63.469099018440154 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ja) config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.22797579018157 - type: f1 value: 64.89098471083001 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (jv) config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 50.847343644922674 - type: f1 value: 47.8536963168393 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ka) config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 48.45326160053799 - type: f1 value: 46.370078045805556 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (km) config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 42.83120376597175 - type: f1 value: 39.68948521599982 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (kn) config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.5084061869536 - type: f1 value: 53.961876160401545 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ko) config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.7895090786819 - type: f1 value: 61.134223684676 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (lv) config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.98991257565569 - type: f1 value: 52.579862862826296 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ml) config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.90316072629456 - type: f1 value: 58.203024538290336 - task: type: Classification dataset: type: 
mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (mn) config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.09818426361802 - type: f1 value: 54.22718458445455 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ms) config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.991257565568255 - type: f1 value: 55.84892781767421 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (my) config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 55.901143241425686 - type: f1 value: 52.25264332199797 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (nb) config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.96368527236047 - type: f1 value: 58.927243876153454 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (nl) config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.64223268325489 - type: f1 value: 62.340453718379706 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (pl) config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.52589105581708 - type: f1 value: 61.661113187022174 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (pt) config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.84599865501009 - type: f1 value: 64.59342572873005 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ro) config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.81035642232684 - type: f1 value: 57.5169089806797 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ru) config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.75991930060525 - type: f1 value: 62.89531115787938 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sl) config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.51647612642906 - type: f1 value: 54.33154780100043 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sq) config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.985877605917956 - type: f1 value: 54.46187524463802 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sv) config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.03026227303296 - type: f1 value: 62.34377392877748 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sw) config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 53.567585743106925 - type: f1 value: 50.73770655983206 - 
task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ta) config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.2595830531271 - type: f1 value: 53.657327291708626 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (te) config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.82784129119032 - type: f1 value: 54.82518072665301 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (th) config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.06859448554137 - type: f1 value: 63.00185280500495 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (tl) config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.91055817081371 - type: f1 value: 55.54116301224262 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (tr) config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.54404841963686 - type: f1 value: 59.57650946030184 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ur) config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.27706792199059 - type: f1 value: 56.50010066083435 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (vi) config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.0719569603228 - type: f1 value: 61.817075925647956 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-CN) config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.23806321452591 - type: f1 value: 65.24917026029749 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-TW) config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.53530598520511 - type: f1 value: 61.71131132295768 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (af) config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.04303967720243 - type: f1 value: 60.3950085685985 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (am) config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.83591123066578 - type: f1 value: 54.95059828830849 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ar) config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.62340282447881 - type: f1 value: 59.525159996498225 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (az) config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy 
value: 60.85406859448555 - type: f1 value: 59.129299095681276 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (bn) config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.76731674512441 - type: f1 value: 61.159560612627715 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (cy) config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.181573638197705 - type: f1 value: 46.98422176289957 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (da) config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.92737054472092 - type: f1 value: 67.69135611952979 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (de) config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.18964357767318 - type: f1 value: 68.46106138186214 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (el) config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.0712844653665 - type: f1 value: 66.75545422473901 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.4754539340955 - type: f1 value: 74.38427146553252 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (es) config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.82515131136518 - type: f1 value: 69.63516462173847 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fa) config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.70880968392737 - type: f1 value: 67.45420662567926 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fi) config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.95494283792871 - type: f1 value: 65.06191009049222 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fr) config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.75924680564896 - type: f1 value: 68.30833379585945 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (he) config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.806321452589096 - type: f1 value: 63.273048243765054 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (hi) config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.68997982515133 - type: f1 value: 66.54703855381324 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (hu) config: hu 
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 66.46940147948891
  - type: f1
    value: 65.91017343463396
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (hy)
    config: hy
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 59.49899125756556
  - type: f1
    value: 57.90333469917769
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (id)
    config: id
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 67.9219905850706
  - type: f1
    value: 67.23169403762938
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (is)
    config: is
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 56.486213853396094
  - type: f1
    value: 54.85282355583758
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (it)
    config: it
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 69.04169468728985
  - type: f1
    value: 68.83833333320462
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (ja)
    config: ja
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 73.88702084734365
  - type: f1
    value: 74.04474735232299
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (jv)
    config: jv
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 56.63416274377943
  - type: f1
    value: 55.11332211687954
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (ka)
    config: ka
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 52.23604572965702
  - type: f1
    value: 50.86529813991055
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (km)
    config: km
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 46.62407531943511
  - type: f1
    value: 43.63485467164535
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (kn)
    config: kn
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 59.15601882985878
  - type: f1
    value: 57.522837510959924
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (ko)
    config: ko
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 69.84532616005382
  - type: f1
    value: 69.60021127179697
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (lv)
    config: lv
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 56.65770006724949
  - type: f1
    value: 55.84219135523227
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (ml)
    config: ml
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 66.53665097511768
  - type: f1
    value: 65.09087787792639
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (mn)
    config: mn
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 59.31405514458642
  - type: f1
    value: 58.06135303831491
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (ms)
    config: ms
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 64.88231338264964
  - type: f1
    value: 62.751099407787926
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (my)
    config: my
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 58.86012104909213
  - type: f1
    value: 56.29118323058282
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (nb)
    config: nb
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 67.37390719569602
  - type: f1
    value: 66.27922244885102
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (nl)
    config: nl
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 70.8675184936113
  - type: f1
    value: 70.22146529932019
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (pl)
    config: pl
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 68.2212508406187
  - type: f1
    value: 67.77454802056282
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (pt)
    config: pt
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 68.18090114324143
  - type: f1
    value: 68.03737625431621
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (ro)
    config: ro
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 64.65030262273034
  - type: f1
    value: 63.792945486912856
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (ru)
    config: ru
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 69.48217888365838
  - type: f1
    value: 69.96028997292197
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (sl)
    config: sl
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 60.17821116341627
  - type: f1
    value: 59.3935969827171
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (sq)
    config: sq
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 62.86146603900471
  - type: f1
    value: 60.133692735032376
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (sv)
    config: sv
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 70.89441829186282
  - type: f1
    value: 70.03064076194089
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (sw)
    config: sw
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 58.15063887020847
  - type: f1
    value: 56.23326278499678
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (ta)
    config: ta
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 59.43846671149966
  - type: f1
    value: 57.70440450281974
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (te)
    config: te
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 60.8507061197041
  - type: f1
    value: 59.22916396061171
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (th)
    config: th
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 70.65568258238063
  - type: f1
    value: 69.90736239440633
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (tl)
    config: tl
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 60.8843308675185
  - type: f1
    value: 59.30332663713599
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (tr)
    config: tr
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 68.05312710154674
  - type: f1
    value: 67.44024062594775
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (ur)
    config: ur
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 62.111634162743776
  - type: f1
    value: 60.89083013084519
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (vi)
    config: vi
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 67.44115669132482
  - type: f1
    value: 67.92227541674552
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (zh-CN)
    config: zh-CN
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 74.4687289845326
  - type: f1
    value: 74.16376793486025
- task:
    type: Classification
  dataset:
    type: mteb/amazon_massive_scenario
    name: MTEB MassiveScenarioClassification (zh-TW)
    config: zh-TW
    split: test
    revision: 7d571f92784cd94a019292a1f45445077d0ef634
  metrics:
  - type: accuracy
    value: 68.31876260928043
  - type: f1
    value: 68.5246745215607
- task:
    type: Clustering
  dataset:
    type: mteb/medrxiv-clustering-p2p
    name: MTEB MedrxivClusteringP2P
    config: default
    split: test
    revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
  metrics:
  - type: v_measure
    value: 30.90431696479766
- task:
    type: Clustering
  dataset:
    type: mteb/medrxiv-clustering-s2s
    name: MTEB MedrxivClusteringS2S
    config: default
    split: test
    revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
  metrics:
  - type: v_measure
    value: 27.259158476693774
- task:
    type: Reranking
  dataset:
    type: mteb/mind_small
    name: MTEB MindSmallReranking
    config: default
    split: test
    revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
  metrics:
  - type: map
    value: 30.28445330838555
  - type: mrr
    value: 31.15758529581164
- task:
    type: Retrieval
  dataset:
    type: nfcorpus
    name: MTEB NFCorpus
    config: default
    split: test
    revision: None
  metrics:
  - type: map_at_1
    value: 5.353
  - type: map_at_10
    value: 11.565
  - type: map_at_100
    value: 14.097000000000001
  - type: map_at_1000
    value: 15.354999999999999
  - type: map_at_3
    value: 8.749
  - type: map_at_5
    value: 9.974
  - type: mrr_at_1
    value: 42.105
  - type: mrr_at_10
    value: 50.589
  - type: mrr_at_100
    value: 51.187000000000005
  - type: mrr_at_1000
    value: 51.233
  - type: mrr_at_3
    value: 48.246
  - type: mrr_at_5
    value: 49.546
  - type: ndcg_at_1
    value: 40.402
  - type: ndcg_at_10
    value: 31.009999999999998
  - type: ndcg_at_100
    value: 28.026
  - type: ndcg_at_1000
    value: 36.905
  - type: ndcg_at_3
    value: 35.983
  - type: ndcg_at_5
    value: 33.764
  - type: precision_at_1
    value: 42.105
  - type: precision_at_10
    value: 22.786
  - type: precision_at_100
    value: 6.916
  - type: precision_at_1000
    value: 1.981
  - type: precision_at_3
    value: 33.333
  - type: precision_at_5
    value: 28.731
  - type: recall_at_1
    value: 5.353
  - type: recall_at_10
    value: 15.039
  - type: recall_at_100
    value: 27.348
  - type: recall_at_1000
    value: 59.453
  - type: recall_at_3
    value: 9.792
  - type: recall_at_5
    value: 11.882
- task:
    type: Retrieval
  dataset:
    type: nq
    name: MTEB NQ
    config: default
    split: test
    revision: None
  metrics:
  - type: map_at_1
    value: 33.852
  - type: map_at_10
    value: 48.924
  - type: map_at_100
    value: 49.854
  - type: map_at_1000
    value: 49.886
  - type: map_at_3
    value: 44.9
  - type: map_at_5
    value: 47.387
  - type: mrr_at_1
    value: 38.035999999999994
  - type: mrr_at_10
    value: 51.644
  - type: mrr_at_100
    value: 52.339
  - type: mrr_at_1000
    value: 52.35999999999999
  - type: mrr_at_3
    value: 48.421
  - type: mrr_at_5
    value: 50.468999999999994
  - type: ndcg_at_1
    value: 38.007000000000005
  - type: ndcg_at_10
    value: 56.293000000000006
  - type: ndcg_at_100
    value: 60.167
  - type: ndcg_at_1000
    value: 60.916000000000004
  - type: ndcg_at_3
    value: 48.903999999999996
  - type: ndcg_at_5
    value: 52.978
  - type: precision_at_1
    value: 38.007000000000005
  - type: precision_at_10
    value: 9.041
  - type: precision_at_100
    value: 1.1199999999999999
  - type: precision_at_1000
    value: 0.11900000000000001
  - type: precision_at_3
    value: 22.084
  - type: precision_at_5
    value: 15.608
  - type: recall_at_1
    value: 33.852
  - type: recall_at_10
    value: 75.893
  - type: recall_at_100
    value: 92.589
  - type: recall_at_1000
    value: 98.153
  - type: recall_at_3
    value: 56.969
  - type: recall_at_5
    value: 66.283
- task:
    type: Retrieval
  dataset:
    type: quora
    name: MTEB QuoraRetrieval
    config: default
    split: test
    revision: None
  metrics:
  - type: map_at_1
    value: 69.174
  - type: map_at_10
    value: 82.891
  - type: map_at_100
    value: 83.545
  - type: map_at_1000
    value: 83.56700000000001
  - type: map_at_3
    value: 79.944
  - type: map_at_5
    value: 81.812
  - type: mrr_at_1
    value: 79.67999999999999
  - type: mrr_at_10
    value: 86.279
  - type: mrr_at_100
    value: 86.39
  - type: mrr_at_1000
    value: 86.392
  - type: mrr_at_3
    value: 85.21
  - type: mrr_at_5
    value: 85.92999999999999
  - type: ndcg_at_1
    value: 79.69000000000001
  - type: ndcg_at_10
    value: 86.929
  - type: ndcg_at_100
    value: 88.266
  - type: ndcg_at_1000
    value: 88.428
  - type: ndcg_at_3
    value: 83.899
  - type: ndcg_at_5
    value: 85.56700000000001
  - type: precision_at_1
    value: 79.69000000000001
  - type: precision_at_10
    value: 13.161000000000001
  - type: precision_at_100
    value: 1.513
  - type: precision_at_1000
    value: 0.156
  - type: precision_at_3
    value: 36.603
  - type: precision_at_5
    value: 24.138
  - type: recall_at_1
    value: 69.174
  - type: recall_at_10
    value: 94.529
  - type: recall_at_100
    value: 99.15
  - type: recall_at_1000
    value: 99.925
  - type: recall_at_3
    value: 85.86200000000001
  - type: recall_at_5
    value: 90.501
- task:
    type: Clustering
  dataset:
    type: mteb/reddit-clustering
    name: MTEB RedditClustering
    config: default
    split: test
    revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
  metrics:
  - type: v_measure
    value: 39.13064340585255
- task:
    type: Clustering
  dataset:
    type: mteb/reddit-clustering-p2p
    name: MTEB RedditClusteringP2P
    config: default
    split: test
    revision: 282350215ef01743dc01b456c7f5241fa8937f16
  metrics:
  - type: v_measure
    value: 58.97884249325877
- task:
    type: Retrieval
  dataset:
    type: scidocs
    name: MTEB SCIDOCS
    config: default
    split: test
    revision: None
  metrics:
  - type: map_at_1
    value: 3.4680000000000004
  - type: map_at_10
    value: 7.865
  - type: map_at_100
    value: 9.332
  - type: map_at_1000
    value: 9.587
  - type: map_at_3
    value: 5.800000000000001
  - type: map_at_5
    value: 6.8790000000000004
  - type: mrr_at_1
    value: 17.0
  - type: mrr_at_10
    value: 25.629
  - type: mrr_at_100
    value: 26.806
  - type: mrr_at_1000
    value: 26.889000000000003
  - type: mrr_at_3
    value: 22.8
  - type: mrr_at_5
    value: 24.26
  - type: ndcg_at_1
    value: 17.0
  - type: ndcg_at_10
    value: 13.895
  - type: ndcg_at_100
    value: 20.491999999999997
  - type: ndcg_at_1000
    value: 25.759999999999998
  - type: ndcg_at_3
    value: 13.347999999999999
  - type: ndcg_at_5
    value: 11.61
  - type: precision_at_1
    value: 17.0
  - type: precision_at_10
    value: 7.090000000000001
  - type: precision_at_100
    value: 1.669
  - type: precision_at_1000
    value: 0.294
  - type: precision_at_3
    value: 12.3
  - type: precision_at_5
    value: 10.02
  - type: recall_at_1
    value: 3.4680000000000004
  - type: recall_at_10
    value: 14.363000000000001
  - type: recall_at_100
    value: 33.875
  - type: recall_at_1000
    value: 59.711999999999996
  - type: recall_at_3
    value: 7.483
  - type: recall_at_5
    value: 10.173
- task:
    type: STS
  dataset:
    type: mteb/sickr-sts
    name: MTEB SICK-R
    config: default
    split: test
    revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
  metrics:
  - type: cos_sim_pearson
    value: 83.04084311714061
  - type: cos_sim_spearman
    value: 77.51342467443078
  - type: euclidean_pearson
    value: 80.0321166028479
  - type: euclidean_spearman
    value: 77.29249114733226
  - type: manhattan_pearson
    value: 80.03105964262431
  - type: manhattan_spearman
    value: 77.22373689514794
- task:
    type: STS
  dataset:
    type: mteb/sts12-sts
    name: MTEB STS12
    config: default
    split: test
    revision: a0d554a64d88156834ff5ae9920b964011b16384
  metrics:
  - type: cos_sim_pearson
    value: 84.1680158034387
  - type: cos_sim_spearman
    value: 76.55983344071117
  - type: euclidean_pearson
    value: 79.75266678300143
  - type: euclidean_spearman
    value: 75.34516823467025
  - type: manhattan_pearson
    value: 79.75959151517357
  - type: manhattan_spearman
    value: 75.42330344141912
- task:
    type: STS
  dataset:
    type: mteb/sts13-sts
    name: MTEB STS13
    config: default
    split: test
    revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
  metrics:
  - type: cos_sim_pearson
    value: 76.48898993209346
  - type: cos_sim_spearman
    value: 76.96954120323366
  - type: euclidean_pearson
    value: 76.94139109279668
  - type: euclidean_spearman
    value: 76.85860283201711
  - type: manhattan_pearson
    value: 76.6944095091912
  - type: manhattan_spearman
    value: 76.61096912972553
- task:
    type: STS
  dataset:
    type: mteb/sts14-sts
    name: MTEB STS14
    config: default
    split: test
    revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
  metrics:
  - type: cos_sim_pearson
    value: 77.85082366246944
  - type: cos_sim_spearman
    value: 75.52053350101731
  - type: euclidean_pearson
    value: 77.1165845070926
  - type: euclidean_spearman
    value: 75.31216065884388
  - type: manhattan_pearson
    value: 77.06193941833494
  - type: manhattan_spearman
    value: 75.31003701700112
- task:
    type: STS
  dataset:
    type: mteb/sts15-sts
    name: MTEB STS15
    config: default
    split: test
    revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
  metrics:
  - type: cos_sim_pearson
    value: 86.36305246526497
  - type: cos_sim_spearman
    value: 87.11704613927415
  - type: euclidean_pearson
    value: 86.04199125810939
  - type: euclidean_spearman
    value: 86.51117572414263
  - type: manhattan_pearson
    value: 86.0805106816633
  - type: manhattan_spearman
    value: 86.52798366512229
- task:
    type: STS
  dataset:
    type: mteb/sts16-sts
    name: MTEB STS16
    config: default
    split: test
    revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
  metrics:
  - type: cos_sim_pearson
    value: 82.18536255599724
  - type: cos_sim_spearman
    value: 83.63377151025418
  - type: euclidean_pearson
    value: 83.24657467993141
  - type: euclidean_spearman
    value: 84.02751481993825
  - type: manhattan_pearson
    value: 83.11941806582371
  - type: manhattan_spearman
    value: 83.84251281019304
- task:
    type: STS
  dataset:
    type: mteb/sts17-crosslingual-sts
    name: MTEB STS17 (ko-ko)
    config: ko-ko
    split: test
    revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
  metrics:
  - type: cos_sim_pearson
    value: 78.95816528475514
  - type: cos_sim_spearman
    value: 78.86607380120462
  - type: euclidean_pearson
    value: 78.51268699230545
  - type: euclidean_spearman
    value: 79.11649316502229
  - type: manhattan_pearson
    value: 78.32367302808157
  - type: manhattan_spearman
    value: 78.90277699624637
- task:
    type: STS
  dataset:
    type: mteb/sts17-crosslingual-sts
    name: MTEB STS17 (ar-ar)
    config: ar-ar
    split: test
    revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
  metrics:
  - type: cos_sim_pearson
    value: 72.89126914997624
  - type: cos_sim_spearman
    value: 73.0296921832678
  - type: euclidean_pearson
    value: 71.50385903677738
  - type: euclidean_spearman
    value: 73.13368899716289
  - type: manhattan_pearson
    value: 71.47421463379519
  - type: manhattan_spearman
    value: 73.03383242946575
- task:
    type: STS
  dataset:
    type: mteb/sts17-crosslingual-sts
    name: MTEB STS17 (en-ar)
    config: en-ar
    split: test
    revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
  metrics:
  - type: cos_sim_pearson
    value: 59.22923684492637
  - type: cos_sim_spearman
    value: 57.41013211368396
  - type: euclidean_pearson
    value: 61.21107388080905
  - type: euclidean_spearman
    value: 60.07620768697254
  - type: manhattan_pearson
    value: 59.60157142786555
  - type: manhattan_spearman
    value: 59.14069604103739
- task:
    type: STS
  dataset:
    type: mteb/sts17-crosslingual-sts
    name: MTEB STS17 (en-de)
    config: en-de
    split: test
    revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
  metrics:
  - type: cos_sim_pearson
    value: 76.24345978774299
  - type: cos_sim_spearman
    value: 77.24225743830719
  - type: euclidean_pearson
    value: 76.66226095469165
  - type: euclidean_spearman
    value: 77.60708820493146
  - type: manhattan_pearson
    value: 76.05303324760429
  - type: manhattan_spearman
    value: 76.96353149912348
- task:
    type: STS
  dataset:
    type: mteb/sts17-crosslingual-sts
    name: MTEB STS17 (en-en)
    config: en-en
    split: test
    revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
  metrics:
  - type: cos_sim_pearson
    value: 85.50879160160852
  - type: cos_sim_spearman
    value: 86.43594662965224
  - type: euclidean_pearson
    value: 86.06846012826577
  - type: euclidean_spearman
    value: 86.02041395794136
  - type: manhattan_pearson
    value: 86.10916255616904
  - type: manhattan_spearman
    value: 86.07346068198953
- task:
    type: STS
  dataset:
    type: mteb/sts17-crosslingual-sts
    name: MTEB STS17 (en-tr)
    config: en-tr
    split: test
    revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
  metrics:
  - type: cos_sim_pearson
    value: 58.39803698977196
  - type: cos_sim_spearman
    value: 55.96910950423142
  - type: euclidean_pearson
    value: 58.17941175613059
  - type: euclidean_spearman
    value: 55.03019330522745
  - type: manhattan_pearson
    value: 57.333358138183286
  - type: manhattan_spearman
    value: 54.04614023149965
- task:
    type: STS
  dataset:
    type: mteb/sts17-crosslingual-sts
    name: MTEB STS17 (es-en)
    config: es-en
    split: test
    revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
  metrics:
  - type: cos_sim_pearson
    value: 70.98304089637197
  - type: cos_sim_spearman
    value: 72.44071656215888
  - type: euclidean_pearson
    value: 72.19224359033983
  - type: euclidean_spearman
    value: 73.89871188913025
  - type: manhattan_pearson
    value: 71.21098311547406
  - type: manhattan_spearman
    value: 72.93405764824821
- task:
    type: STS
  dataset:
    type: mteb/sts17-crosslingual-sts
    name: MTEB STS17 (es-es)
    config: es-es
    split: test
    revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
  metrics:
  - type: cos_sim_pearson
    value: 85.99792397466308
  - type: cos_sim_spearman
    value: 84.83824377879495
  - type: euclidean_pearson
    value: 85.70043288694438
  - type: euclidean_spearman
    value: 84.70627558703686
  - type: manhattan_pearson
    value: 85.89570850150801
  - type: manhattan_spearman
    value: 84.95806105313007
- task:
    type: STS
  dataset:
    type: mteb/sts17-crosslingual-sts
    name: MTEB STS17 (fr-en)
    config: fr-en
    split: test
    revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
  metrics:
  - type: cos_sim_pearson
    value: 72.21850322994712
  - type: cos_sim_spearman
    value: 72.28669398117248
  - type: euclidean_pearson
    value: 73.40082510412948
  - type: euclidean_spearman
    value: 73.0326539281865
  - type: manhattan_pearson
    value: 71.8659633964841
  - type: manhattan_spearman
    value: 71.57817425823303
- task:
    type: STS
  dataset:
    type: mteb/sts17-crosslingual-sts
    name: MTEB STS17 (it-en)
    config: it-en
    split: test
    revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
  metrics:
  - type: cos_sim_pearson
    value: 75.80921368595645
  - type: cos_sim_spearman
    value: 77.33209091229315
  - type: euclidean_pearson
    value: 76.53159540154829
  - type: euclidean_spearman
    value: 78.17960842810093
  - type: manhattan_pearson
    value: 76.13530186637601
  - type: manhattan_spearman
    value: 78.00701437666875
- task:
    type: STS
  dataset:
    type: mteb/sts17-crosslingual-sts
    name: MTEB STS17 (nl-en)
    config: nl-en
    split: test
    revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
  metrics:
  - type: cos_sim_pearson
    value: 74.74980608267349
  - type: cos_sim_spearman
    value: 75.37597374318821
  - type: euclidean_pearson
    value: 74.90506081911661
  - type: euclidean_spearman
    value: 75.30151613124521
  - type: manhattan_pearson
    value: 74.62642745918002
  - type: manhattan_spearman
    value: 75.18619716592303
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (en)
    config: en
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 59.632662289205584
  - type: cos_sim_spearman
    value: 60.938543391610914
  - type: euclidean_pearson
    value: 62.113200529767056
  - type: euclidean_spearman
    value: 61.410312633261164
  - type: manhattan_pearson
    value: 61.75494698945686
  - type: manhattan_spearman
    value: 60.92726195322362
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (de)
    config: de
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 45.283470551557244
  - type: cos_sim_spearman
    value: 53.44833015864201
  - type: euclidean_pearson
    value: 41.17892011120893
  - type: euclidean_spearman
    value: 53.81441383126767
  - type: manhattan_pearson
    value: 41.17482200420659
  - type: manhattan_spearman
    value: 53.82180269276363
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (es)
    config: es
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 60.5069165306236
  - type: cos_sim_spearman
    value: 66.87803259033826
  - type: euclidean_pearson
    value: 63.5428979418236
  - type: euclidean_spearman
    value: 66.9293576586897
  - type: manhattan_pearson
    value: 63.59789526178922
  - type: manhattan_spearman
    value: 66.86555009875066
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (pl)
    config: pl
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 28.23026196280264
  - type: cos_sim_spearman
    value: 35.79397812652861
  - type: euclidean_pearson
    value: 17.828102102767353
  - type: euclidean_spearman
    value: 35.721501145568894
  - type: manhattan_pearson
    value: 17.77134274219677
  - type: manhattan_spearman
    value: 35.98107902846267
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (tr)
    config: tr
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 56.51946541393812
  - type: cos_sim_spearman
    value: 63.714686006214485
  - type: euclidean_pearson
    value: 58.32104651305898
  - type: euclidean_spearman
    value: 62.237110895702216
  - type: manhattan_pearson
    value: 58.579416468759185
  - type: manhattan_spearman
    value: 62.459738981727
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (ar)
    config: ar
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 48.76009839569795
  - type: cos_sim_spearman
    value: 56.65188431953149
  - type: euclidean_pearson
    value: 50.997682160915595
  - type: euclidean_spearman
    value: 55.99910008818135
  - type: manhattan_pearson
    value: 50.76220659606342
  - type: manhattan_spearman
    value: 55.517347595391456
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (ru)
    config: ru
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 51.232731157702425
  - type: cos_sim_spearman
    value: 59.89531877658345
  - type: euclidean_pearson
    value: 49.937914570348376
  - type: euclidean_spearman
    value: 60.220905659334036
  - type: manhattan_pearson
    value: 50.00987996844193
  - type: manhattan_spearman
    value: 60.081341480977926
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (zh)
    config: zh
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 54.717524559088005
  - type: cos_sim_spearman
    value: 66.83570886252286
  - type: euclidean_pearson
    value: 58.41338625505467
  - type: euclidean_spearman
    value: 66.68991427704938
  - type: manhattan_pearson
    value: 58.78638572916807
  - type: manhattan_spearman
    value: 66.58684161046335
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (fr)
    config: fr
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 73.2962042954962
  - type: cos_sim_spearman
    value: 76.58255504852025
  - type: euclidean_pearson
    value: 75.70983192778257
  - type: euclidean_spearman
    value: 77.4547684870542
  - type: manhattan_pearson
    value: 75.75565853870485
  - type: manhattan_spearman
    value: 76.90208974949428
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (de-en)
    config: de-en
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 54.47396266924846
  - type: cos_sim_spearman
    value: 56.492267162048606
  - type: euclidean_pearson
    value: 55.998505203070195
  - type: euclidean_spearman
    value: 56.46447012960222
  - type: manhattan_pearson
    value: 54.873172394430995
  - type: manhattan_spearman
    value: 56.58111534551218
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (es-en)
    config: es-en
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 69.87177267688686
  - type: cos_sim_spearman
    value: 74.57160943395763
  - type: euclidean_pearson
    value: 70.88330406826788
  - type: euclidean_spearman
    value: 74.29767636038422
  - type: manhattan_pearson
    value: 71.38245248369536
  - type: manhattan_spearman
    value: 74.53102232732175
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (it)
    config: it
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 72.80225656959544
  - type: cos_sim_spearman
    value: 76.52646173725735
  - type: euclidean_pearson
    value: 73.95710720200799
  - type: euclidean_spearman
    value: 76.54040031984111
  - type: manhattan_pearson
    value: 73.89679971946774
  - type: manhattan_spearman
    value: 76.60886958161574
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (pl-en)
    config: pl-en
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 70.70844249898789
  - type: cos_sim_spearman
    value: 72.68571783670241
  - type: euclidean_pearson
    value: 72.38800772441031
  - type: euclidean_spearman
    value: 72.86804422703312
  - type: manhattan_pearson
    value: 71.29840508203515
  - type: manhattan_spearman
    value: 71.86264441749513
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (zh-en)
    config: zh-en
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 58.647478923935694
  - type: cos_sim_spearman
    value: 63.74453623540931
  - type: euclidean_pearson
    value: 59.60138032437505
  - type: euclidean_spearman
    value: 63.947930832166065
  - type: manhattan_pearson
    value: 58.59735509491861
  - type: manhattan_spearman
    value: 62.082503844627404
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (es-it)
    config: es-it
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 65.8722516867162
  - type: cos_sim_spearman
    value: 71.81208592523012
  - type: euclidean_pearson
    value: 67.95315252165956
  - type: euclidean_spearman
    value: 73.00749822046009
  - type: manhattan_pearson
    value: 68.07884688638924
  - type: manhattan_spearman
    value: 72.34210325803069
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (de-fr)
    config: de-fr
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 54.5405814240949
  - type: cos_sim_spearman
    value: 60.56838649023775
  - type: euclidean_pearson
    value: 53.011731611314104
  - type: euclidean_spearman
    value: 58.533194841668426
  - type: manhattan_pearson
    value: 53.623067729338494
  - type: manhattan_spearman
    value: 58.018756154446926
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (de-pl)
    config: de-pl
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 13.611046866216112
  - type: cos_sim_spearman
    value: 28.238192909158492
  - type: euclidean_pearson
    value: 22.16189199885129
  - type: euclidean_spearman
    value: 35.012895679076564
  - type: manhattan_pearson
    value: 21.969771178698387
  - type: manhattan_spearman
    value: 32.456985088607475
- task:
    type: STS
  dataset:
    type: mteb/sts22-crosslingual-sts
    name: MTEB STS22 (fr-pl)
    config: fr-pl
    split: test
    revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
  metrics:
  - type: cos_sim_pearson
    value: 74.58077407011655
  - type: cos_sim_spearman
    value: 84.51542547285167
  - type: euclidean_pearson
    value: 74.64613843596234
  - type: euclidean_spearman
    value: 84.51542547285167
  - type: manhattan_pearson
    value: 75.15335973101396
  - type: manhattan_spearman
    value: 84.51542547285167
- task:
    type: STS
  dataset:
    type: mteb/stsbenchmark-sts
    name: MTEB STSBenchmark
    config: default
    split: test
    revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
  metrics:
  - type: cos_sim_pearson
    value: 82.0739825531578
  - type: cos_sim_spearman
    value: 84.01057479311115
  - type: euclidean_pearson
    value: 83.85453227433344
  - type: euclidean_spearman
    value: 84.01630226898655
  - type: manhattan_pearson
    value: 83.75323603028978
  - type: manhattan_spearman
    value: 83.89677983727685
- task:
    type: Reranking
  dataset:
    type: mteb/scidocs-reranking
    name: MTEB SciDocsRR
    config: default
    split: test
    revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
  metrics:
  - type: map
    value: 78.12945623123957
  - type: mrr
    value: 93.87738713719106
- task:
    type: Retrieval
  dataset:
    type: scifact
    name: MTEB SciFact
    config: default
    split: test
    revision: None
  metrics:
  - type: map_at_1
    value: 52.983000000000004
  - type: map_at_10
    value: 62.946000000000005
  - type: map_at_100
    value: 63.514
  - type: map_at_1000
    value: 63.554
  - type: map_at_3
    value: 60.183
  - type: map_at_5
    value: 61.672000000000004
  - type: mrr_at_1
    value: 55.667
  - type: mrr_at_10
    value: 64.522
  - type: mrr_at_100
    value: 64.957
  - type: mrr_at_1000
    value: 64.995
  - type: mrr_at_3
    value: 62.388999999999996
  - type: mrr_at_5
    value: 63.639
  - type: ndcg_at_1
    value: 55.667
  - type: ndcg_at_10
    value: 67.704
  - type: ndcg_at_100
    value: 70.299
  - type: ndcg_at_1000
    value: 71.241
  - type: ndcg_at_3
    value: 62.866
  - type: ndcg_at_5
    value: 65.16999999999999
  - type: precision_at_1
    value: 55.667
  - type: precision_at_10
    value: 9.033
  - type: precision_at_100
    value: 1.053
  - type: precision_at_1000
    value: 0.11299999999999999
  - type: precision_at_3
    value: 24.444
  - type: precision_at_5
    value: 16.133
  - type: recall_at_1
    value: 52.983000000000004
  - type: recall_at_10
    value: 80.656
  - type: recall_at_100
    value: 92.5
  - type: recall_at_1000
    value: 99.667
  - type: recall_at_3
    value: 67.744
  - type: recall_at_5
    value: 73.433
- task:
    type: PairClassification
  dataset:
    type: mteb/sprintduplicatequestions-pairclassification
    name: MTEB SprintDuplicateQuestions
    config: default
    split: test
    revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
  metrics:
  - type: cos_sim_accuracy
    value: 99.72772277227723
  - type: cos_sim_ap
    value: 92.17845897992215
  - type: cos_sim_f1
    value: 85.9746835443038
  - type: cos_sim_precision
    value: 87.07692307692308
  - type: cos_sim_recall
    value: 84.89999999999999
  - type: dot_accuracy
    value: 99.3039603960396
  - type: dot_ap
    value: 60.70244020124878
  - type: dot_f1
    value: 59.92742353551063
  - type: dot_precision
    value: 62.21743810548978
  - type: dot_recall
    value: 57.8
  - type: euclidean_accuracy
    value: 99.71683168316832
  - type: euclidean_ap
    value: 91.53997039964659
  - type: euclidean_f1
    value: 84.88372093023257
  - type: euclidean_precision
    value: 90.02242152466367
  - type: euclidean_recall
    value: 80.30000000000001
  - type: manhattan_accuracy
    value: 99.72376237623763
  - type: manhattan_ap
    value: 91.80756777790289
  - type: manhattan_f1
    value: 85.48468106479157
  - type: manhattan_precision
    value: 85.8728557013118
  - type: manhattan_recall
    value: 85.1
  - type: max_accuracy
    value: 99.72772277227723
  - type: max_ap
    value: 92.17845897992215
  - type: max_f1
    value: 85.9746835443038
- task:
    type: Clustering
  dataset:
    type: mteb/stackexchange-clustering
    name: MTEB StackExchangeClustering
    config: default
    split: test
    revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
  metrics:
  - type: v_measure
    value: 53.52464042600003
- task:
    type: Clustering
  dataset:
    type: mteb/stackexchange-clustering-p2p
    name: MTEB StackExchangeClusteringP2P
    config: default
    split: test
    revision: 815ca46b2622cec33ccafc3735d572c266efdb44
  metrics:
  - type: v_measure
    value: 32.071631948736
- task:
    type: Reranking
  dataset:
    type: mteb/stackoverflowdupquestions-reranking
    name: MTEB StackOverflowDupQuestions
    config: default
    split: test
    revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
  metrics:
  - type: map
    value: 49.19552407604654
  - type: mrr
    value: 49.95269130379425
- task:
    type: Summarization
  dataset:
    type: mteb/summeval
    name: MTEB SummEval
    config: default
    split: test
    revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
  metrics:
  - type: cos_sim_pearson
    value: 29.345293033095427
  - type: cos_sim_spearman
    value: 29.976931423258403
  - type: dot_pearson
    value: 27.047078008958408
  - type: dot_spearman
    value: 27.75894368380218
- task:
    type: Retrieval
  dataset:
    type: trec-covid
    name: MTEB TRECCOVID
    config: default
    split: test
    revision: None
  metrics:
  - type: map_at_1
    value: 0.22
  - type: map_at_10
    value: 1.706
  - type: map_at_100
    value: 9.634
  - type: map_at_1000
    value: 23.665
  - type: map_at_3
    value: 0.5950000000000001
  - type: map_at_5
    value: 0.95
  - type: mrr_at_1
    value: 86.0
  - type: mrr_at_10
    value: 91.8
  - type: mrr_at_100
    value: 91.8
  - type: mrr_at_1000
    value: 91.8
  - type: mrr_at_3
    value: 91.0
  - type: mrr_at_5
    value: 91.8
  - type: ndcg_at_1
    value: 80.0
  - type: ndcg_at_10
    value: 72.573
  - type: ndcg_at_100
    value: 53.954
  - type: ndcg_at_1000
    value: 47.760999999999996
  - type: ndcg_at_3
    value: 76.173
  - type: ndcg_at_5
    value: 75.264
  - type: precision_at_1
    value: 86.0
  - type: precision_at_10
    value: 76.4
  - type: precision_at_100
    value: 55.50000000000001
  - type: precision_at_1000
    value: 21.802
  - type: precision_at_3
    value: 81.333
  - type: precision_at_5
    value: 80.4
  - type: recall_at_1
    value: 0.22
  - type: recall_at_10
    value: 1.925
  - type: recall_at_100
    value: 12.762
  - type: recall_at_1000
    value: 44.946000000000005
  - type: recall_at_3
    value: 0.634
  - type: recall_at_5
    value: 1.051
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (sqi-eng)
    config: sqi-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 91.0
  - type: f1
    value: 88.55666666666666
  - type: precision
    value: 87.46166666666667
  - type: recall
    value: 91.0
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (fry-eng)
    config: fry-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 57.22543352601156
  - type: f1
    value: 51.03220478943021
  - type: precision
    value: 48.8150289017341
  - type: recall
    value: 57.22543352601156
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (kur-eng)
    config: kur-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 46.58536585365854
  - type: f1
    value: 39.66870798578116
  - type: precision
    value: 37.416085946573745
  - type: recall
    value: 46.58536585365854
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (tur-eng)
    config: tur-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 89.7
  - type: f1
    value: 86.77999999999999
  - type: precision
    value: 85.45333333333332
  - type: recall
    value: 89.7
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (deu-eng)
    config: deu-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 97.39999999999999
  - type: f1
    value: 96.58333333333331
  - type: precision
    value: 96.2
  - type: recall
    value: 97.39999999999999
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (nld-eng)
    config: nld-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 92.4
  - type: f1
    value: 90.3
  - type: precision
    value: 89.31666666666668
  - type: recall
    value: 92.4
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (ron-eng)
    config: ron-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 86.9
  - type: f1
    value: 83.67190476190476
  - type: precision
    value: 82.23333333333332
  - type: recall
    value: 86.9
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (ang-eng)
    config: ang-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 50.0
  - type: f1
    value: 42.23229092632078
  - type: precision
    value: 39.851634683724235
  - type: recall
    value: 50.0
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (ido-eng)
    config: ido-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 76.3
  - type: f1
    value: 70.86190476190477
  - type: precision
    value: 68.68777777777777
  - type: recall
    value: 76.3
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (jav-eng)
    config: jav-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 57.073170731707314
  - type: f1
    value: 50.658958927251604
  - type: precision
    value: 48.26480836236933
  - type: recall
    value: 57.073170731707314
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (isl-eng)
    config: isl-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 68.2
  - type: f1
    value: 62.156507936507936
  - type: precision
    value: 59.84964285714286
  - type: recall
    value: 68.2
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (slv-eng)
    config: slv-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 77.52126366950182
  - type: f1
    value: 72.8496210148701
  - type: precision
    value: 70.92171498003819
  - type: recall
    value: 77.52126366950182
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (cym-eng)
    config: cym-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 70.78260869565217
  - type: f1
    value: 65.32422360248447
  - type: precision
    value: 63.063067367415194
  - type: recall
    value: 70.78260869565217
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (kaz-eng)
    config: kaz-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 78.43478260869566
  - type: f1
    value: 73.02608695652172
  - type: precision
    value: 70.63768115942028
  - type: recall
    value: 78.43478260869566
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (est-eng)
    config: est-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 60.9
  - type: f1
    value: 55.309753694581275
  - type: precision
    value: 53.130476190476195
  - type: recall
    value: 60.9
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (heb-eng)
    config: heb-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 72.89999999999999
  - type: f1
    value: 67.92023809523809
  - type: precision
    value: 65.82595238095237
  - type: recall
    value: 72.89999999999999
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (gla-eng)
    config: gla-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 46.80337756332931
  - type: f1
    value: 39.42174900558496
  - type: precision
    value: 36.97101116280851
  - type: recall
    value: 46.80337756332931
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (mar-eng)
    config: mar-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 89.8
  - type: f1
    value: 86.79
  - type: precision
    value: 85.375
  - type: recall
    value: 89.8
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (lat-eng)
    config: lat-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 47.199999999999996
  - type: f1
    value: 39.95484348984349
  - type: precision
    value: 37.561071428571424
  - type: recall
    value: 47.199999999999996
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (bel-eng)
    config: bel-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 87.8
  - type: f1
    value: 84.68190476190475
  - type: precision
    value: 83.275
  - type: recall
    value: 87.8
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (pms-eng)
    config: pms-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 48.76190476190476
  - type: f1
    value: 42.14965986394558
  - type: precision
    value: 39.96743626743626
  - type: recall
    value: 48.76190476190476
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (gle-eng)
    config: gle-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 66.10000000000001
  - type: f1
    value: 59.58580086580086
  - type: precision
    value: 57.150238095238095
  - type: recall
    value: 66.10000000000001
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (pes-eng)
    config: pes-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 87.3
  - type: f1
    value: 84.0
  - type: precision
    value: 82.48666666666666
  - type: recall
    value: 87.3
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (nob-eng)
    config: nob-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 90.4
  - type: f1
    value: 87.79523809523809
  - type: precision
    value: 86.6
  - type: recall
    value: 90.4
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (bul-eng)
    config: bul-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 87.0
  - type: f1
    value: 83.81
  - type: precision
    value: 82.36666666666666
  - type: recall
    value: 87.0
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (cbk-eng)
    config: cbk-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 63.9
  - type: f1
    value: 57.76533189033189
  - type: precision
    value: 55.50595238095239
  - type: recall
    value: 63.9
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (hun-eng)
    config: hun-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 76.1
  - type: f1
    value: 71.83690476190478
  - type: precision
    value: 70.04928571428573
  - type: recall
    value: 76.1
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (uig-eng)
    config: uig-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 66.3
  - type: f1
    value: 59.32626984126984
  - type: precision
    value: 56.62535714285713
  - type: recall
    value: 66.3
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (rus-eng)
    config: rus-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 90.60000000000001
  - type: f1
    value: 87.96333333333334
  - type: precision
    value: 86.73333333333333
  - type: recall
    value: 90.60000000000001
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (spa-eng)
    config: spa-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 93.10000000000001
  - type: f1
    value: 91.10000000000001
  - type: precision
    value: 90.16666666666666
  - type: recall
    value: 93.10000000000001
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (hye-eng)
    config: hye-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 85.71428571428571
  - type: f1
    value: 82.29142600436403
  - type: precision
    value: 80.8076626877166
  - type: recall
    value: 85.71428571428571
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (tel-eng)
    config: tel-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 88.88888888888889
  - type: f1
    value: 85.7834757834758
  - type: precision
    value: 84.43732193732193
  - type: recall
    value: 88.88888888888889
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (afr-eng)
    config: afr-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 88.5
  - type: f1
    value: 85.67190476190476
  - type: precision
    value: 84.43333333333332
  - type: recall
    value: 88.5
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (mon-eng)
    config: mon-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 82.72727272727273
  - type: f1
    value: 78.21969696969695
  - type: precision
    value: 76.18181818181819
  - type: recall
    value: 82.72727272727273
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (arz-eng)
    config: arz-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 61.0062893081761
  - type: f1
    value: 55.13976240391334
  - type: precision
    value: 52.92112499659669
  - type: recall
    value: 61.0062893081761
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (hrv-eng)
    config: hrv-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 89.5
  - type: f1
    value: 86.86666666666666
  - type: precision
    value: 85.69166666666668
  - type: recall
    value: 89.5
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (nov-eng)
    config: nov-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 73.54085603112841
  - type: f1
    value: 68.56031128404669
  - type: precision
    value: 66.53047989623866
  - type: recall
    value: 73.54085603112841
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (gsw-eng)
    config: gsw-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 43.58974358974359
  - type: f1
    value: 36.45299145299145
  - type: precision
    value: 33.81155881155882
  - type: recall
    value: 43.58974358974359
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (nds-eng)
    config: nds-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 59.599999999999994
  - type: f1
    value: 53.264689754689755
  - type: precision
    value: 50.869166666666665
  - type: recall
    value: 59.599999999999994
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (ukr-eng)
    config: ukr-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 85.2
  - type: f1
    value: 81.61666666666665
  - type: precision
    value: 80.02833333333335
  - type: recall
    value: 85.2
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (uzb-eng)
    config: uzb-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 63.78504672897196
  - type: f1
    value: 58.00029669188548
  - type: precision
    value: 55.815809968847354
  - type: recall
    value: 63.78504672897196
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (lit-eng)
    config: lit-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 66.5
  - type: f1
    value: 61.518333333333345
  - type: precision
    value: 59.622363699102834
  - type: recall
    value: 66.5
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (ina-eng)
    config: ina-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 88.6
  - type: f1
    value: 85.60222222222221
  - type: precision
    value: 84.27916666666665
  - type: recall
    value: 88.6
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (lfn-eng)
    config: lfn-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 58.699999999999996
  - type: f1
    value: 52.732375957375965
  - type: precision
    value: 50.63214035964035
  - type: recall
    value: 58.699999999999996
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (zsm-eng)
    config: zsm-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 92.10000000000001
  - type: f1
    value: 89.99666666666667
  - type: precision
    value: 89.03333333333333
  - type: recall
    value: 92.10000000000001
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (ita-eng)
    config: ita-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 90.10000000000001
  - type: f1
    value: 87.55666666666667
  - type: precision
    value: 86.36166666666668
  - type: recall
    value: 90.10000000000001
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (cmn-eng)
    config: cmn-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 91.4
  - type: f1
    value: 88.89000000000001
  - type: precision
    value: 87.71166666666666
  - type: recall
    value: 91.4
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (lvs-eng)
    config: lvs-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 65.7
  - type: f1
    value: 60.67427750410509
  - type: precision
    value: 58.71785714285714
  - type: recall
    value: 65.7
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (glg-eng)
    config: glg-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 85.39999999999999
  - type: f1
    value: 81.93190476190475
  - type: precision
    value: 80.37833333333333
  - type: recall
    value: 85.39999999999999
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (ceb-eng)
    config: ceb-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 47.833333333333336
  - type: f1
    value: 42.006625781625786
  - type: precision
    value: 40.077380952380956
  - type: recall
    value: 47.833333333333336
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (bre-eng)
    config: bre-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 10.4
  - type: f1
    value: 8.24465007215007
  - type: precision
    value: 7.664597069597071
  - type: recall
    value: 10.4
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (ben-eng)
    config: ben-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 82.6
  - type: f1
    value: 77.76333333333334
  - type: precision
    value: 75.57833333333332
  - type: recall
    value: 82.6
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (swg-eng)
    config: swg-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 52.67857142857143
  - type: f1
    value: 44.302721088435376
  - type: precision
    value: 41.49801587301587
  - type: recall
    value: 52.67857142857143
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (arq-eng)
    config: arq-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 28.3205268935236
  - type: f1
    value: 22.426666605171157
  - type: precision
    value: 20.685900116470915
  - type: recall
    value: 28.3205268935236
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (kab-eng)
    config: kab-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 22.7
  - type: f1
    value: 17.833970473970474
  - type: precision
    value: 16.407335164835164
  - type: recall
    value: 22.7
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (fra-eng)
    config: fra-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 92.2
  - type: f1
    value: 89.92999999999999
  - type: precision
    value: 88.87
  - type: recall
    value: 92.2
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (por-eng)
    config: por-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 91.4
  - type: f1
    value: 89.25
  - type: precision
    value: 88.21666666666667
  - type: recall
    value: 91.4
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (tat-eng)
    config: tat-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 69.19999999999999
  - type: f1
    value: 63.38269841269841
  - type: precision
    value: 61.14773809523809
  - type: recall
    value: 69.19999999999999
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (oci-eng)
    config: oci-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 48.8
  - type: f1
    value: 42.839915639915645
  - type: precision
    value: 40.770287114845935
  - type: recall
    value: 48.8
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (pol-eng)
    config: pol-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 88.8
  - type: f1
    value: 85.90666666666668
  - type: precision
    value: 84.54166666666666
  - type: recall
    value: 88.8
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (war-eng)
    config: war-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 46.6
  - type: f1
    value: 40.85892920804686
  - type: precision
    value: 38.838223114604695
  - type: recall
    value: 46.6
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (aze-eng)
    config: aze-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 84.0
  - type: f1
    value: 80.14190476190475
  - type: precision
    value: 78.45333333333333
  - type: recall
    value: 84.0
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (vie-eng)
    config: vie-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 90.5
  - type: f1
    value: 87.78333333333333
  - type: precision
    value: 86.5
  - type: recall
    value: 90.5
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (nno-eng)
    config: nno-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 74.5
  - type: f1
    value: 69.48397546897547
  - type: precision
    value: 67.51869047619049
  - type: recall
    value: 74.5
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (cha-eng)
    config: cha-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 32.846715328467155
  - type: f1
    value: 27.828177499710343
  - type: precision
    value: 26.63451511991658
  - type: recall
    value: 32.846715328467155
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (mhr-eng)
    config: mhr-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 8.0
  - type: f1
    value: 6.07664116764988
  - type: precision
    value: 5.544177607179943
  - type: recall
    value: 8.0
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (dan-eng)
    config: dan-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 87.6
  - type: f1
    value: 84.38555555555554
  - type: precision
    value: 82.91583333333334
  - type: recall
    value: 87.6
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (ell-eng)
    config: ell-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 87.5
  - type: f1
    value: 84.08333333333331
  - type: precision
    value: 82.47333333333333
  - type: recall
    value: 87.5
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (amh-eng)
    config: amh-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 80.95238095238095
  - type: f1
    value: 76.13095238095238
  - type: precision
    value: 74.05753968253967
  - type: recall
    value: 80.95238095238095
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (pam-eng)
    config: pam-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 8.799999999999999
  - type: f1
    value: 6.971422975172975
  - type: precision
    value: 6.557814916172301
  - type: recall
    value: 8.799999999999999
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (hsb-eng)
    config: hsb-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 44.099378881987576
  - type: f1
    value: 37.01649742022413
  - type: precision
    value: 34.69420618488942
  - type: recall
    value: 44.099378881987576
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (srp-eng)
    config: srp-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 84.3
  - type: f1
    value: 80.32666666666667
  - type: precision
    value: 78.60666666666665
  - type: recall
    value: 84.3
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (epo-eng)
    config: epo-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 92.5
  - type: f1
    value: 90.49666666666666
  - type: precision
    value: 89.56666666666668
  - type: recall
    value: 92.5
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (kzj-eng)
    config: kzj-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 10.0
  - type: f1
    value: 8.268423529875141
  - type: precision
    value: 7.878118605532398
  - type: recall
    value: 10.0
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (awa-eng)
    config: awa-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 79.22077922077922
  - type: f1
    value: 74.27128427128426
  - type: precision
    value: 72.28715728715729
  - type: recall
    value: 79.22077922077922
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (fao-eng)
    config: fao-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 65.64885496183206
  - type: f1
    value: 58.87495456197747
  - type: precision
    value: 55.992366412213734
  - type: recall
    value: 65.64885496183206
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (mal-eng)
    config: mal-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 96.06986899563319
  - type: f1
    value: 94.78408539543909
  - type: precision
    value: 94.15332362930616
  - type: recall
    value: 96.06986899563319
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (ile-eng)
    config: ile-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 77.2
  - type: f1
    value: 71.72571428571428
  - type: precision
    value: 69.41000000000001
  - type: recall
    value: 77.2
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (bos-eng)
    config: bos-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 86.4406779661017
  - type: f1
    value: 83.2391713747646
  - type: precision
    value: 81.74199623352166
  - type: recall
    value: 86.4406779661017
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (cor-eng)
    config: cor-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 8.4
  - type: f1
    value: 6.017828743398003
  - type: precision
    value: 5.4829865484756795
  - type: recall
    value: 8.4
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (cat-eng)
    config: cat-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 83.5
  - type: f1
    value: 79.74833333333333
  - type: precision
    value: 78.04837662337664
  - type: recall
    value: 83.5
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (eus-eng)
    config: eus-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 60.4
  - type: f1
    value: 54.467301587301584
  - type: precision
    value: 52.23242424242424
  - type: recall
    value: 60.4
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (yue-eng)
    config: yue-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 74.9
  - type: f1
    value: 69.68699134199134
  - type: precision
    value: 67.59873015873016
  - type: recall
    value: 74.9
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (swe-eng)
    config: swe-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 88.0
  - type: f1
    value: 84.9652380952381
  - type: precision
    value: 83.66166666666666
  - type: recall
    value: 88.0
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (dtp-eng)
    config: dtp-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 9.1
  - type: f1
    value: 7.681244588744588
  - type: precision
    value: 7.370043290043291
  - type: recall
    value: 9.1
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (kat-eng)
    config: kat-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 80.9651474530831
  - type: f1
    value: 76.84220605132133
  - type: precision
    value: 75.19606398962966
  - type: recall
    value: 80.9651474530831
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (jpn-eng)
    config: jpn-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 86.9
  - type: f1
    value: 83.705
  - type: precision
    value: 82.3120634920635
  - type: recall
    value: 86.9
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (csb-eng)
    config: csb-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 29.64426877470356
  - type: f1
    value: 23.98763072676116
  - type: precision
    value: 22.506399397703746
  - type: recall
    value: 29.64426877470356
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (xho-eng)
    config: xho-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 70.4225352112676
  - type: f1
    value: 62.84037558685445
  - type: precision
    value: 59.56572769953053
  - type: recall
    value: 70.4225352112676
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (orv-eng)
    config: orv-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 19.64071856287425
  - type: f1
    value: 15.125271011207756
  - type: precision
    value: 13.865019261197494
  - type: recall
    value: 19.64071856287425
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (ind-eng)
    config: ind-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 90.2
  - type: f1
    value: 87.80666666666666
  - type: precision
    value: 86.70833333333331
  - type: recall
    value: 90.2
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (tuk-eng)
    config: tuk-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 23.15270935960591
  - type: f1
    value: 18.407224958949097
  - type: precision
    value: 16.982385430661292
  - type: recall
    value: 23.15270935960591
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (max-eng)
    config: max-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 55.98591549295775
  - type: f1
    value: 49.94718309859154
  - type: precision
    value: 47.77864154624717
  - type: recall
    value: 55.98591549295775
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (swh-eng)
    config: swh-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 73.07692307692307
  - type: f1
    value: 66.74358974358974
  - type: precision
    value: 64.06837606837607
  - type: recall
    value: 73.07692307692307
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (hin-eng)
    config: hin-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 94.89999999999999
  - type: f1
    value: 93.25
  - type: precision
    value: 92.43333333333332
  - type: recall
    value: 94.89999999999999
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (dsb-eng)
    config: dsb-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 37.78705636743215
  - type: f1
    value: 31.63899658680452
  - type: precision
    value: 29.72264397629742
  - type: recall
    value: 37.78705636743215
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (ber-eng)
    config: ber-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 21.6
  - type: f1
    value: 16.91697302697303
  - type: precision
    value: 15.71225147075147
  - type: recall
    value: 21.6
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (tam-eng)
    config: tam-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 85.01628664495115
  - type: f1
    value: 81.38514037536838
  - type: precision
    value: 79.83170466883823
  - type: recall
    value: 85.01628664495115
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (slk-eng)
    config: slk-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 83.39999999999999
  - type: f1
    value: 79.96380952380952
  - type: precision
    value: 78.48333333333333
  - type: recall
    value: 83.39999999999999
- task:
    type: BitextMining
  dataset:
    type: mteb/tatoeba-bitext-mining
    name: MTEB Tatoeba (tgl-eng)
    config: tgl-eng
    split: test
    revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
  metrics:
  - type: accuracy
    value: 83.2
  - type: f1
    value: 79.26190476190476
  - type: precision
value: 77.58833333333334 - type: recall value: 83.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ast-eng) config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 75.59055118110236 - type: f1 value: 71.66854143232096 - type: precision value: 70.30183727034121 - type: recall value: 75.59055118110236 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mkd-eng) config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.5 - type: f1 value: 59.26095238095238 - type: precision value: 56.81909090909092 - type: recall value: 65.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (khm-eng) config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 55.26315789473685 - type: f1 value: 47.986523325858506 - type: precision value: 45.33950006595436 - type: recall value: 55.26315789473685 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ces-eng) config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.89999999999999 - type: f1 value: 78.835 - type: precision value: 77.04761904761905 - type: recall value: 82.89999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tzl-eng) config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 43.269230769230774 - type: f1 value: 36.20421245421245 - type: precision value: 33.57371794871795 - type: recall value: 43.269230769230774 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (urd-eng) config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.0 - type: f1 value: 84.70666666666666 - type: precision value: 83.23166666666665 - type: recall value: 88.0 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ara-eng) config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.4 - type: f1 value: 72.54666666666667 - type: precision value: 70.54318181818181 - type: recall value: 77.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kor-eng) config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 78.60000000000001 - type: f1 value: 74.1588888888889 - type: precision value: 72.30250000000001 - type: recall value: 78.60000000000001 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (yid-eng) config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 72.40566037735849 - type: f1 value: 66.82587328813744 - type: precision value: 64.75039308176099 - type: recall value: 72.40566037735849 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fin-eng) config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 73.8 - type: f1 value: 68.56357142857144 - type: precision value: 66.3178822055138 - type: recall value: 73.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tha-eng) config: tha-eng split: test revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.78832116788321 - type: f1 value: 89.3552311435523 - type: precision value: 88.20559610705597 - type: recall value: 91.78832116788321 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (wuu-eng) config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.3 - type: f1 value: 69.05085581085581 - type: precision value: 66.955 - type: recall value: 74.3 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.896 - type: map_at_10 value: 8.993 - type: map_at_100 value: 14.133999999999999 - type: map_at_1000 value: 15.668000000000001 - type: map_at_3 value: 5.862 - type: map_at_5 value: 7.17 - type: mrr_at_1 value: 34.694 - type: mrr_at_10 value: 42.931000000000004 - type: mrr_at_100 value: 44.81 - type: mrr_at_1000 value: 44.81 - type: mrr_at_3 value: 38.435 - type: mrr_at_5 value: 41.701 - type: ndcg_at_1 value: 31.633 - type: ndcg_at_10 value: 21.163 - type: ndcg_at_100 value: 33.306000000000004 - type: ndcg_at_1000 value: 45.275999999999996 - type: ndcg_at_3 value: 25.685999999999996 - type: ndcg_at_5 value: 23.732 - type: precision_at_1 value: 34.694 - type: precision_at_10 value: 17.755000000000003 - type: precision_at_100 value: 6.938999999999999 - type: precision_at_1000 value: 1.48 - type: precision_at_3 value: 25.85 - type: precision_at_5 value: 23.265 - type: recall_at_1 value: 2.896 - type: recall_at_10 value: 13.333999999999998 - type: recall_at_100 value: 43.517 - type: recall_at_1000 value: 79.836 - type: recall_at_3 value: 6.306000000000001 - type: recall_at_5 value: 8.825 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 69.3874 - type: ap value: 13.829909072469423 - type: f1 value: 53.54534203543492 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 62.62026032823995 - type: f1 value: 62.85251350485221 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 33.21527881409797 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.97943613280086 - type: cos_sim_ap value: 70.75454316885921 - type: cos_sim_f1 value: 65.38274012676743 - type: cos_sim_precision value: 60.761214318078835 - type: cos_sim_recall value: 70.76517150395777 - type: dot_accuracy value: 79.0546581629612 - type: dot_ap value: 47.3197121792147 - type: dot_f1 value: 49.20106524633821 - type: dot_precision value: 42.45499808502489 - type: dot_recall value: 58.49604221635884 - type: euclidean_accuracy value: 85.08076533349228 - type: euclidean_ap value: 70.95016106374474 - type: euclidean_f1 value: 65.43987900176455 - type: euclidean_precision value: 62.64478764478765 - type: euclidean_recall value: 68.49604221635884 - type: 
manhattan_accuracy value: 84.93771234428085 - type: manhattan_ap value: 70.63668388755362 - type: manhattan_f1 value: 65.23895401262398 - type: manhattan_precision value: 56.946084218811485 - type: manhattan_recall value: 76.35883905013192 - type: max_accuracy value: 85.08076533349228 - type: max_ap value: 70.95016106374474 - type: max_f1 value: 65.43987900176455 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.69096130709822 - type: cos_sim_ap value: 84.82526278228542 - type: cos_sim_f1 value: 77.65485060585536 - type: cos_sim_precision value: 75.94582658619167 - type: cos_sim_recall value: 79.44256236526024 - type: dot_accuracy value: 80.97954748321496 - type: dot_ap value: 64.81642914145866 - type: dot_f1 value: 60.631996987229975 - type: dot_precision value: 54.5897293631712 - type: dot_recall value: 68.17831844779796 - type: euclidean_accuracy value: 88.6987231730508 - type: euclidean_ap value: 84.80003825477253 - type: euclidean_f1 value: 77.67194179854496 - type: euclidean_precision value: 75.7128235122094 - type: euclidean_recall value: 79.73514012935017 - type: manhattan_accuracy value: 88.62692591298949 - type: manhattan_ap value: 84.80451408255276 - type: manhattan_f1 value: 77.69888949572183 - type: manhattan_precision value: 73.70311528631622 - type: manhattan_recall value: 82.15275639051433 - type: max_accuracy value: 88.6987231730508 - type: max_ap value: 84.82526278228542 - type: max_f1 value: 77.69888949572183 language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit --- ## Multilingual-E5-small [Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672). Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024 This model has 12 layers and the embedding size is 384. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ", even for non-English texts. # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: 南瓜的家常做法', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. 
Check out the chart below to see how much protein you should be eating each day.", "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"] tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-small') model = AutoModel.from_pretrained('intfloat/multilingual-e5-small') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Supported Languages This model is initialized from [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) and continually trained on a mixture of multilingual datasets. It supports 100 languages from xlm-roberta, but low-resource languages may see performance degradation. ## Training Details **Initialization**: [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) **First stage**: contrastive pre-training with weak supervision | Dataset | Weak supervision | # of text pairs | |--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------| | Filtered [mC4](https://huggingface.co/datasets/mc4) | (title, page content) | 1B | | [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news) | (title, news content) | 400M | | [NLLB](https://huggingface.co/datasets/allenai/nllb) | translation pairs | 2.4B | | [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia) | (hierarchical section title, passage) | 150M | | Filtered [Reddit](https://www.reddit.com/) | (comment, response) | 800M | | [S2ORC](https://github.com/allenai/s2orc) | (title, abstract) and citation pairs | 100M | | [Stackexchange](https://stackexchange.com/) | (question, answer) | 50M | | [xP3](https://huggingface.co/datasets/bigscience/xP3) | (input prompt, response) | 80M | | [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | - | 10M | **Second stage**: supervised fine-tuning | Dataset | Language | # of text pairs | |----------------------------------------------------------------------------------------|--------------|-----------------| | [MS MARCO](https://microsoft.github.io/msmarco/) | English | 500k | | [NQ](https://github.com/facebookresearch/DPR) | English | 70k | | [Trivia QA](https://github.com/facebookresearch/DPR) | English | 60k | | [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE) | English | <300k | | [ELI5](https://huggingface.co/datasets/eli5) | English | 500k | | [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval) | Chinese | 86k | | [KILT Fever](https://huggingface.co/datasets/kilt_tasks) | English | 70k | | [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks) | English | 70k | | [SQuAD](https://huggingface.co/datasets/squad) | English | 87k | | [Quora](https://huggingface.co/datasets/quora) | English | 150k | | [Mr. 
TyDi](https://huggingface.co/datasets/castorini/mr-tydi) | 11 languages | 50k | | [MIRACL](https://huggingface.co/datasets/miracl/miracl) | 16 languages | 40k | For all labeled datasets, we only use the training split for fine-tuning. For other training details, please refer to our paper at [https://arxiv.org/pdf/2402.05672](https://arxiv.org/pdf/2402.05672). ## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787) | Model | Avg MRR@10 | | ar | bn | en | fi | id | ja | ko | ru | sw | te | th | |-----------------------|------------|-------|------| --- | --- | --- | --- | --- | --- | --- |------| --- | --- | | BM25 | 33.3 | | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 | | mDPR | 16.7 | | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3 | 10.6 | 13.5 | | BM25 + mDPR | 41.7 | | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 | | | | | multilingual-e5-small | 64.4 | | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 | | multilingual-e5-base | 65.9 | | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 | | multilingual-e5-large | **70.5** | | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 | ## MTEB Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Support for Sentence Transformers Below is an example of usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/multilingual-e5-small') input_texts = [ 'query: how much protein should a female eat', 'query: 南瓜的家常做法', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅" ] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` Package requirements: `pip install sentence_transformers~=2.2.2` Contributors: [michaelfeil](https://huggingface.co/michaelfeil) ## FAQ **1. Do I need to add the prefix "query: " and "passage: " to input texts?** Yes, this is how the model was trained; otherwise you will see a performance degradation. Here are some rules of thumb: - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA and ad-hoc information retrieval. - Use the "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, and paraphrase retrieval. - Use the "query: " prefix if you want to use embeddings as features, such as for linear probing classification or clustering. **2. Why are my reproduced results slightly different from those reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. **3. Why do the cosine similarity scores cluster between 0.7 and 1.0?** This is known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss. 
For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores rather than their absolute values, so this should not be an issue. ## Citation If you find our paper or models helpful, please consider citing them as follows: ``` @article{wang2024multilingual, title={Multilingual E5 Text Embeddings: A Technical Report}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2402.05672}, year={2024} } ``` ## Limitations Long texts will be truncated to at most 512 tokens.
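To make FAQ #1 concrete, below is a minimal sketch (reusing the sentence-transformers setup above) of prefixing for a symmetric task such as semantic similarity, where both sides of a pair take the "query: " prefix; the second sentence is an illustrative example, not from the training data:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/multilingual-e5-small')

# Symmetric task: both texts use the "query: " prefix.
texts = [
    'query: how much protein should a female eat',
    'query: recommended daily protein intake for women',
]

embeddings = model.encode(texts, normalize_embeddings=True)

# With normalized embeddings, the dot product equals cosine similarity;
# per FAQ #3, expect scores to land roughly in the 0.7-1.0 band.
print(embeddings[0] @ embeddings[1])
```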
nikcheerla/amd-full-v1
nikcheerla
"2024-01-08T23:49:53Z"
1,553
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "region:us" ]
text-classification
"2024-01-08T23:49:34Z"
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: - text: 'Your call has been forwarded to an automated voice messaging system. 9 ' - text: 'Your call has been forwarded to an automatic voice message system. 7133 ' - text: 'Triage Tronic Industries is not available. Record your message at the tone. ' - text: 'Hi. This is Sid. I''m sorry I missed your call. Please leave me your name and number, and I will get back to you as soon as I can. Thank you, and have ' - text: 'The Google subscriber you have called is not available. Please leave a message after the tone. ' pipeline_tag: text-classification inference: true base_model: sentence-transformers/paraphrase-mpnet-base-v2 --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:--------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | machine | <ul><li>'Sorry. David Hello. Is not avail '</li><li>'To Mozaz. Please wait as we try to connect you. '</li><li>'Your call has been forwarded to an automated voice messaging system. 2 0 '</li></ul> | | human | <ul><li>'Good afternoon. Sesame Workshop. How can I help you today? '</li><li>'This is Kenny. '</li><li>'Hello? '</li></ul> | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("nikcheerla/amd-full-v1") # Run inference preds = model("Your call has been forwarded to an automated voice messaging system. 
9 ") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 1 | 14.6725 | 207 | | Label | Training Sample Count | |:--------|:----------------------| | human | 1495 | | machine | 6401 | ### Training Hyperparameters - batch_size: (32, 32) - num_epochs: (3, 3) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: True - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:---------:|:-------------:|:---------------:| | 0.0001 | 1 | 0.197 | - | | 1.0 | 9870 | 0.0001 | 0.0271 | | 2.0 | 19740 | 0.0 | 0.0272 | | **3.0** | **29610** | **0.0** | **0.0264** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.1 - Sentence Transformers: 2.2.2 - Transformers: 4.35.2 - PyTorch: 2.0.1+cu118 - Datasets: 2.16.1 - Tokenizers: 0.15.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
iremmd/thy_model_33
iremmd
"2024-06-28T17:53:50Z"
1,553
0
null
[ "gguf", "region:us" ]
null
"2024-06-28T17:41:33Z"
Entry not found
timm/convnext_small.in12k_ft_in1k
timm
"2024-02-10T23:29:49Z"
1,552
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-12k", "arxiv:2201.03545", "license:apache-2.0", "region:us" ]
image-classification
"2023-01-11T22:35:59Z"
--- license: apache-2.0 library_name: timm tags: - image-classification - timm datasets: - imagenet-1k - imagenet-12k --- # Model card for convnext_small.in12k_ft_in1k A ConvNeXt image classification model. Pretrained in `timm` on ImageNet-12k (an 11821-class subset of full ImageNet-22k) and fine-tuned on ImageNet-1k by Ross Wightman. ImageNet-12k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program. Fine-tuning performed on 8x GPU [Lambda Labs](https://lambdalabs.com/) cloud instances. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 50.2 - GMACs: 8.7 - Activations (M): 21.6 - Image size: train = 224 x 224, test = 288 x 288 - **Papers:** - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - **Original:** https://github.com/huggingface/pytorch-image-models - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-12k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('convnext_small.in12k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_small.in12k_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 96, 56, 56]) # torch.Size([1, 192, 28, 28]) # torch.Size([1, 384, 14, 14]) # torch.Size([1, 768, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_small.in12k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 768, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model
results](https://github.com/huggingface/pytorch-image-models/tree/main/results). All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP. | model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| | [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | | [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | | [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 | | [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | | [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | | [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | | [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 | | [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | | [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | | [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | | [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | | [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 | | 
[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | | [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | | [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | | [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | | [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | | [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | | [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | | [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | | [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | | [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | | [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | | [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | | [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | | [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | | [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | | [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | | [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | | [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | | [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | | [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 
|96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | | [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | | [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | | [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | | [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | | [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | | [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | | [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | | [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | | [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | | [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | | [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | | [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ```
arnavgrg/llama-2-7b-chat-nf4-fp16-upscaled
arnavgrg
"2023-12-12T19:06:37Z"
1,552
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-11-30T15:54:30Z"
--- license: apache-2.0 tags: - text-generation-inference --- This is an upscaled fp16 variant of Meta's original Llama-2-7b-chat base model, produced after loading it with nf4 4-bit quantization via bitsandbytes. The main idea is to upscale the Linear4bit layers to fp16 so that the quantization/dequantization cost doesn't have to be paid for each forward pass at inference time. _Note: The nf4 quantization is not lossless, so the linear-layer weights are a lossy approximation, which means this model will not perform as well as the official base model._ To use this model, you can just load it via `transformers` in fp16: ```python import torch from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained( "arnavgrg/llama-2-7b-chat-nf4-fp16-upscaled", device_map="auto", torch_dtype=torch.float16 ) ```
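For illustration, here is a rough sketch of how such an upscaled checkpoint could be produced. This is an assumption about the process rather than the author's actual script: load with nf4 (the lossy step), then replace every `Linear4bit` with a plain fp16 `nn.Linear` holding the dequantized weights.

```python
import torch
import bitsandbytes as bnb
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Step 1: load the base model with nf4 4-bit quantization (the lossy step described above).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=quant_config,
    device_map="auto",
)

# Step 2: swap every 4-bit linear for a plain fp16 nn.Linear holding the
# dequantized weights, so inference skips the per-forward dequantization.
def upscale_to_fp16(module: torch.nn.Module) -> None:
    for name, child in module.named_children():
        if isinstance(child, bnb.nn.Linear4bit):
            fp16_weight = bnb.functional.dequantize_4bit(
                child.weight.data, child.weight.quant_state
            ).to(torch.float16)
            new_linear = torch.nn.Linear(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                device=fp16_weight.device,
                dtype=torch.float16,
            )
            new_linear.weight = torch.nn.Parameter(fp16_weight, requires_grad=False)
            if child.bias is not None:
                new_linear.bias = torch.nn.Parameter(
                    child.bias.data.to(torch.float16), requires_grad=False
                )
            setattr(module, name, new_linear)
        else:
            upscale_to_fp16(child)

upscale_to_fp16(model)
# model.save_pretrained(...) would then persist the fp16 checkpoint.
```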
FredrikBL/NeuralPipe-7B-slerp
FredrikBL
"2024-03-20T13:39:43Z"
1,552
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:OpenPipe/mistral-ft-optimized-1218", "base_model:mlabonne/NeuralHermes-2.5-Mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-06T13:19:29Z"
--- tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B base_model: - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B license: apache-2.0 --- # NeuralPipe-7B-slerp NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "FredrikBL/NeuralPipe-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
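For intuition about the merge method: slerp interpolates each pair of weight tensors along the arc between them rather than along a straight line, and the per-filter `t` schedules above control how far each layer group moves toward NeuralHermes. Below is a minimal sketch of the core operation (a simplification; mergekit's actual implementation also special-cases near-parallel tensors):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors.
    omega = torch.arccos((a_unit * b_unit).sum().clamp(-1.0 + 1e-7, 1.0 - 1e-7))
    sin_omega = torch.sin(omega)
    merged = (torch.sin((1.0 - t) * omega) / sin_omega) * a_flat \
           + (torch.sin(t * omega) / sin_omega) * b_flat
    return merged.reshape(a.shape).to(a.dtype)

# t=0 keeps the first model's tensor, t=1 the second's, t=0.5 the midpoint on the arc.
```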
pankaj217/granite-8b-code-instruct-Q5_K_M-GGUF
pankaj217
"2024-06-27T08:51:27Z"
1,552
0
transformers
[ "transformers", "gguf", "code", "granite", "llama-cpp", "gguf-my-repo", "text-generation", "dataset:bigcode/commitpackft", "dataset:TIGER-Lab/MathInstruct", "dataset:meta-math/MetaMathQA", "dataset:glaiveai/glaive-code-assistant-v3", "dataset:glaive-function-calling-v2", "dataset:bugdaryan/sql-create-context-instruction", "dataset:garage-bAInd/Open-Platypus", "dataset:nvidia/HelpSteer", "base_model:ibm-granite/granite-8b-code-instruct", "license:apache-2.0", "model-index", "region:us" ]
text-generation
"2024-06-27T08:51:03Z"
--- base_model: ibm-granite/granite-8b-code-instruct datasets: - bigcode/commitpackft - TIGER-Lab/MathInstruct - meta-math/MetaMathQA - glaiveai/glaive-code-assistant-v3 - glaive-function-calling-v2 - bugdaryan/sql-create-context-instruction - garage-bAInd/Open-Platypus - nvidia/HelpSteer library_name: transformers license: apache-2.0 metrics: - code_eval pipeline_tag: text-generation tags: - code - granite - llama-cpp - gguf-my-repo inference: false model-index: - name: granite-8b-code-instruct results: - task: type: text-generation dataset: name: HumanEvalSynthesis(Python) type: bigcode/humanevalpack metrics: - type: pass@1 value: 57.9 name: pass@1 - type: pass@1 value: 52.4 name: pass@1 - type: pass@1 value: 58.5 name: pass@1 - type: pass@1 value: 43.3 name: pass@1 - type: pass@1 value: 48.2 name: pass@1 - type: pass@1 value: 37.2 name: pass@1 - type: pass@1 value: 53.0 name: pass@1 - type: pass@1 value: 42.7 name: pass@1 - type: pass@1 value: 52.4 name: pass@1 - type: pass@1 value: 36.6 name: pass@1 - type: pass@1 value: 43.9 name: pass@1 - type: pass@1 value: 16.5 name: pass@1 - type: pass@1 value: 39.6 name: pass@1 - type: pass@1 value: 40.9 name: pass@1 - type: pass@1 value: 48.2 name: pass@1 - type: pass@1 value: 41.5 name: pass@1 - type: pass@1 value: 39.0 name: pass@1 - type: pass@1 value: 32.9 name: pass@1 --- # pankaj217/granite-8b-code-instruct-Q5_K_M-GGUF This model was converted to GGUF format from [`ibm-granite/granite-8b-code-instruct`](https://huggingface.co/ibm-granite/granite-8b-code-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ibm-granite/granite-8b-code-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo pankaj217/granite-8b-code-instruct-Q5_K_M-GGUF --hf-file granite-8b-code-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo pankaj217/granite-8b-code-instruct-Q5_K_M-GGUF --hf-file granite-8b-code-instruct-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo pankaj217/granite-8b-code-instruct-Q5_K_M-GGUF --hf-file granite-8b-code-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo pankaj217/granite-8b-code-instruct-Q5_K_M-GGUF --hf-file granite-8b-code-instruct-q5_k_m.gguf -c 2048 ```
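Beyond the CLI and server paths above, the same GGUF file can be driven from Python through the `llama-cpp-python` bindings; a minimal sketch, assuming that package is installed:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Downloads the quantized file from this repo via huggingface_hub.
llm = Llama.from_pretrained(
    repo_id="pankaj217/granite-8b-code-instruct-Q5_K_M-GGUF",
    filename="granite-8b-code-instruct-q5_k_m.gguf",
    n_ctx=2048,
)

out = llm("Write a Python function that checks whether a string is a palindrome.",
          max_tokens=128)
print(out["choices"][0]["text"])
```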
LargeWorldModel/LWM-Text-128K
LargeWorldModel
"2024-02-11T08:21:42Z"
1,551
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-27T04:42:07Z"
--- inference: false --- <br> <br> # LWM-Text-128K Model Card ## Model details **Model type:** LWM-Text-128K is an open-source model trained from LLaMA-2 on a filtered subset of the Books3 dataset. It is an auto-regressive language model based on the transformer architecture. **Model date:** LWM-Text-128K was trained in December 2023. **Paper or resources for more information:** https://largeworldmodel.github.io/ ## License Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. **Where to send questions or comments about the model:** https://github.com/LargeWorldModel/lwm/issues ## Training dataset - A 92K-document subset of Books3, with each document containing 100K to 200K tokens
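The card ships no usage snippet; below is a minimal loading sketch, on the assumption that the standard `transformers` LLaMA path applies to this PyTorch checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LargeWorldModel/LWM-Text-128K")
model = AutoModelForCausalLM.from_pretrained("LargeWorldModel/LWM-Text-128K")

inputs = tokenizer("The key idea of the chapter is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```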
Niggendar/rsmpornxlEmbraceTheSuck_v081Beta
Niggendar
"2024-04-19T12:06:42Z"
1,551
1
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-04-19T12:01:10Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/L3-NA-Aethora-15B-i1-GGUF
mradermacher
"2024-06-08T01:18:57Z"
1,551
1
transformers
[ "transformers", "gguf", "en", "dataset:TheSkullery/Aether-Lite-V1.2", "base_model:TheSkullery/L3-NA-Aethora-15B", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-07T22:55:16Z"
--- base_model: TheSkullery/L3-NA-Aethora-15B datasets: - TheSkullery/Aether-Lite-V1.2 language: - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/TheSkullery/L3-NA-Aethora-15B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/L3-NA-Aethora-15B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-IQ2_M.gguf) | i1-IQ2_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-Q2_K.gguf) | i1-Q2_K | 5.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.5 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.1 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-Q4_0.gguf) | i1-Q4_0 | 8.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-NA-Aethora-15B-i1-GGUF/resolve/main/L3-NA-Aethora-15B.i1-Q6_K.gguf) | i1-Q6_K | 12.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
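If you would rather call these quants from Python than through a llama.cpp binary, the sketch below shows one way to do it, assuming the `llama-cpp-python` bindings; the file name and generation settings are illustrative, not a recommendation from this card.

```python
from llama_cpp import Llama

# Load one of the downloaded quants (file name is illustrative -- use any quant above).
llm = Llama(model_path="L3-NA-Aethora-15B.i1-Q4_K_M.gguf", n_ctx=4096)

# Run a simple completion; tune max_tokens and sampling parameters to taste.
out = llm("Write a one-sentence story about a lighthouse.", max_tokens=64)
print(out["choices"][0]["text"])
```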
nikcheerla/amd-power-dialer-v1
nikcheerla
"2023-09-01T06:28:43Z"
1,548
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
"2023-08-25T19:04:32Z"
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # nikcheerla/amd-power-dialer-v1 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("nikcheerla/amd-power-dialer-v1") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
nikcheerla/amd-full-phonetree-v1
nikcheerla
"2024-01-08T19:49:30Z"
1,548
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "region:us" ]
text-classification
"2024-01-08T19:49:15Z"
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: - text: 'I''m sorry. The person you are trying to reach has a voice mailbox that has not been set up yet. Please try your call ' - text: 'For calling WL Gore and Associates Incorporated. Please wait ' - text: 'Hello. Please state your name after the tone, and Google Voice will try ' - text: 'Thank you for calling Stanley Black and Decker. For the company directory, press 1. For investor relations, press 2. ' - text: 'Sorry. Chris Trent is not available. Record your message at the tone. When you are finished, hang up or press pound for more options. ' pipeline_tag: text-classification inference: true base_model: sentence-transformers/paraphrase-mpnet-base-v2 --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:-----------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | voicemail | <ul><li>'Your call has been forwarded to an automated voice messaging system. 6 '</li><li>'Please leave your message for 8083526996. '</li><li>"This is Bart Jumper. I'm sorry I missed your call. Please leave your name and number, and I'll return your call as soon as I "</li></ul> | | phone_tree | <ul><li>'Thank you for calling Periton. A next '</li><li>'Thank you for calling Signifide. Our main number has changed. The new number is eight six six two '</li><li>'Thank you for calling Icahn Health and Fitness. 
If you know the extension you wish to reach, '</li></ul> | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("nikcheerla/amd-full-phonetree-v1") # Run inference preds = model("For calling WL Gore and Associates Incorporated. Please wait ") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 1 | 14.7789 | 214 | | Label | Training Sample Count | |:-----------|:----------------------| | phone_tree | 4979 | | voicemail | 5519 | ### Training Hyperparameters - batch_size: (32, 32) - num_epochs: (3, 3) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: True - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:---------:|:-------------:|:---------------:| | 0.0001 | 1 | 0.2196 | - | | 1.0 | 13123 | 0.0001 | 0.1209 | | **2.0** | **26246** | **0.0** | **0.1101** | | 3.0 | 39369 | 0.0446 | 0.1108 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.1 - Sentence Transformers: 2.2.2 - Transformers: 4.35.2 - PyTorch: 2.0.1+cu118 - Datasets: 2.16.1 - Tokenizers: 0.15.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
LLM360/K2
LLM360
"2024-06-28T16:40:51Z"
1,548
73
transformers
[ "transformers", "safetensors", "llama", "text-generation", "nlp", "llm", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-17T18:50:01Z"
--- license: apache-2.0 language: - en pipeline_tag: text-generation library_name: transformers tags: - nlp - llm --- # K2: a fully-reproducible large language model outperforming Llama 2 70B using 35% less compute LLM360 demystifies the training recipe used for Llama 2 70B with K2. K2 is fully transparent, meaning we’ve open-sourced all artifacts, including code, data, model checkpoints, intermediate results, and more. <center><img src="k2_eval_table.png" alt="k2 eval table" /></center> ## About K2: * 65 billion parameter LLM * Tokens: 1.4T * Languages: English * Models Released: base, chat model * Trained in 2 stages * License: Apache 2.0 K2 was developed as a collaboration between [MBZUAI](https://mbzuai.ac.ae/institute-of-foundation-models/), [Petuum](https://www.petuum.com/), and [LLM360](https://www.llm360.ai/). ## LLM360 Model Performance and Evaluation Collection The LLM360 Performance and Evaluation Collection is a robust evaluation set consisting of general and domain-specific evaluations to assess model knowledge and function. Evaluations include standard best-practice benchmarks as well as medical, math, and coding knowledge. More about the evaluations can be found [here](https://www.llm360.ai/evaluation.html). <center><img src="k2_table_of_tables.png" alt="k2 big eval table"/></center> Detailed analysis can be found on the K2 Weights and Biases project [here](https://wandb.ai/llm360/K2?nw=29mu6l0zzqq). ## K2 Gallery The K2 gallery allows one to browse the output of various prompts on intermediate K2 checkpoints, which provides an intuitive understanding of how the model develops and improves over time. This is inspired by The Bloom Book. [View the K2 gallery here](https://huggingface.co/spaces/LLM360/k2-gallery) ## Datasets and Mix The following data mix was used to train K2 and achieve results in line with Llama 2 70B. 
The full data sequence can be found [here](https://huggingface.co/datasets/LLM360/K2Datasets/tree/main) | Dataset | Starting Tokens | Multiplier | Total Tokens | % of Total | | ----------- | ----------- | ----------- | ----------- | ----------- | | dm-math | 4.33B | 3x | 13B | 1% | | pubmed-abstracts | 4.77B | 3x | 14.3B | 1.1% | | uspto | 4.77B | 3x | 14.3B | 1.1% | | pubmed-central | 26B | 1x | 26B | 2% | | [redpajama.arxiv](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 27.3B | 1x | 27.3B | 2.1% | | [starcoder.spm](https://huggingface.co/datasets/bigcode/starcoderdata) | 67.6B | 0.5x | 33.8B | 2.6% | | [starcoder.fim](https://huggingface.co/datasets/bigcode/starcoderdata) | 67.6B | 0.5x | 33.8B | 2.6% | | [redpajama.stackexchange](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 61.1B | 1x | 61.1B | 4.7% | | [starcoder](https://huggingface.co/datasets/bigcode/starcoderdata) | 132.6B | 0.5x | 66.3B | 5.1% | | [pile-of-law](https://huggingface.co/datasets/pile-of-law/pile-of-law) | 76.7B | 1x | 76.7B | 5.9% | | [redpajama.book](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 80.6B | 1x | 80.6B | 6.2% | | s2orc | 107.9B | 1x | 107.9B | 8.3% | | [redpajama.wikipedia](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 22.1B | 6x | 132.6B | 10.2% | | [refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 612.3B | 1x | 612.3B | 47.1% | | Totals | - | - | 1.3T | 100% | # LLM360 Research Suite ## Stage 2 - Last 10 Checkpoints | Checkpoints | | | ----------- | ----------- | | [Checkpoint 380](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_380) | [Checkpoint 375](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_375) | | [Checkpoint 379](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_379) | [Checkpoint 374](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_374) | | [Checkpoint 378](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_378) | [Checkpoint 373](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_373) | | [Checkpoint 377](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_377) | [Checkpoint 372](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_372) | | [Checkpoint 376](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_376) | [Checkpoint 371](https://huggingface.co/LLM360/K2/tree/ministage2_ckpt_371) | ## Stage 1 - Last 10 Checkpoints | Checkpoints | | | ----------- | ----------- | | [Checkpoint 360](https://huggingface.co/LLM360/K2/tree/ckpt_360) | [Checkpoint 355](https://huggingface.co/LLM360/K2/tree/ckpt_355) | | [Checkpoint 359](https://huggingface.co/LLM360/K2/tree/ckpt_359) | [Checkpoint 354](https://huggingface.co/LLM360/K2/tree/ckpt_354) | | [Checkpoint 358](https://huggingface.co/LLM360/K2/tree/ckpt_358) | [Checkpoint 353](https://huggingface.co/LLM360/K2/tree/ckpt_353) | | [Checkpoint 357](https://huggingface.co/LLM360/K2/tree/ckpt_357) | [Checkpoint 352](https://huggingface.co/LLM360/K2/tree/ckpt_352) | | [Checkpoint 356](https://huggingface.co/LLM360/K2/tree/ckpt_356) | [Checkpoint 351](https://huggingface.co/LLM360/K2/tree/ckpt_351) | (To list all checkpoint branches, run `git branch -a`.) ## LLM360 Pretraining Suite We provide step-by-step reproduction tutorials for tech enthusiasts, AI practitioners and academic or industry researchers who want to learn pretraining techniques [here](https://www.llm360.ai/pretraining.html). 
## LLM360 Developer Suite We provide step-by-step finetuning tutorials for tech enthusiasts, AI practitioners and academic or industry researchers [here](https://www.llm360.ai/developer.html). # Loading K2 ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("LLM360/K2") model = AutoModelForCausalLM.from_pretrained("LLM360/K2") prompt = 'what is the highest mountain on earth?' input_ids = tokenizer(prompt, return_tensors="pt").input_ids gen_tokens = model.generate(input_ids, do_sample=True, max_new_tokens=128) print("-"*20 + "Output for model" + 20 * '-') print(tokenizer.batch_decode(gen_tokens)[0]) ``` ## About LLM360 LLM360 is an open research lab enabling community-owned AGI through open-source large model research and development. It creates standards and tools to advance the bleeding edge of LLM capability and to empower knowledge transfer, research, and development. We believe in a future where artificial general intelligence (AGI) is created by the community, for the community. Through an open ecosystem of equitable computational resources, high quality data, and flowing technical knowledge, we can ensure ethical AGI development and universal access for all innovators. [Visit us](https://www.llm360.ai/) ## Citation **BibTeX:** ```bibtex @article{K2, title={LLM360 K2-65B: Scaling Up Fully Transparent Open-Source LLMs}, author={The LLM360 Team}, year={2024}, } ```
legraphista/K2-ckpt_360-IMat-GGUF
legraphista
"2024-06-01T12:34:45Z"
1,548
0
gguf
[ "gguf", "nlp", "llm", "quantized", "GGUF", "imatrix", "quantization", "imat", "static", "16bit", "8bit", "6bit", "5bit", "4bit", "3bit", "2bit", "1bit", "text-generation", "en", "base_model:LLM360/K2", "license:apache-2.0", "region:us" ]
text-generation
"2024-05-31T22:55:43Z"
--- base_model: LLM360/K2 inference: false language: - en library_name: gguf license: apache-2.0 pipeline_tag: text-generation quantized_by: legraphista tags: - nlp - llm - quantized - GGUF - imatrix - quantization - imat - imatrix - static - 16bit - 8bit - 6bit - 5bit - 4bit - 3bit - 2bit - 1bit --- # K2-ckpt_360-IMat-GGUF _Llama.cpp imatrix quantization of LLM360/K2_ Original Model: [LLM360/K2](https://huggingface.co/LLM360/K2/tree/ckpt_360) branch `ckpt_360` Original dtype: `FP16` (`float16`) Quantized by: llama.cpp [b3058](https://github.com/ggerganov/llama.cpp/releases/tag/b3058) IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: ✅ Available Link: [here](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [K2-ckpt_360.Q8_0/*](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/tree/main/K2-ckpt_360.Q8_0) | Q8_0 | 69.37GB | ✅ Available | ⚪ Static | ✂ Yes | [K2-ckpt_360.Q6_K/*](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/tree/main/K2-ckpt_360.Q6_K) | Q6_K | 53.56GB | ✅ Available | ⚪ Static | ✂ Yes | [K2-ckpt_360.Q4_K.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.Q4_K.gguf) | Q4_K | 39.35GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.Q3_K.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.Q3_K.gguf) | Q3_K | 31.63GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.Q2_K.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.Q2_K.gguf) | Q2_K | 24.11GB | ✅ Available | 🟢 IMatrix | 📦 No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [K2-ckpt_360.FP16/*](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/tree/main/K2-ckpt_360.FP16) | F16 | 130.58GB | ✅ Available | ⚪ Static | ✂ Yes | [K2-ckpt_360.Q8_0/*](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/tree/main/K2-ckpt_360.Q8_0) | Q8_0 | 69.37GB | ✅ Available | ⚪ Static | ✂ Yes | [K2-ckpt_360.Q6_K/*](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/tree/main/K2-ckpt_360.Q6_K) | Q6_K | 53.56GB | ✅ Available | ⚪ Static | ✂ Yes | [K2-ckpt_360.Q5_K/*](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/tree/main/K2-ckpt_360.Q5_K) | Q5_K | 46.24GB | ✅ Available | ⚪ Static | ✂ Yes | [K2-ckpt_360.Q5_K_S.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.Q5_K_S.gguf) | Q5_K_S | 44.92GB | ✅ Available | ⚪ Static | 📦 No | [K2-ckpt_360.Q4_K.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.Q4_K.gguf) | Q4_K | 39.35GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.Q4_K_S.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.Q4_K_S.gguf) | Q4_K_S | 37.06GB | ✅ 
Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.IQ4_NL.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.IQ4_NL.gguf) | IQ4_NL | 36.80GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.IQ4_XS.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.IQ4_XS.gguf) | IQ4_XS | 34.76GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.Q3_K.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.Q3_K.gguf) | Q3_K | 31.63GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.Q3_K_L.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.Q3_K_L.gguf) | Q3_K_L | 34.65GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.Q3_K_S.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.Q3_K_S.gguf) | Q3_K_S | 28.16GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.IQ3_M.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.IQ3_M.gguf) | IQ3_M | 29.83GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.IQ3_S.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.IQ3_S.gguf) | IQ3_S | 28.16GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.IQ3_XS.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.IQ3_XS.gguf) | IQ3_XS | 26.64GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.IQ3_XXS.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.IQ3_XXS.gguf) | IQ3_XXS | 24.67GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.Q2_K.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.Q2_K.gguf) | Q2_K | 24.11GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.Q2_K_S.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.Q2_K_S.gguf) | Q2_K_S | 21.98GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.IQ2_M.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.IQ2_M.gguf) | IQ2_M | 22.41GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.IQ2_S.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.IQ2_S.gguf) | IQ2_S | 20.78GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.IQ2_XS.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.IQ2_XS.gguf) | IQ2_XS | 19.27GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.IQ2_XXS.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.IQ2_XXS.gguf) | IQ2_XXS | 17.47GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.IQ1_M.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.IQ1_M.gguf) | IQ1_M | 15.43GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2-ckpt_360.IQ1_S.gguf](https://huggingface.co/legraphista/K2-ckpt_360-IMat-GGUF/blob/main/K2-ckpt_360.IQ1_S.gguf) | IQ1_S | 14.21GB | ✅ Available | 🟢 IMatrix | 📦 No ## Downloading using huggingface-cli If you do not have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Download the specific file you want: ``` huggingface-cli download legraphista/K2-ckpt_360-IMat-GGUF --include "K2-ckpt_360.Q8_0.gguf" --local-dir ./ ``` If the model file is big, it has been split into multiple files. 
In order to download them all to a local folder, run: ``` huggingface-cli download legraphista/K2-ckpt_360-IMat-GGUF --include "K2-ckpt_360.Q8_0/*" --local-dir ./ # see FAQ for merging GGUF's ``` --- ## Inference ### Llama.cpp ``` llama.cpp/main -m K2-ckpt_360.Q8_0.gguf --color -i -p "prompt here" ``` --- ## FAQ ### Why is the IMatrix not applied everywhere? According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results). ### How do I merge a split GGUF? 1. Make sure you have `gguf-split` available - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases - Download the appropriate zip for your system from the latest release - Unzip the archive and you should be able to find `gguf-split` 2. Locate your GGUF chunks folder (ex: `K2-ckpt_360.Q8_0`) 3. Run `gguf-split --merge K2-ckpt_360.Q8_0/K2-ckpt_360.Q8_0-00001-of-XXXXX.gguf K2-ckpt_360.Q8_0.gguf` - Make sure to point `gguf-split` to the first chunk of the split. --- Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
KoboldAI/LLaMA2-13B-Estopia-GGUF
KoboldAI
"2024-01-14T15:33:04Z"
1,547
9
null
[ "gguf", "mergekit", "merge", "base_model:TheBloke/Llama-2-13B-fp16", "license:cc-by-nc-4.0", "region:us" ]
null
"2024-01-14T14:27:09Z"
--- base_model: - TheBloke/Llama-2-13B-fp16 tags: - mergekit - merge license: cc-by-nc-4.0 --- This is the GGUF version of Estopia, recommended to be used with [Koboldcpp](https://koboldai.org/cpp), an easy-to-use and very versatile GGUF-compatible program. With Koboldcpp you will be able to instruct, write, and co-write with this model in the instruct and story-writing modes. It is compatible with your character cards in its KoboldAI Lite UI and has wide API support for all popular frontends. # Introduction - Estopia is a model focused on improving the dialogue and prose returned when using the instruct format. As a side benefit, character cards and similar seem to have also improved, remembering details well in many cases. - It focuses on "guided narratives" - using instructions to guide or explore fictional stories, where you act as a guide for the AI to narrate and fill in the details. - It has primarily been tested around prose, using instructions to guide narrative, detail retention and "neutrality" - in particular with regards to plot armour. Unless you define different rules for your adventure / narrative with instructions, it should be realistic in the responses provided. - It has been tested using different modes, such as instruct, chat, adventure and story modes - and should be able to do them all to a degree, with its strengths being instruct and adventure, and story a close second. # Usage - The Estopia model has been tested primarily using the Alpaca format, but with the range of models included it likely has some understanding of others. Some examples of tested formats are below: - ```\n### Instruction:\nWhat colour is the sky?\n### Response:\nThe sky is...``` - ```<Story text>\n***\nWrite a summary of the text above\n***\nThe story starts by...``` - Using the Kobold Lite AI adventure mode - ```User:Hello there!\nAssistant:Good morning...\n``` - For settings, the following are recommended for general use: - Temperature: 0.8-1.2 - Min P: 0.05-0.1 - Max P: 0.92, or 1 if using a Min P greater than 0 - Top K: 0 - Response length: most likely higher than your usual amount - for example, a common value selected is 512. - Note: Response lengths are not guaranteed to always be this length. On occasion, responses may be shorter if they convey the response entirely, other times they could be upwards of this value. It depends mostly on the character card, instructions, etc. - Rep Pen: 1.1 - Rep Pen Range: 2 or 3x your response length - Stopping tokens (not needed, but can help if the AI is writing too much): - ```##||$||---||$||ASSISTANT:||$||[End||$||</s>``` - A single string for Kobold Lite combining the ones below - ```##``` - ```---``` - ```ASSISTANT:``` - ```[End``` - ```</s>``` - The settings above should provide a generally good experience balancing instruction following and creativity. Generally, the higher you set the temperature, the greater the creativity and the higher the chance of logical errors in the AI's responses. # Recipe This model was made in three stages, along with many experimental stages which will be skipped for brevity. The first was internally referred to as EstopiaV9, which has a high degree of instruction following and creativity in responses, though they were generally shorter and a little more restricted in the scope of outputs, but conveyed nuance better. 
```yaml merge_method: task_arithmetic base_model: TheBloke/Llama-2-13B-fp16 models: - model: TheBloke/Llama-2-13B-fp16 - model: Undi95/UtopiaXL-13B parameters: weight: 1.0 - model: Doctor-Shotgun/cat-v1.0-13b parameters: weight: 0.02 - model: PygmalionAI/mythalion-13b parameters: weight: 0.10 - model: Undi95/Emerhyst-13B parameters: weight: 0.05 - model: CalderaAI/13B-Thorns-l2 parameters: weight: 0.05 - model: KoboldAI/LLaMA2-13B-Tiefighter parameters: weight: 0.20 dtype: float16 ``` The second part of the merge was known as EstopiaV13. This produced responses which were long, but tended to write beyond good stopping points for further instructions to be added, as it leant heavily on novel-style prose. It did however benefit from a greater degree of neutrality as described above, and retained many of the detail-tracking abilities of V9. ```yaml merge_method: task_arithmetic base_model: TheBloke/Llama-2-13B-fp16 models: - model: TheBloke/Llama-2-13B-fp16 - model: Undi95/UtopiaXL-13B parameters: weight: 1.0 - model: Doctor-Shotgun/cat-v1.0-13b parameters: weight: 0.01 - model: chargoddard/rpguild-chatml-13b parameters: weight: 0.02 - model: PygmalionAI/mythalion-13b parameters: weight: 0.08 - model: CalderaAI/13B-Thorns-l2 parameters: weight: 0.02 - model: KoboldAI/LLaMA2-13B-Tiefighter parameters: weight: 0.20 dtype: float16 ``` The third step was a merge between the two to retain the benefits of both as much as possible. This was performed using the DARE merging technique. ```yaml # task-arithmetic style models: - model: EstopiaV9 parameters: weight: 1 density: 1 - model: EstopiaV13 parameters: weight: 0.05 density: 0.30 merge_method: dare_ties base_model: TheBloke/Llama-2-13B-fp16 parameters: int8_mask: true dtype: bfloat16 ``` # Model selection - Undi95/UtopiaXL-13B - A solid all-around base for models, with the ability to write longer responses and generally good retention of detail. - Doctor-Shotgun/cat-v1.0-13b - A medically focused model, added to focus a little more on the human side of responses, such as psychology. - PygmalionAI/mythalion-13b - A roleplay- and instruct-focused model, which improves attentiveness to character card details and the variety of responses. - Undi95/Emerhyst-13B - A roleplay model that also produces longer-form responses. It can be quite variable, but helps add to the depth and the possible options the AI can respond with during narratives. - CalderaAI/13B-Thorns-l2 - A neutral and very attentive model. It is good at chat and following instructions, which benefits these modes. - KoboldAI/LLaMA2-13B-Tiefighter - A solid all-around model, focusing on story writing and adventure modes. It provides all-around benefits to creativity and the prose in models, along with adventure mode support. - chargoddard/rpguild-chatml-13b - A roleplay model, which introduces new data and also improves detail retention in longer narratives. # Notes - With the differing models inside, this model will not have perfect end-of-sequence tokens, which is a problem many merges can share. While attempts have been made to minimise this, you may occasionally get oddly behaving tokens - this should be possible to resolve with a quick manual edit once, and the model should pick up on it. - Chat is one of the least tested areas for this model. It works fairly well, but it can be quite character-card dependent. - This is a narrative- and prose-focused model. 
As a result, it can and will talk for you if guided to do so (such as asking it to act as a co-author or narrator) within instructions or other contexts. This can be mitigated mostly by adding instructions to limit this, or by using chat mode instead. # Future areas - LLaVA - Some success has been had with merging the LLaVA LoRA on this. While no in-depth testing has been performed, more narrative responses based on the images could be obtained - though there were drawbacks in the form of degraded performance in other areas, and hallucinations due to the fictional focus of this model. - Stheno - A merge which has similar promise, from Sao. Some merge attempts have been made between the two and were promising, but not entirely consistent at the moment. With some possible refinement, this could produce an even stronger model. - DynamicFactor - All the merges used here have been based on Llama 2, but a DARE merge with DynamicFactor (an attempted refinement of Llama 2) showed a beneficial improvement to the instruction abilities of the model, along with lengthy responses. It lost a little of the variety in responses, so if a balance could be struck, the instruction abilities and reasoning could be improved even further.
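As an addendum to the Usage section above, here is a tiny helper that assembles the tested Alpaca-style prompt; it is a sketch based only on the format example shown in that section, not an official template shipped with the model.

```python
def alpaca_prompt(instruction: str, response_start: str = "") -> str:
    # Mirrors the tested format: "\n### Instruction:\n...\n### Response:\n..."
    return f"\n### Instruction:\n{instruction}\n### Response:\n{response_start}"

# Example from the Usage section: prime the model to continue "The sky is..."
print(alpaca_prompt("What colour is the sky?", "The sky is"))
```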
bartowski/llama3-13.45b-Instruct-GGUF
bartowski
"2024-06-10T18:13:40Z"
1,547
0
transformers
[ "transformers", "gguf", "pytorch", "llama", "llama-3", "mergekit", "merge", "text-generation", "en", "base_model:Meta-Llama-3-8B-Instruct", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-10T17:44:23Z"
--- base_model: [Meta-Llama-3-8B-Instruct] library_name: transformers language: - en pipeline_tag: text-generation tags: - pytorch - llama - llama-3 - mergekit - merge license: llama3 quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of llama3-13.45b-Instruct Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3086">b3086</a> for quantization. Original model: https://huggingface.co/win10/llama3-13.45b-Instruct All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [llama3-13.45b-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-Q8_0.gguf) | Q8_0 | 14.10GB | Extremely high quality, generally unneeded but max available quant. | | [llama3-13.45b-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-Q6_K.gguf) | Q6_K | 10.89GB | Very high quality, near perfect, *recommended*. | | [llama3-13.45b-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-Q5_K_M.gguf) | Q5_K_M | 9.43GB | High quality, *recommended*. | | [llama3-13.45b-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-Q5_K_S.gguf) | Q5_K_S | 9.19GB | High quality, *recommended*. | | [llama3-13.45b-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-Q4_K_M.gguf) | Q4_K_M | 8.06GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [llama3-13.45b-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-Q4_K_S.gguf) | Q4_K_S | 7.65GB | Slightly lower quality with more space savings, *recommended*. | | [llama3-13.45b-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-IQ4_XS.gguf) | IQ4_XS | 7.24GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [llama3-13.45b-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-Q3_K_L.gguf) | Q3_K_L | 7.06GB | Lower quality but usable, good for low RAM availability. | | [llama3-13.45b-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-Q3_K_M.gguf) | Q3_K_M | 6.53GB | Even lower quality. | | [llama3-13.45b-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-IQ3_M.gguf) | IQ3_M | 6.12GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [llama3-13.45b-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-Q3_K_S.gguf) | Q3_K_S | 5.91GB | Low quality, not recommended. 
| | [llama3-13.45b-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-IQ3_XS.gguf) | IQ3_XS | 5.65GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [llama3-13.45b-Instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-IQ3_XXS.gguf) | IQ3_XXS | 5.28GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [llama3-13.45b-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-Q2_K.gguf) | Q2_K | 5.10GB | Very low quality but surprisingly usable. | | [llama3-13.45b-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-IQ2_M.gguf) | IQ2_M | 4.71GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [llama3-13.45b-Instruct-IQ2_S.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-IQ2_S.gguf) | IQ2_S | 4.38GB | Very low quality, uses SOTA techniques to be usable. | | [llama3-13.45b-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/llama3-13.45b-Instruct-GGUF/blob/main/llama3-13.45b-Instruct-IQ2_XS.gguf) | IQ2_XS | 4.15GB | Very low quality, uses SOTA techniques to be usable. | ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/llama3-13.45b-Instruct-GGUF --include "llama3-13.45b-Instruct-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/llama3-13.45b-Instruct-GGUF --include "llama3-13.45b-Instruct-Q8_0.gguf/*" --local-dir llama3-13.45b-Instruct-Q8_0 ``` You can either specify a new local-dir (llama3-13.45b-Instruct-Q8_0) or download them all in place (./). ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. 
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
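As a worked example of the sizing rule above (choose a quant whose file is 1-2GB smaller than your available memory), here is a small illustrative helper; the sizes are copied from the table in this card, and the function itself is hypothetical rather than part of any tooling.

```python
# File sizes in GB, taken from the quant table above (a representative subset).
QUANTS = {
    "Q8_0": 14.10, "Q6_K": 10.89, "Q5_K_M": 9.43, "Q4_K_M": 8.06,
    "IQ4_XS": 7.24, "Q3_K_M": 6.53, "IQ3_M": 6.12, "Q2_K": 5.10,
}

def largest_fitting_quant(memory_gb: float, headroom_gb: float = 2.0) -> str:
    # Keep 1-2GB of headroom below your VRAM (or RAM + VRAM) total, per the guidance above.
    fitting = {q: s for q, s in QUANTS.items() if s <= memory_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else "none fit"

print(largest_fitting_quant(12.0))  # a 12GB GPU -> "Q5_K_M"
```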
facebook/fastspeech2-en-ljspeech
facebook
"2022-01-28T23:25:24Z"
1,546
248
fairseq
[ "fairseq", "audio", "text-to-speech", "en", "dataset:ljspeech", "arxiv:2006.04558", "arxiv:2109.06912", "region:us" ]
text-to-speech
"2022-03-02T23:29:05Z"
--- library_name: fairseq task: text-to-speech tags: - fairseq - audio - text-to-speech language: en datasets: - ljspeech widget: - text: "Hello, this is a test run." example_title: "Hello, this is a test run." --- # fastspeech2-en-ljspeech [FastSpeech 2](https://arxiv.org/abs/2006.04558) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - English - Single-speaker female voice - Trained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/fastspeech2-en-ljspeech", arg_overrides={"vocoder": "hifigan", "fp16": False} ) model = models[0] TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg) generator = task.build_generator(model, cfg) text = "Hello, this is a test run." sample = TTSHubInterface.get_model_input(task, text) wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample) ipd.Audio(wav, rate=rate) ``` See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/ljspeech_example.md). ## Citation ```bibtex @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
pranavpsv/gpt2-genre-story-generator
pranavpsv
"2021-05-23T11:02:06Z"
1,546
44
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
# GPT2 Genre Based Story Generator ## Model description GPT2 fine-tuned for genre-based story generation. ## Intended uses Used to generate stories based on a user-provided genre and starting prompt. ## How to use #### Supported Genres superhero, action, drama, horror, thriller, sci_fi #### Input text format \<BOS> \<genre> Some optional text... **Example**: \<BOS> \<sci_fi> After discovering time travel, ```python # Example of usage from transformers import pipeline story_gen = pipeline("text-generation", "pranavpsv/gpt2-genre-story-generator") print(story_gen("<BOS> <superhero> Batman")) ``` ## Training data Initialized with the pre-trained weights of the "gpt2" checkpoint, then fine-tuned on stories of various genres.
l3cube-pune/hindi-sentence-similarity-sbert
l3cube-pune
"2023-10-22T07:28:47Z"
1,546
4
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "hi", "arxiv:2211.11187", "arxiv:2304.11434", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2022-11-05T18:56:46Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: cc-by-4.0 language: hi widget: - source_sentence: "एक आदमी एक रस्सी पर चढ़ रहा है" sentences: - "एक आदमी एक रस्सी पर चढ़ता है" - "एक आदमी एक दीवार पर चढ़ रहा है" - "एक आदमी बांसुरी बजाता है" example_title: "Example 1" - source_sentence: "कुछ लोग गा रहे हैं" sentences: - "लोगों का एक समूह गाता है" - "बिल्ली दूध पी रही है" - "दो आदमी लड़ रहे हैं" example_title: "Example 2" - source_sentence: "फेडरर ने 7वां विंबलडन खिताब जीत लिया है" sentences: - "फेडरर अपने करियर में कुल 20 ग्रैंडस्लैम खिताब जीत चुके है " - "फेडरर ने सितंबर में अपने निवृत्ति की घोषणा की" - "एक आदमी कुछ खाना पकाने का तेल एक बर्तन में डालता है" example_title: "Example 3" --- # HindSBERT-STS This is a HindSBERT model (<a href = 'https://huggingface.co/l3cube-pune/hindi-sentence-bert-nli'> l3cube-pune/hindi-sentence-bert-nli </a>) fine-tuned on the STS dataset. <br> Released as a part of project MahaNLP : https://github.com/l3cube-pune/MarathiNLP <br> A multilingual version of this model supporting major Indic languages and cross-lingual sentence similarity is shared here <a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> indic-sentence-similarity-sbert </a> <br> More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11187) ``` @article{joshi2022l3cubemahasbert, title={L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi}, author={Joshi, Ananya and Kajale, Aditi and Gadre, Janhavi and Deode, Samruddhi and Joshi, Raviraj}, journal={arXiv preprint arXiv:2211.11187}, year={2022} } ``` <a href='https://arxiv.org/abs/2211.11187'> monolingual Indic SBERT paper </a> <br> <a href='https://arxiv.org/abs/2304.11434'> multilingual Indic SBERT paper </a> Other monolingual similarity models are listed below: <br> <a href='https://huggingface.co/l3cube-pune/marathi-sentence-similarity-sbert'> Marathi Similarity </a> <br> <a href='https://huggingface.co/l3cube-pune/hindi-sentence-similarity-sbert'> Hindi Similarity </a> <br> <a href='https://huggingface.co/l3cube-pune/kannada-sentence-similarity-sbert'> Kannada Similarity </a> <br> <a href='https://huggingface.co/l3cube-pune/telugu-sentence-similarity-sbert'> Telugu Similarity </a> <br> <a href='https://huggingface.co/l3cube-pune/malayalam-sentence-similarity-sbert'> Malayalam Similarity </a> <br> <a href='https://huggingface.co/l3cube-pune/tamil-sentence-similarity-sbert'> Tamil Similarity </a> <br> <a href='https://huggingface.co/l3cube-pune/gujarati-sentence-similarity-sbert'> Gujarati Similarity </a> <br> <a href='https://huggingface.co/l3cube-pune/odia-sentence-similarity-sbert'> Oriya Similarity </a> <br> <a href='https://huggingface.co/l3cube-pune/bengali-sentence-similarity-sbert'> Bengali Similarity </a> <br> <a href='https://huggingface.co/l3cube-pune/punjabi-sentence-similarity-sbert'> Punjabi Similarity </a> <br> <a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> Indic Similarity (multilingual)</a> <br> Other monolingual Indic sentence BERT models are listed below: <br> <a href='https://huggingface.co/l3cube-pune/marathi-sentence-bert-nli'> Marathi SBERT</a> <br> <a href='https://huggingface.co/l3cube-pune/hindi-sentence-bert-nli'> Hindi SBERT</a> <br> <a href='https://huggingface.co/l3cube-pune/kannada-sentence-bert-nli'> Kannada SBERT</a> <br> <a 
href='https://huggingface.co/l3cube-pune/telugu-sentence-bert-nli'> Telugu SBERT</a> <br> <a href='https://huggingface.co/l3cube-pune/malayalam-sentence-bert-nli'> Malayalam SBERT</a> <br> <a href='https://huggingface.co/l3cube-pune/tamil-sentence-bert-nli'> Tamil SBERT</a> <br> <a href='https://huggingface.co/l3cube-pune/gujarati-sentence-bert-nli'> Gujarati SBERT</a> <br> <a href='https://huggingface.co/l3cube-pune/odia-sentence-bert-nli'> Oriya SBERT</a> <br> <a href='https://huggingface.co/l3cube-pune/bengali-sentence-bert-nli'> Bengali SBERT</a> <br> <a href='https://huggingface.co/l3cube-pune/punjabi-sentence-bert-nli'> Punjabi SBERT</a> <br> <a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> Indic SBERT (multilingual)</a> <br> This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('l3cube-pune/hindi-sentence-similarity-sbert') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('l3cube-pune/hindi-sentence-similarity-sbert') model = AutoModel.from_pretrained('l3cube-pune/hindi-sentence-similarity-sbert') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ```
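Because this is a similarity model and the snippets above only print raw embeddings, the short sketch below scores sentence pairs with the sentence-transformers `util.cos_sim` helper; the sentences are taken from the widget examples above.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('l3cube-pune/hindi-sentence-similarity-sbert')

sentences = [
    "एक आदमी एक रस्सी पर चढ़ रहा है",  # a man is climbing a rope
    "एक आदमी एक रस्सी पर चढ़ता है",   # a man climbs a rope
    "एक आदमी बांसुरी बजाता है",       # a man plays the flute
]

# Encode the sentences and compute pairwise cosine similarity.
embeddings = model.encode(sentences, convert_to_tensor=True)
scores = util.cos_sim(embeddings, embeddings)
print(scores)  # the first two sentences should score far higher than the third
```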
ignos/Mistral-T5-7B-v1
ignos
"2023-12-18T19:04:38Z"
1,546
7
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-18T18:09:44Z"
--- license: apache-2.0 --- # Model Card for Model ID This model is a fine-tune of [Toten5/Marcoroni-neural-chat-7B-v2](https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v2) ## Model Details ### Model Description - **Developed by:** Ignos - **Model type:** Mistral - **License:** Apache-2.0 ## Uses This model was created to improve instruction-following behavior. ## Bias, Risks, and Limitations It carries the same bias, risks, and limitations as its base models. ## Training Details ### Training Data - [tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) ### Training Procedure - Trained with a QLoRA approach and merged with the base model. ### Results - Hugging Face evaluation pending #### Summary ## Technical Specifications ### Model Architecture and Objective - Model based on the Mistral architecture ### Compute Infrastructure - Training on RunPod #### Hardware - 3 x RTX 4090 - 48 vCPU 377 GB RAM #### Software - Axolotl 0.3.0 ### Framework versions - PEFT 0.6.0
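Since the card has no quick-start snippet, here is a minimal sketch of loading the model with the standard transformers causal-LM API; the prompt and generation settings are illustrative assumptions, not values from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ignos/Mistral-T5-7B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example instruction-style prompt (illustrative only).
inputs = tokenizer("Explain what QLoRA fine-tuning is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```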
TheBloke/DiscoLM_German_7b_v1-GGUF
TheBloke
"2024-01-24T21:24:01Z"
1,546
25
transformers
[ "transformers", "gguf", "mistral", "Mistral", "finetune", "chatml", "DPO", "German", "Deutsch", "synthetic data", "de", "en", "base_model:DiscoResearch/DiscoLM_German_7b_v1", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
"2024-01-18T20:12:06Z"
---
base_model: DiscoResearch/DiscoLM_German_7b_v1
inference: false
language:
- de
- en
license: apache-2.0
model-index:
- name: DiscoLM_German_7b_v1
  results: []
model_creator: Disco Research
model_name: DiscoLM German 7B v1
model_type: mistral
prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant '
quantized_by: TheBloke
tags:
- Mistral
- finetune
- chatml
- DPO
- German
- Deutsch
- synthetic data
---

<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# DiscoLM German 7B v1 - GGUF
- Model creator: [Disco Research](https://huggingface.co/DiscoResearch)
- Original model: [DiscoLM German 7B v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Disco Research's DiscoLM German 7B v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note that, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

<!-- README_GGUF.md-about-gguf end -->

<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF)
* [Disco Research's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [discolm_german_7b_v1.Q2_K.gguf](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF/blob/main/discolm_german_7b_v1.Q2_K.gguf) | Q2_K | 2 | 2.72 GB | 5.22 GB | significant quality loss - not recommended for most purposes |
| [discolm_german_7b_v1.Q3_K_S.gguf](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF/blob/main/discolm_german_7b_v1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB | 5.66 GB | very small, high quality loss |
| [discolm_german_7b_v1.Q3_K_M.gguf](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF/blob/main/discolm_german_7b_v1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB | 6.02 GB | very small, high quality loss |
| [discolm_german_7b_v1.Q3_K_L.gguf](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF/blob/main/discolm_german_7b_v1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB | 6.32 GB | small, substantial quality loss |
| [discolm_german_7b_v1.Q4_0.gguf](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF/blob/main/discolm_german_7b_v1.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB | 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [discolm_german_7b_v1.Q4_K_S.gguf](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF/blob/main/discolm_german_7b_v1.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB | 6.64 GB | small, greater quality loss |
| [discolm_german_7b_v1.Q4_K_M.gguf](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF/blob/main/discolm_german_7b_v1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB | 6.87 GB | medium, balanced quality - recommended |
| [discolm_german_7b_v1.Q5_0.gguf](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF/blob/main/discolm_german_7b_v1.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB | 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [discolm_german_7b_v1.Q5_K_S.gguf](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF/blob/main/discolm_german_7b_v1.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB | 7.50 GB | large, low quality loss - recommended |
| [discolm_german_7b_v1.Q5_K_M.gguf](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF/blob/main/discolm_german_7b_v1.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB | 7.63 GB | large, very low quality loss - recommended |
| [discolm_german_7b_v1.Q6_K.gguf](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF/blob/main/discolm_german_7b_v1.Q6_K.gguf) | Q6_K | 6 | 5.94 GB | 8.44 GB | very large, extremely low quality loss |
| [discolm_german_7b_v1.Q8_0.gguf](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF/blob/main/discolm_german_7b_v1.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB | 10.20 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/DiscoLM_German_7b_v1-GGUF and below it, a specific filename to download, such as: discolm_german_7b_v1.Q4_K_M.gguf. Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/DiscoLM_German_7b_v1-GGUF discolm_german_7b_v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/DiscoLM_German_7b_v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/DiscoLM_German_7b_v1-GGUF discolm_german_7b_v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m discolm_german_7b_v1.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./discolm_german_7b_v1.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,      # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant",  # Prompt
  max_tokens=512,        # Generate up to 512 tokens
  stop=["<|im_end|>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True              # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./discolm_german_7b_v1.Q4_K_M.gguf", chat_format="chatml")  # Set chat_format according to the model you are using; this model uses ChatML
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.
If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donors!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Disco Research's DiscoLM German 7B v1


# DiscoLM German 7b v1

![DiscoLM_Logo](discolm_german.png)

## Table of Contents

1. [Introduction](#introduction)
2. [Demo](#demo)
3. [Downloads](#Downloads)
4. [Prompt Format](#prompt-format)
5. [Results](#results)
6. [Evaluation](#evaluation)
7. [Dataset](#dataset)
8. [Limitations & Biases](#limitations--biases)
9. [Acknowledgements](#acknowledgements)
10. [About DiscoResearch](#about-discoresearch)
11. [Disclaimer](#disclaimer)

# Introduction

**DiscoLM German 7b** is a Mistral-based large language model with a focus on German-language applications and the successor of the [EM German](https://huggingface.co/jphme/em_german_leo_mistral) model family. It was trained on a large dataset of instructions in German and English, with an SFT finetuning phase followed by additional DPO reinforcement learning.

The model is optimized for German text, providing proficiency in understanding, generating, and interacting with German language content while preserving its fluency in English and excelling at translation tasks.

Our goal with DiscoLM German was not to beat benchmarks, but to provide a robust and reliable model for everyday use that can serve as a drop-in replacement for ChatGPT and other proprietary models.
We find that the perceived quality of its German-language output is even higher than GPT-4 in many cases; however, it won't compete with larger models and top English 7b models for very complex reasoning, math or coding tasks.

# Demo

Please find a demo and try the model at [demo.discoresearch.org](https://demo.discoresearch.org/) (in case the demo is down and you have questions, you can contact us on our [Discord](https://discord.gg/ttNdas89f3)).

# Downloads

## Model Links

We will update the links as soon as the quants are available on HuggingFace.

| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| DiscoLM German 7b v1 | [Link](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1) | [Link](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GPTQ) | [Link](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF) | [Link](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-AWQ) |

# Prompt Format

DiscoLM German uses ChatML as the prompt format, which enables OpenAI endpoint compatibility and is supported by most inference libraries and frontends.

System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.

```
<|im_start|>system
Du bist ein hilfreicher Assistent.<|im_end|>
<|im_start|>user
Wer bist du?<|im_end|>
<|im_start|>assistant
Ich bin ein Sprachmodell namens DiscoLM German und ich wurde von DiscoResearch trainiert.<|im_end|>
```

This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method:

```python
messages = [
    {"role": "system", "content": "Du bist ein hilfreicher Assistent."},
    {"role": "user", "content": "Wer bist du?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```

When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response.

## Retrieval Format

You can use a special retrieval format to improve steerability and reduce hallucinations for RAG applications (but other, more standard formats should also work; this is purely optional).

Example:

```
### System:

Du bist ein hilfreicher Assistent. Für die folgende Aufgabe stehen dir zwischen den Tags BEGININPUT und ENDINPUT mehrere Quellen zur Verfügung. Metadaten zu den einzelnen Quellen wie Autor, URL o.ä. sind zwischen BEGINCONTEXT und ENDCONTEXT zu finden, danach folgt der Text der Quelle. Die eigentliche Aufgabe oder Frage ist zwischen BEGININSTRUCTION und ENDINSTRUCTION zu finden. Beantworte diese ausschließlich mit Informationen aus den gegebenen Quellen und gebe die Information zur genutzten Quelle unter "Quelle:" an. Sollten die Quellen keine relevanten Informationen enthalten, antworte: "Mit den gegebenen Informationen ist diese Frage nicht zu beantworten."

### User Prompt:

BEGININPUT
BEGINCONTEXT
url: https://this.is.fake.news
time: 2089-09-01
ENDCONTEXT
Buxtehude ist die größte Stadt Deutschlands mit 96.56 Millionen Einwohnern.
ENDINPUT

BEGININSTRUCTION
Was ist die größte deutsche Stadt?
ENDINSTRUCTION

### Model Answer:

Die größte deutsche Stadt ist Buxtehude.

Quelle:
url: https://this.is.fake.news
time: 2089-09-01
```

## Function Calling

The model also supports structured outputs/function calling, albeit this is a very experimental feature and YMMV.
This will be improved in the future. The model will prefix function calls with `<functioncall>`, and you can provide results in response with `<functionresponse>` for multi-turn applications.

Example:

```
### System:

Du bist ein hilfreicher Assistent. Extrahiere alle Personen aus den Eingaben des Users.

Du hast Zugriff auf folgende Funktionen:

{'name': 'PersonList', 'description': 'Extrahiere die Namen aller im Text vorkommenden Personen', 'parameters': {'$defs': {'Person': {'description': 'Details über eine person', 'properties': {'name': {'title': 'Name', 'type': 'string'}, 'job': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Job'}, 'age': {'anyOf': [{'type': 'integer'}, {'type': 'null'}], 'title': 'Age'}}, 'required': ['name', 'job', 'age'], 'title': 'Person', 'type': 'object'}}, 'properties': {'person_list': {'items': {'$ref': '#/$defs/Person'}, 'title': 'Person List', 'type': 'array'}}, 'required': ['person_list'], 'type': 'object'}}

### User Prompt:

Björn (25) und Jan sind die Gründer von ellamind.

### Model Answer:

<functioncall> {"name": "PersonList", "arguments": '{"person_list": ["{"name": "Björn", "job": "founder", "age": 25}, {"name": "Jan", "job": "founder", "age": null}]}'}
```

# Results

*to follow*

# Evaluation

As written above, we believe that current benchmarks don't capture the full spectrum of LLM capabilities very well. We didn't look at any benchmark results (besides training losses) until the work on DiscoLM was finished, and didn't include any data resembling common benchmark formats in our training data.

That said, preliminary results with a German version of MT Bench show promising results: while lagging on coding and extraction tasks, DiscoLM German 7b performs not far below GPT-3.5-turbo on many tasks and even significantly outperforms it in the reasoning category.

![MTBench_DE_Results](mtbench_de_discolm_german_7b.png)

Additional benchmark results will follow. The biggest strength of this model (language quality as perceived by native speakers) can't yet be captured in a benchmark - please let us know if you have an idea how to change this!

# Dataset

The dataset is a mixture of multi-turn chats, retrieval instructions and synthetically generated instructions spanning many topics and applications.

# Limitations & Biases

This model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs, and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.

# Acknowledgements

DiscoLM German is a [DiscoResearch](https://huggingface.co/DiscoResearch) project led by [JP Harries](https://huggingface.co/jphme) and supported by [Björn Plüster](https://huggingface.co/bjoernp) and [Daniel Auras](https://huggingface.co/rasdani).

We thank [HessianAI](https://hessian.ai/) for providing compute & support for various DiscoResearch projects and our friends at [LAION](https://laion.ai) for their work on LeoLM and scientific advice.

Development of DiscoLM German 7b was sponsored by **[ellamind](https://ellamind.com)**, where some of our founders are working on creating customized models for business applications with a focus on non-English language applications. Please get in contact if you need customized models for your business!
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) # About DiscoResearch DiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our [Discord](https://discord.gg/ttNdas89f3), share your opinions and ideas, and advance open LLM research with us! # Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place. <!-- original-model-card end -->
apple/MobileCLIP-S2-OpenCLIP
apple
"2024-06-12T11:47:34Z"
1,546
0
open_clip
[ "open_clip", "safetensors", "clip", "zero-shot-image-classification", "arxiv:2311.17049", "arxiv:2103.00020", "arxiv:2303.15343", "arxiv:2309.17425", "license:other", "region:us" ]
zero-shot-image-classification
"2024-06-07T14:48:32Z"
---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: other
license_name: apple-ascl
license_link: LICENSE
---

# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training

MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024), by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.

This repository contains the **MobileCLIP-S2** checkpoint for OpenCLIP.

![MobileCLIP Performance Figure](fig_accuracy_latency.png)

### Highlights

* Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x fewer seen samples.
* `MobileCLIP-B` (LT) attains zero-shot ImageNet performance of **77.2%**, which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures, and even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).

## Checkpoints

| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0)        | 13                     | 11.4 + 42.4                   | 1.5 + 1.6                     | 67.8                                | 58.1                               |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1)        | 13                     | 21.5 + 63.4                   | 2.5 + 3.3                     | 72.6                                | 61.3                               |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2)        | 13                     | 35.7 + 63.4                   | 3.6 + 3.3                     | 74.4                                | 63.7                               |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B)          | 13                     | 86.3 + 63.4                   | 10.4 + 3.3                    | 76.8                                | 65.2                               |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT)  | 36                     | 86.3 + 63.4                   | 10.4 + 3.3                    | 77.2                                | 65.8                               |
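The card ships without usage code, so here is a minimal zero-shot classification sketch using OpenCLIP's standard Hub-loading path. It assumes `open_clip_torch` is installed and that this repo id (`apple/MobileCLIP-S2-OpenCLIP`) is loadable via the `hf-hub:` prefix; `cat.png` is a placeholder image path:

```python
import torch
from PIL import Image
import open_clip

repo = 'hf-hub:apple/MobileCLIP-S2-OpenCLIP'
model, _, preprocess = open_clip.create_model_and_transforms(repo)
tokenizer = open_clip.get_tokenizer(repo)
model.eval()

image = preprocess(Image.open("cat.png")).unsqueeze(0)  # placeholder image path
text = tokenizer(["a photo of a cat", "a photo of a dog", "a photo of a car"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, then take the softmax over scaled cosine similarities
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", probs)
```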
dbddv01/gpt2-french-small
dbddv01
"2023-05-05T11:57:48Z"
1,545
7
transformers
[ "transformers", "pytorch", "jax", "safetensors", "gpt2", "text-generation", "french", "model", "fr", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
---
language: "fr"
tags:
- french
- gpt2
- model
---

A small French language model for French text generation (and possibly more NLP tasks...)

**Introduction**

This French GPT-2 model is based on the OpenAI GPT-2 small model.

It was trained on a <b>very small (190 MB) dataset</b> from French Wikipedia using transfer learning and fine-tuning techniques, in just over a day on one Colab Pro instance with a single 16 GB GPU.

It was created by applying the recipe of <b>Pierre Guillou</b>; see https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787

It is a proof of concept showing that it is possible to obtain a language model in any language with low resources.

It was fine-tuned from the English pre-trained GPT-2 small model using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the fastai v2 deep learning framework, using all of fastai v2's fine-tuning techniques. It is now available on Hugging Face.

For further information or requests, please see "Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)".

The model might be improved by using a larger dataset and more powerful training infrastructure. At the very least, this one can be used for small fine-tuning experiments (e.g., with aitextgen).

PS: I've lost the metrics, but the model speaks French with some minor grammar issues, and text coherence is somewhat limited.
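As the card gives no usage snippet, here is a minimal generation sketch with the `transformers` pipeline, assuming the repo id `dbddv01/gpt2-french-small` from this card's header:

```python
from transformers import pipeline

# Load the French GPT-2 model for text generation
generator = pipeline("text-generation", model="dbddv01/gpt2-french-small")

result = generator(
    "Il était une fois",   # "Once upon a time"
    max_new_tokens=50,
    do_sample=True,
    top_k=50,
)
print(result[0]["generated_text"])
```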
mradermacher/Meta-Llama-3-8B-Instruct-GGUF
mradermacher
"2024-05-05T14:47:50Z"
1,545
0
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "en", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-05-02T11:43:53Z"
--- base_model: NousResearch/Meta-Llama-3-8B-Instruct extra_gated_button_content: Submit extra_gated_fields: Affiliation: text ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox Country: country Date of birth: date_picker First Name: text Last Name: text geo: ip_location extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. 
You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others’ rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. 
Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n \ 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n \ 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" language: - en library_name: transformers license: other license_link: LICENSE license_name: llama3 quantized_by: mradermacher tags: - facebook - meta - pytorch - llama - llama-3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hfhfix --> <!-- ### vocab_type: --> static quants of https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q2_K.gguf) | Q2_K | 3.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.IQ3_M.gguf) | IQ3_M | 3.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
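For readers who prefer the Python API over the CLI, here is a minimal sketch for fetching a single quant and loading it locally. It assumes `huggingface_hub` and a recent `llama-cpp-python` (one that picks up the chat template embedded in the GGUF) are installed; the Q4_K_M file is chosen because the table above marks it "recommended":

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file (not the whole repo)
path = hf_hub_download(
    repo_id="mradermacher/Meta-Llama-3-8B-Instruct-GGUF",
    filename="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=8192)  # Llama 3 8B supports an 8K context
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```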
QuantFactory/Llama3-TAIDE-LX-8B-Chat-Alpha1-GGUF
QuantFactory
"2024-06-04T08:11:37Z"
1,545
1
null
[ "gguf", "text-generation", "base_model:taide/Llama3-TAIDE-LX-8B-Chat-Alpha1", "license:other", "region:us" ]
text-generation
"2024-06-02T13:47:19Z"
---
license: other
license_name: llama3-taide-models-community-license-agreement
license_link: https://drive.google.com/file/d/12-Q0WWSjG0DW6CqJQm_jr5wUGRLeb-8p/view
base_model: taide/Llama3-TAIDE-LX-8B-Chat-Alpha1
pipeline_tag: text-generation
---

# Llama3-TAIDE-LX-8B-Chat-Alpha1-GGUF

This is a quantized version of [taide/Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1), created using llama.cpp.

# Model Description

* The [TAIDE project](https://taide.tw/index) aims to develop a generative AI dialogue engine model that is tailored to the linguistic and cultural characteristics of Taiwan, while also establishing a trustworthy AI environment. By combining academic, industrial, and research resources, the project seeks to advance the development of trustworthy generative AI, enhancing Taiwan's international competitiveness, promoting industrial development, and reducing dependence on foreign technologies.
* The Llama3 TAIDE series models are based on Meta's released [LLaMA3-8b model](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/), incorporating text and training materials from various fields in Taiwan to enhance the model's ability to respond in Traditional Chinese and perform specific tasks. The publicly released models are as follows:
    * [Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1): Based on LLaMA3-8b, continually pretrained on Traditional Chinese data and enhanced for office tasks and multi-turn dialogue capabilities through instruction tuning. Suitable for scenarios involving chat dialogue or task assistance. Llama3-TAIDE-LX-8B-Chat-Alpha1 also provides a [4-bit quantization model](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1-4bit). The quantized model is offered primarily for user convenience, but it may affect performance and introduce unforeseen issues; users are advised to take note of this.

# Model Parameters

* Parameters: 8B
* Max context length: 8K
* Training tokens in Traditional Chinese: 43B
* Training time: 2336 H100 GPU hours

# Features

* Strict vetting of training data to improve the model's trustworthiness and applicability.
* Enhanced performance on tasks commonly used in office settings, such as summarization, writing articles, writing letters, and translating between Chinese and English.
* Strengthened understanding of Taiwan-specific culture, terminology, and context.
* Multi-turn conversation capabilities.
# Examples | Task | User input | Model output | | --- | -------- | ------- | | Writing articles | 請以以下內容為基礎,寫一篇文章:撰寫一篇作文,題目為《一張舊照片》,內容要求為:選擇一張令你印象深刻的照片,說明令你印象深刻的原因,並描述照片中的影像及背後的故事。記錄成長的過程、與他人的情景、環境變遷和美麗的景色。 | 一張舊照片,承載著時光的痕跡和回憶的溫度。我還記得那張照片是怎麼出現的,那是一張泛黃的黑白相片,裱在牆上,訴說著一段屬於我家族的故事。<br><br>那張照片中,我的祖父母站在田野間,後方是綿延的山巒,前方則是一片金黃的稻田。這幅景象令我印象深刻,因為這是我的童年回憶,也是我成長過程中最美好的風景線。<br><br>我出生在台灣鄉下,從小就生活在這樣一個純樸的環境中。每當我跟著家人在田裡工作時,就會看到這樣的美景。金黃的稻穗隨著微風搖曳,山巒則靜靜地矗立在遠方,給人一種安詳與祥和的感覺。<br><br>那張照片拍攝於民國50年代,當時台灣經濟尚未起飛,農業為主要產業。我的祖父母都是農民,他們辛勤耕種,為我們家提供溫飽。每當我看到那張照片,就會想起他們為我們所做的犧牲和努力。<br><br>除了美麗的風景外,照片中還有一段動人的故事。在我還很小的時候,家裡發生了一場大火,將我們的房子和所有財產都燒光了。幸好全家人都平安無事,但生活卻變得十分困苦。我們被迫搬到另一個村莊,在那裡重新開始生活。<br><br>那張照片也記錄了那段艱辛的歲月。雖然我們失去了許多東西,但家人之間的感情卻更加深厚。我們互相扶持,共同面對生活上的種種挑戰。<br><br>如今,我已長大成人,在都市裡工作。但每當我看到那張舊照片,我都會想起那段簡單而美好的時光。那些美麗的風景、親人的笑容,以及那段充滿挑戰卻又充滿希望的日子,都將永遠鐫刻在我的記憶深處。<br><br>這張舊照片,不僅僅是一張照片,更是一個寶貴的家族史,一個關於我們如何克服困難、珍惜現在的故事。我將把這張照片好好保存下來,讓後代也能夠認識我們的根源,並且銘記那些值得珍藏的美好回憶。 | | Writing letters | 請以品牌經理的身份,給廣告公司的創意總監寫一封信,提出對於新產品廣告宣傳活動的創意建議。 | [您的姓名]<br>[您的職稱]<br>[您的公司名稱]<br>[郵件地址]<br>[電話號碼]<br>[發文日期]<br><br>[創意總監姓名]<br>[廣告公司名稱]<br>[廣告公司地址]<br><br>主旨:關於新產品宣傳活動創意發想,請查照。<br><br>尊敬的[創意總監姓名],<br><br>我希望這封信能找到你忙碌中好好休息。我寫這封信,是為了分享我對於我們即將推出的新產品宣傳活動一些創意發想。我深信,這些想法能幫助我們新產品在市場上脫穎而出,吸引消費者的目光。<br><br>首先,我建議我們可以利用社群媒體平台,像是Facebook、Instagram、YouTube等,製作一系列引人入勝的短片和貼文。這些內容可以展現新產品的特色和優點,同時也能傳達出我們品牌重視環保、健康的核心價值。我們可以與網紅或意見領袖合作,讓他們試用並分享他們的使用經驗,這將有助於增加我們產品的可信度和能見度。<br><br>其次,我們可以舉辦新產品試吃/試用活動,邀請民眾親身體驗我們的新產品。我們可以在各大百貨公司、超市、以及我們的門市舉辦這些活動,讓參與者免費試吃/試用新產品,並填寫問卷回饋意見。這種互動的方式,不僅能收集到寶貴的市場反饋,還能讓民眾對我們的品牌產生正面的印象。<br><br>第三,我們可以與當地餐飲業者合作,將我們的新產品納入他們的菜單中。這將使我們的產品接觸到更廣泛的客群,同時也能證明我們的產品在烹飪和美食產業中具有價值。<br><br>第四,我們可以舉辦一場新產品發表會,邀請媒體、業界人士、以及重要客戶參加。在這場活動中,我們可以展示新產品的各項特色和優勢,並由專業廚師示範如何將新產品融入各式料理中。這將是向大眾展示我們品牌創新精神和專業能力的絕佳機會。<br><br>最後,我建議我們可以製作一系列精美的包裝和禮盒,以吸引消費者在節慶和送禮時選擇我們的產品。精緻的包裝將能提升我們產品的價值感,同時也能傳達出我們品牌對細節和品質的堅持。<br><br>我深信,這些創意發想將有助於我們新產品在市場上獲得成功。若您對這些想法有任何意見或建議,請不吝與我分享。我期待著與您合作,共同為[您的公司名稱]的成長和成功而努力。<br><br>謝謝您抽空閱讀我的信,並期待著與您進一步討論這些想法。<br><br>敬禮<br><br>[您的姓名]<br>[您的職稱]<br>[您的公司名稱] | | Summarization | 請將這篇文章精簡條理化:「產業創新條例第10條之2及第72條條文修正案」俗稱「台版晶片法」,針對半導體、電動車、5G等技術創新且居國際供應鏈關鍵地位公司,提供最高25%營所稅投抵優惠,企業適用要件包含當年度研發費用、研發密度達一定規模,且有效稅率達一定比率。<br>為因應經濟合作暨發展組織(OECD)國家最低稅負制調整,其中有效稅率門檻,民國112年訂為12%,113年料將提高至15%,但仍得審酌國際間最低稅負制實施情形。<br>經濟部官員表示,已和財政部協商進入最後階段,除企業研發密度訂在6%,目前已確認,企業購置先進製程的設備投資金額達100億元以上可抵減。<br>財政部官員表示,研商過程中,針對台灣產業與其在國際間類似的公司進行深入研究,在設備部分,畢竟適用產創10之2的業者是代表台灣隊打「國際盃」,投入金額不達100億元,可能也打不了。<br>至於備受關注的研發費用門檻,經濟部官員表示,歷經與財政部來回密切討論,研發費用門檻有望落在60億至70億元之間。<br>財政部官員指出,研發攸關台灣未來經濟成長動能,門檻不能「高不可攀」,起初雖設定在100億元,之所以會調降,正是盼讓企業覺得有辦法達得到門檻、進而適用租稅優惠,才有動力繼續投入研發,維持國際供應鏈關鍵地位。<br>經濟部官員表示,因廠商研發費用平均為30、40億元,其中,IC設計業者介於30億至60億元範圍,若將門檻訂在100億元,符合條件的業者較少、刺激誘因不足;此外,若符合申請門檻的業者增加,將可提高企業在台投資金額,財政部稅收也能因此獲得挹注。<br>IC設計業者近日頻頻針對產創10之2發聲,希望降低適用門檻,加上各國力拚供應鏈自主化、加碼補助半導體產業,經濟部官員表示,經濟部和財政部就產創10之2達成共識,爭取讓更多業者受惠,盼增強企業投資力道及鞏固台灣技術地位。<br>財政部官員表示,租稅獎勵的制定必須「有為有守」,並以達到獎勵設置目的為最高原則,現階段在打「國內盃」的企業仍可適用產創第10條、10之1的租稅優惠,共同壯大台灣經濟發展。<br>經濟部和財政部正就研發費用門檻做最後確認,待今明兩天預告子法之後,約有30天時間,可與業界進一步討論及調整,盼產創10之2能在6月上路。 | 文章重點摘要如下:<br><br>- 「台版晶片法」或《產業創新條例第10條之2及第72條條文修正案》,旨在鼓勵關鍵技術創新企業,特別是那些在半導體、電動車、5G技術等領域且在國際供應鏈中扮演關鍵角色者。<br><br>- 企業若符合特定條件,包括當年度研發費用及研發密度達標,可享最高25%的營所稅抵減優惠。研發密度目標設為6%,而企業為取得抵減,必須證明其有效稅率達到特定比率,起初設定為112年的12%,113年提高至15%,但可能會依國際稅制的實施情形進行調整。<br><br>- 政府在制定政策時,考量到OECD最低稅負制的調整,因此會視情況調整有效稅率門檻。<br><br>- 政府部門已就法案內容與企業利益相關者進行協商,確定一些細節,例如企業投資先進製程設備的門檻設為100億元新台幣以供抵減。<br><br>- 研發費用門檻經過討論,可能從最初提議的100億元下調至60億至70億元之間,以激勵更多企業符合資格並投入研發,維持台灣在國際供應鏈中的關鍵地位。<br><br>- 經濟部官員表示,調降門檻的目的是讓更多企業覺得能夠達成標準,從而獲得應用租稅優惠的動力,並可能藉此提高在台投資金額,進而增加政府稅收。<br><br>- 由於IC設計業者的研發費用普遍較低,若門檻設得過高,符合條件的企業將很少,因此有必要調整。<br><br>- 
在各國推動供應鏈自主化並增加對半導體產業的補助之際,經濟部正努力確保更多企業能從《產業創新條例》第10條之2中受益,以增強投資和鞏固台灣的技術地位。<br><br>- 政府強調租稅優惠的制定應符合獎勵目標,在此同時,也將繼續提供其他租稅優惠給在國內經營的企業,以促進台灣經濟成長。<br><br>- 最終的研發費用門檻將由經濟部和財政部共同確定,在預告相關子法後,將有約30天的時間供各界討論並提出調整建議,目標是讓這項新政策在6月上路。 |
| Translation | 以下提供英文內容,請幫我翻譯成中文。Dongshan coffee is famous for its unique position, and the constant refinement of production methods. The flavor is admired by many caffeine afficionados. | 東山咖啡以獨特的位置,加上不斷精進的製作手法而聞名,風味更是讓許多咖啡癡趨之若鶩。 |

# Usage

* Sample code
    * The sample code and documentation will be released on GitHub later (a short GGUF loading sketch is included at the end of this card).
* Prompt template
    * Normal QA
        ```python
        chat = [
            {"role": "user", "content": "{question}"},
        ]
        prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
        ```
        * Replace {question} with user input
    * QA with system prompt
        ```python
        chat = [
            {"role": "system", "content": "{sys}"},
            {"role": "user", "content": "{question}"},
        ]
        prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
        ```
        * Replace {sys} with a system prompt, e.g. 你是一個來自台灣的AI助理,你的名字是 TAIDE,樂於以台灣人的立場幫助使用者,會用繁體中文回答問題。
        * Replace {question} with user input
    * Multi-turn conversation
        ```python
        chat = [
            {"role": "system", "content": "{sys}"},
            {"role": "user", "content": "{question1}"},
            {"role": "assistant", "content": "{model_answer_1}"},
            {"role": "user", "content": "{question2}"},
        ]
        prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
        ```
        * Replace {sys} with a system prompt, e.g. 你是一個來自台灣的AI助理,你的名字是 TAIDE,樂於以台灣人的立場幫助使用者,會用繁體中文回答問題。
        * Replace {question1} with user input 1
        * Replace {model_answer_1} with model response 1
        * Replace {question2} with user input 2
    * For more details, please refer to the [Llama 3 documentation](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/)

# Training methods

* Software / hardware spec
    * GPU: H100
    * Training framework: PyTorch
* Data preprocessing
    * Character normalization
    * Deduplication
    * Denoising
        * HTML tags and JavaScript in web content
        * Non-standard characters or garbage characters
        * Posts with an insufficient number of characters
        * Specific formats such as extra line breaks added for formatting purposes
    * Removing personal information such as emails and phone numbers
    * Removing inappropriate content such as gambling, pornography, etc.
* Continuous pretraining (CP)
    * Supplementing the model with a large amount of reliable Traditional Chinese knowledge.
    * Hyperparameters
        * optimizer: AdamW
        * learning rate: 1e-4
        * batch size: 1M tokens
        * epoch: 1
* Fine-tuning (FT)
    * Enabling the model to answer questions in Traditional Chinese.
    * Hyperparameters
        * optimizer: AdamW
        * learning rate: 5e-5
        * batch size: 256K tokens
        * epoch: 3

# Training Data

* Continuous pre-training data (about 140 GB)

| Dataset | Description |
| --- | -------- |
| Litigation Data | [Civil litigation data](https://judgment.judicial.gov.tw/FJUD/default.aspx) from various levels of courts in the judicial rulings, including data from 2013/01 to 2023/12. |
| CNA news | The [CNA news](https://www.cna.com.tw/) includes daily news articles from June 1993 to June 2023, spanning a period of 30 years. The content covers various domains such as domestic and international politics, society, economy, culture, education, and lifestyle. |
| ETtoday news | [ETtoday news](https://www.ettoday.net/) data, including data from 2011/10 to 2023/12. |
| Legislative Yuan Gazette | The [Legislative Yuan Gazette](https://ppg.ly.gov.tw/ppg/) contains data from the 1st session of the 8th term to the 7th session of the 10th term. |
# Training methods

* Software / hardware spec
    * GPU: H100
    * Training framework: PyTorch
* Data preprocessing
    * Character normalization
    * Deduplication
    * Denoising
        * HTML tags and JavaScript in web content
        * Non-standard or garbage characters
        * Posts with an insufficient number of characters
        * Removing specific formats, such as extra line breaks added for formatting purposes
    * Removing personal information, such as emails and phone numbers
    * Removing inappropriate content, such as gambling, pornography, etc.
* Continuous pretraining (CP)
    * Supplementing the model with a large amount of reliable Traditional Chinese knowledge.
    * Hyperparameters
        * optimizer: AdamW
        * learning rate: 1e-4
        * batch size: 1M tokens
        * epochs: 1
* Fine-tuning (FT)
    * Enabling the model to answer questions in Traditional Chinese.
    * Hyperparameters
        * optimizer: AdamW
        * learning rate: 5e-5
        * batch size: 256K tokens
        * epochs: 3

# Training Data

* Continuous pre-training data (about 140GB)

| Dataset | Description |
| --- | -------- |
| Litigation Data | [Civil litigation data](https://judgment.judicial.gov.tw/FJUD/default.aspx) from various levels of courts in the judicial rulings, including data from 2013/01 to 2023/12. |
| CNA news | The [CNA news](https://www.cna.com.tw/) includes daily news articles from June 1993 to June 2023, spanning a period of 30 years. The content covers various domains such as domestic and international politics, society, economy, culture, education, and lifestyle. |
| ETtoday news | [ETtoday news](https://www.ettoday.net/) data, including data from 2011/10 to 2023/12. |
| Legislative Yuan Gazette | The [Legislative Yuan Gazette](https://ppg.ly.gov.tw/ppg/) contains data from the 1st session of the 8th term to the 7th session of the 10th term. |
| Publisher Website Book Introduction | Includes book introduction data from the websites of the [SunColor](https://www.suncolor.com.tw/) and [Gotop](https://www.gotop.com.tw/) publishers. |
| Abstracts of GRB research projects | [GRB](https://www.grb.gov.tw/) is an information system that compiles research projects funded by government grants and their outcome reports. This dataset primarily includes research project abstracts from 1993 to 2023, including both Chinese abstracts and their English counterparts. |
| Academic conference proceedings abstracts | The [database](https://sticnet.stpi.narl.org.tw/sticloc/ttscalle?meet:) contains academic conference proceedings held in Taiwan from 1988 to 2009. |
| Taiwan Panorama magazine | [Taiwan Panorama magazine](https://www.taiwan-panorama.com/) contains articles from July 1993 to June 2023, spanning 30 years. The content focuses on Taiwanese culture, tourism, and local customs. |
| 樂詞網 | 《[樂詞網](https://terms.naer.edu.tw/)》covers approximately 187,000 academic terms in the humanities and social sciences, along with their translations. |
| Data from various ministries and commissions | Including partial data from government department websites such as the Executive Yuan's "[National Overview](https://www.ey.gov.tw/state/)", the Ministry of Culture's "[National Cultural Memory Bank](https://memory.culture.tw/)", the National Development Council's "[Archives Support Teaching Network](https://art.archives.gov.tw/index.aspx)", the Ministry of Transportation's "[Traffic Safety Portal](https://168.motc.gov.tw/)", etc. |
| Business Today | [Business Today](https://www.businesstoday.com.tw/) Magazine is a weekly magazine focused on finance. The dataset includes articles from 2008/01 to 2023/07. |
| Mandarin and idiom dictionary from the Ministry of Education | Dataset including:<br>[Idiom Dictionary](https://dict.idioms.moe.edu.tw/search.jsp?webMd=1&la=0): contains 5,338 idioms, including definitions, original stories, usage explanations, and example sentences.<br>[Revised Mandarin Dictionary](https://dict.revised.moe.edu.tw/?la=0&powerMode=0): contains Chinese words and various vocabulary, including pronunciation, radicals, definitions, and other information, totaling approximately 165,539 entries.<br>[Concise Mandarin Dictionary](https://dict.concised.moe.edu.tw/?la=0&powerMode=0): a condensed version of the "Revised Mandarin Dictionary", containing a total of 45,247 entries. |
| SCITechVista | The dataset includes science news and popular science articles from the [SCITechVista](https://scitechvista.nat.gov.tw/) website. |
| iKnow | The [iKnow](https://iknow.stpi.narl.org.tw/) platform provides information on market trends, strategic analysis, patent knowledge, and technology transaction information for Taiwan and the global technology industry. The dataset includes data from 2005/01 to 2023/07. |
| Science Development Monthly Magazine | [Science Development Monthly Magazine](https://ejournal.stpi.narl.org.tw/sd) is a popular science publication published by the National Science Council (NSC) to promote science education. It includes articles from 2004/10 to 2020/12. In 2021, the magazine was relaunched as the "[CharmingSCITech](https://www.charmingscitech.nat.gov.tw/)" quarterly, providing new knowledge on international technology issues. |
| Legislation Database | The [Legislation Database](https://law.moj.gov.tw/) includes the latest central regulations, rules, draft bills, and local regulations issued by government agencies as of 2023/10. |
| Local Government Tourism Websites | Covering partial data from tourism websites of local government counties and cities in Taiwan. |
| Curriculum Guidelines from the National Institute of Education | The dataset includes curriculum guidelines for different subjects at various levels of education. |
| CNA's English and Chinese Name Translation Database | The English and Chinese Name Translation Database of the Central News Agency (CNA) collects translations of foreign and Chinese surnames, personal names, organizations, and place names used in news. |
| Fairy tales | A total of 20 fairy tale books, including "Tom Sawyer," "Peter Pan," "Alice's Adventures in Wonderland," "Uncle Long Legs," and more. |
| RedPajama-Data-V2 | English data extracted from the [RedPajama-Data-v2](https://github.com/togethercomputer/RedPajama-Data) multilingual dataset. |
| MathPile-commercial | A mathematics-focused dataset obtained from [MathPile-commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial). |
| Traditional Chinese Wikipedia Articles | The content of all articles in [Traditional Chinese Wikipedia](https://zh.wikipedia.org/zh-tw/%E4%B8%AD%E6%96%87%E7%BB%B4%E5%9F%BA%E7%99%BE%E7%A7%91), up to January 2023. |
| github-code-clean | An open-source code dataset from GitHub, with unlicensed code and documents removed. |

* Fine-tuning data
    * The TAIDE team uses the LLaMA2 series models to generate fine-tuning data, producing single-turn and multi-turn conversations on topics such as world knowledge, creative writing, general knowledge, translation, summarization, programming, and Taiwanese values. The fine-tuning data consists of 128K prompt-response pairs and will be released publicly later.

# Evaluation

* taide-bench
    * Data
        * Tasks include writing articles, writing letters, summarizing articles, translating from English to Traditional Chinese, and translating from Traditional Chinese to English. There are 500 questions in total.
        * data link: [taide-bench](https://huggingface.co/datasets/taide/taide-bench)
    * Evaluation method
        * LLM-as-a-judge, using GPT-4
        * code link: [taide-bench-eval](https://github.com/taide-taiwan/taide-bench-eval)
    * Scores

| Model | Translating from Traditional Chinese to English | Translating from English to Traditional Chinese | Summarization | Writing articles | Writing letters | Average |
| --- | ----- | ----- | ---- | ---- | ---- | --- |
| Llama3-TAIDE-LX-8B-Chat-Alpha1 | 7.770 | 8.280 | 8.495 | 9.605 | 8.950 | 8.620 |
| GPT3.5 | 8.880 | 8.810 | 7.450 | 9.490 | 8.750 | 8.676 |
| TAIDE-LX-7B-Chat | 7.165 | 7.685 | 7.720 | 9.635 | 9.110 | 8.263 |
| LLAMA2 7B | 6.075 | 4.475 | 5.905 | 2.625 | 3.040 | 4.424 |
| LLAMA2 13B | 6.480 | 6.135 | 6.110 | 2.565 | 3.000 | 4.858 |
| LLAMA2 70B | 6.975 | 6.375 | 6.795 | 2.625 | 2.990 | 5.152 |

# License

* [Llama3-TAIDE Models Community License Agreement](https://drive.google.com/file/d/12-Q0WWSjG0DW6CqJQm_jr5wUGRLeb-8p/view)

# Disclaimer

* Due to limitations in its design architecture and the inevitable biases in data, any response from the LLM model does not represent the stance of TAIDE. Additional security measures should be implemented before use, and responses may contain incorrect information. Users are advised not to fully trust the responses.
# Development Team

* [https://taide.tw/index/teamList](https://taide.tw/index/teamList)

# Useful links

* [TAIDE official website](https://taide.tw/index)
* [TAIDE Huggingface](https://huggingface.co/taide)
* [TAIDE Github](https://github.com/taide-taiwan)
* [Kuwa AI](https://kuwaai.org/)

# References

* [TAIDE official website](https://taide.tw/index)
digiplay/FishMix_v1.1
digiplay
"2024-06-04T17:00:01Z"
1,544
5
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-19T18:06:29Z"
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

Model info: https://civitai.com/models/15745?modelVersionId=27424

Sample image generated by Huggingface's API, using the prompt:

4k 8k best quality, high resolution, distinct image, (many (detailed) little cats) and one lady:1.3), focus on cat, little (detailed) cats around girl,background is back alley,sunlight, sitting, girl looking viewer, front view, (happy:1.3) , (kitten), pink dress, ,

![f8ca32dc-589b-4d69-9e48-86015bf9225e.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/Gft3_RbJ0NFvTqVqmkPiC.jpeg)

Original Author's DEMO images:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/230418ca-e751-4a48-7787-bc5dbf1b7d00/width=1024/301788.jpeg)
![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/f223b0d9-b8f3-4d7b-3399-5d3cdf30cd00/width=1024/301787.jpeg)
![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2421a1f1-2499-4c58-f126-afe3ab8e3400/width=1024/301783.jpeg)
![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d3d41b95-2eb4-409e-ba7e-37ce45f36600/width=1024/301782.jpeg)
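For local generation instead of the hosted API, a minimal 🧨 diffusers sketch might look like the following; the short prompt is an illustrative stand-in, not the full sample prompt above.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint as a standard Stable Diffusion pipeline
pipe = StableDiffusionPipeline.from_pretrained("digiplay/FishMix_v1.1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Generate one image from a short illustrative prompt
image = pipe("1girl, pink dress, kitten, back alley, sunlight", num_inference_steps=25).images[0]
image.save("fishmix_sample.png")
```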
AlekseyKorshuk/sdxl-v0-product-outpainting
AlekseyKorshuk
"2024-05-28T23:10:46Z"
1,544
5
diffusers
[ "diffusers", "safetensors", "diffusers:StableDiffusionXLInpaintPipeline", "region:us" ]
image-to-image
"2024-03-07T21:19:22Z"
Follow me:

- HuggingFace: https://huggingface.co/AlekseyKorshuk
- GitHub: https://github.com/AlekseyKorshuk
- Twitter / X: https://x.com/alekseykorshuk
RichardErkhov/M4-ai_-_tau-0.5B-gguf
RichardErkhov
"2024-06-24T22:14:48Z"
1,544
0
null
[ "gguf", "region:us" ]
null
"2024-06-24T22:10:28Z"
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

tau-0.5B - GGUF

- Model creator: https://huggingface.co/M4-ai/
- Original model: https://huggingface.co/M4-ai/tau-0.5B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tau-0.5B.Q2_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q2_K.gguf) | Q2_K | 0.23GB |
| [tau-0.5B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.IQ3_XS.gguf) | IQ3_XS | 0.24GB |
| [tau-0.5B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.IQ3_S.gguf) | IQ3_S | 0.25GB |
| [tau-0.5B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q3_K_S.gguf) | Q3_K_S | 0.25GB |
| [tau-0.5B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.IQ3_M.gguf) | IQ3_M | 0.26GB |
| [tau-0.5B.Q3_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q3_K.gguf) | Q3_K | 0.26GB |
| [tau-0.5B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q3_K_M.gguf) | Q3_K_M | 0.26GB |
| [tau-0.5B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q3_K_L.gguf) | Q3_K_L | 0.28GB |
| [tau-0.5B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.IQ4_XS.gguf) | IQ4_XS | 0.28GB |
| [tau-0.5B.Q4_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q4_0.gguf) | Q4_0 | 0.29GB |
| [tau-0.5B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.IQ4_NL.gguf) | IQ4_NL | 0.29GB |
| [tau-0.5B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q4_K_S.gguf) | Q4_K_S | 0.29GB |
| [tau-0.5B.Q4_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q4_K.gguf) | Q4_K | 0.3GB |
| [tau-0.5B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q4_K_M.gguf) | Q4_K_M | 0.3GB |
| [tau-0.5B.Q4_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q4_1.gguf) | Q4_1 | 0.3GB |
| [tau-0.5B.Q5_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q5_0.gguf) | Q5_0 | 0.32GB |
| [tau-0.5B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q5_K_S.gguf) | Q5_K_S | 0.32GB |
| [tau-0.5B.Q5_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q5_K.gguf) | Q5_K | 0.33GB |
| [tau-0.5B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q5_K_M.gguf) | Q5_K_M | 0.33GB |
| [tau-0.5B.Q5_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q5_1.gguf) | Q5_1 | 0.34GB |
| [tau-0.5B.Q6_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q6_K.gguf) | Q6_K | 0.36GB |
| [tau-0.5B.Q8_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q8_0.gguf) | Q8_0 | 0.47GB |

Original model description:

---
license: other
datasets:
- Locutusque/UltraTextbooks-2.0
inference:
  parameters:
    do_sample: true
    temperature: 0.8
    top_p: 0.95
    top_k: 40
    max_new_tokens: 250
    repetition_penalty: 1.1
language:
- en
- zh
---

# tau-0.5B

## Model Details

- **Model Name:** tau-0.5B
- **Base Model:** Qwen1.5-0.5B
- **Dataset:** UltraTextbooks-2.0
- **Model Size:** 0.5B parameters
- **Model Type:** Language Model
- **Training Procedure:** Further pre-training of Qwen1.5-0.5B on UltraTextbooks-2.0.

## Model Use

tau-0.5B is designed to be a general-purpose language model with enhanced capabilities in the domains of machine learning, mathematics, and coding. It can be used for a wide range of natural language processing tasks, such as:

- Educational question answering
- Text summarization
- Content generation for educational purposes
- Code understanding and generation
- Mathematical problem solving

The model's exposure to the diverse content in the UltraTextbooks-2.0 dataset makes it particularly well-suited for applications in educational technology and research.

## Training Data

tau-0.5B was further pre-trained on the UltraTextbooks-2.0 dataset, which is an expanded version of the original UltraTextbooks dataset. UltraTextbooks-2.0 incorporates additional high-quality synthetic and human-written textbooks from various sources on the Hugging Face platform, with a focus on increasing the diversity of content in the domains of machine learning, mathematics, and coding.

For more details on the dataset, please refer to the [UltraTextbooks-2.0 Dataset Card](https://huggingface.co/datasets/Locutusque/UltraTextbooks-2.0).

## Performance and Limitations

Refer to the [Evaluation](#evaluation) section for benchmark results. It is essential to note that the model may still exhibit biases or inaccuracies present in the training data. Users are encouraged to critically evaluate the model's outputs and report any issues to facilitate continuous improvement.

## Environmental Impact

The training of tau-0.5B required computational resources that contribute to the model's overall environmental impact. However, efforts were made to optimize the training process and minimize the carbon footprint.

## Ethical Considerations

tau-0.5B was trained on a diverse dataset that may contain biases and inaccuracies. Users should be aware of these potential limitations and use the model responsibly. The model should not be used for tasks that could cause harm or discriminate against individuals or groups.
## Evaluation

| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------------------------------|-------|------|-----:|--------|-----:|---|-----:|
|agieval_nous |N/A |none | 0|acc |0.2235|± |0.0434|
| | |none | 0|acc_norm|0.2141|± |0.0498|
| - agieval_aqua_rat | 1|none | 0|acc |0.1417|± |0.0219|
| | |none | 0|acc_norm|0.1535|± |0.0227|
| - agieval_logiqa_en | 1|none | 0|acc |0.2796|± |0.0176|
| | |none | 0|acc_norm|0.3118|± |0.0182|
| - agieval_lsat_ar | 1|none | 0|acc |0.2000|± |0.0264|
| | |none | 0|acc_norm|0.1696|± |0.0248|
| - agieval_lsat_lr | 1|none | 0|acc |0.2275|± |0.0186|
| | |none | 0|acc_norm|0.2020|± |0.0178|
| - agieval_lsat_rc | 1|none | 0|acc |0.1487|± |0.0217|
| | |none | 0|acc_norm|0.1561|± |0.0222|
| - agieval_sat_en | 1|none | 0|acc |0.2330|± |0.0295|
| | |none | 0|acc_norm|0.2039|± |0.0281|
| - agieval_sat_en_without_passage| 1|none | 0|acc |0.2524|± |0.0303|
| | |none | 0|acc_norm|0.1942|± |0.0276|
| - agieval_sat_math | 1|none | 0|acc |0.2227|± |0.0281|
| | |none | 0|acc_norm|0.1682|± |0.0253|

| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|---------------------------------------|-------|----------------|-----:|-----------|-----:|---|-----:|
|truthfulqa | 2|none | 0|acc |0.3931|± |0.0143|
|mmlu |N/A |none | 0|acc |0.3642|± |0.0040|
| - humanities |N/A |none | 5|acc |0.3320|± |0.0068|
| - formal_logic | 0|none | 5|acc |0.2619|± |0.0393|
| - high_school_european_history | 0|none | 5|acc |0.4909|± |0.0390|
| - high_school_us_history | 0|none | 5|acc |0.4167|± |0.0346|
| - high_school_world_history | 0|none | 5|acc |0.4641|± |0.0325|
| - international_law | 0|none | 5|acc |0.5537|± |0.0454|
| - jurisprudence | 0|none | 5|acc |0.4167|± |0.0477|
| - logical_fallacies | 0|none | 5|acc |0.2638|± |0.0346|
| - moral_disputes | 0|none | 5|acc |0.3757|± |0.0261|
| - moral_scenarios | 0|none | 5|acc |0.2402|± |0.0143|
| - philosophy | 0|none | 5|acc |0.3794|± |0.0276|
| - prehistory | 0|none | 5|acc |0.3426|± |0.0264|
| - professional_law | 0|none | 5|acc |0.3103|± |0.0118|
| - world_religions | 0|none | 5|acc |0.2807|± |0.0345|
| - other |N/A |none | 5|acc |0.4071|± |0.0088|
| - business_ethics | 0|none | 5|acc |0.4200|± |0.0496|
| - clinical_knowledge | 0|none | 5|acc |0.4491|± |0.0306|
| - college_medicine | 0|none | 5|acc |0.3873|± |0.0371|
| - global_facts | 0|none | 5|acc |0.3600|± |0.0482|
| - human_aging | 0|none | 5|acc |0.3498|± |0.0320|
| - management | 0|none | 5|acc |0.4854|± |0.0495|
| - marketing | 0|none | 5|acc |0.5470|± |0.0326|
| - medical_genetics | 0|none | 5|acc |0.4000|± |0.0492|
| - miscellaneous | 0|none | 5|acc |0.4291|± |0.0177|
| - nutrition | 0|none | 5|acc |0.4183|± |0.0282|
| - professional_accounting | 0|none | 5|acc |0.3582|± |0.0286|
| - professional_medicine | 0|none | 5|acc |0.3015|± |0.0279|
| - virology | 0|none | 5|acc |0.3494|± |0.0371|
| - social_sciences |N/A |none | 5|acc |0.4075|± |0.0088|
| - econometrics | 0|none | 5|acc |0.2719|± |0.0419|
| - high_school_geography | 0|none | 5|acc |0.5000|± |0.0356|
| - high_school_government_and_politics| 0|none | 5|acc |0.4611|± |0.0360|
| - high_school_macroeconomics | 0|none | 5|acc |0.4051|± |0.0249|
| - high_school_microeconomics | 0|none | 5|acc |0.3908|± |0.0317|
| - high_school_psychology | 0|none | 5|acc |0.4239|± |0.0212|
| - human_sexuality | 0|none | 5|acc |0.3893|± |0.0428|
| - professional_psychology | 0|none | 5|acc |0.3399|± |0.0192|
| - public_relations | 0|none | 5|acc |0.4455|± |0.0476|
| - security_studies | 0|none | 5|acc |0.3510|± |0.0306|
| - sociology | 0|none | 5|acc |0.5174|± |0.0353|
| - us_foreign_policy | 0|none | 5|acc |0.5500|± |0.0500|
| - stem |N/A |none | 5|acc |0.3276|± |0.0083|
| - abstract_algebra | 0|none | 5|acc |0.3000|± |0.0461|
| - anatomy | 0|none | 5|acc |0.2889|± |0.0392|
| - astronomy | 0|none | 5|acc |0.3487|± |0.0388|
| - college_biology | 0|none | 5|acc |0.3403|± |0.0396|
| - college_chemistry | 0|none | 5|acc |0.2600|± |0.0441|
| - college_computer_science | 0|none | 5|acc |0.3800|± |0.0488|
| - college_mathematics | 0|none | 5|acc |0.3300|± |0.0473|
| - college_physics | 0|none | 5|acc |0.2745|± |0.0444|
| - computer_security | 0|none | 5|acc |0.4300|± |0.0498|
| - conceptual_physics | 0|none | 5|acc |0.3447|± |0.0311|
| - electrical_engineering | 0|none | 5|acc |0.3931|± |0.0407|
| - elementary_mathematics | 0|none | 5|acc |0.3095|± |0.0238|
| - high_school_biology | 0|none | 5|acc |0.4161|± |0.0280|
| - high_school_chemistry | 0|none | 5|acc |0.2759|± |0.0314|
| - high_school_computer_science | 0|none | 5|acc |0.3100|± |0.0465|
| - high_school_mathematics | 0|none | 5|acc |0.3185|± |0.0284|
| - high_school_physics | 0|none | 5|acc |0.2517|± |0.0354|
| - high_school_statistics | 0|none | 5|acc |0.3009|± |0.0313|
| - machine_learning | 0|none | 5|acc |0.3036|± |0.0436|
|medqa_4options |Yaml |none | 5|acc |0.2687|± |0.0124|
| | |none | 5|acc_norm |0.2687|± |0.0124|
|logieval | 0|get-answer | 5|exact_match|0.3505|± |0.0120|
|gsm8k_cot | 3|strict-match | 8|exact_match|0.0690|± |0.0070|
| | |flexible-extract| 8|exact_match|0.1365|± |0.0095|

| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|------:|------|-----:|--------|-----:|---|-----:|
|arc_easy | 1|none | 25|acc |0.5981|± |0.0101|
| | |none | 25|acc_norm|0.5939|± |0.0101|
|arc_challenge| 1|none | 25|acc |0.2688|± |0.0130|
| | |none | 25|acc_norm|0.2969|± |0.0134|

## Usage Rights

Make sure to read Qwen's license before using this model.
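As a quick local sanity check, one of the quantized files from the table above can be run with `llama-cpp-python`; a minimal sketch follows. The local file path is a placeholder for whichever quantization you download, and the sampling values mirror the inference parameters listed in the original model description.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path: download e.g. tau-0.5B.Q4_K_M.gguf from this repo first
llm = Llama(model_path="./tau-0.5B.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "Explain the chain rule in one paragraph.",
    max_tokens=250,       # matches max_new_tokens in the card's inference parameters
    temperature=0.8,
    top_p=0.95,
    top_k=40,
    repeat_penalty=1.1,   # llama-cpp-python's name for repetition_penalty
)
print(out["choices"][0]["text"])
```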
microsoft/tapex-base
microsoft
"2023-05-03T03:48:52Z"
1,543
34
transformers
[ "transformers", "pytorch", "safetensors", "bart", "text2text-generation", "tapex", "table-question-answering", "en", "arxiv:2107.07653", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
table-question-answering
"2022-03-02T23:29:05Z"
---
language: en
tags:
- tapex
- table-question-answering
license: mit
---

# TAPEX (base-sized model)

TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).

## Model description

TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.

TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.

## Intended Uses

You can use the raw model for simulating neural SQL execution, i.e., employ TAPEX to execute a SQL query on a given table. However, the model is mostly meant to be fine-tuned on a supervised dataset. Currently TAPEX can be fine-tuned to tackle table question answering tasks and table fact verification tasks.

See the [model hub](https://huggingface.co/models?search=tapex) to look for fine-tuned versions on a task that interests you.

### How to Use

Here is how to use this model in transformers:

```python
from transformers import TapexTokenizer, BartForConditionalGeneration
import pandas as pd

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-base")

data = {
    "year": [1896, 1900, 1904, 2004, 2008, 2012],
    "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)

# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "select year where city = beijing"
encoding = tokenizer(table=table, query=query, return_tensors="pt")

outputs = model.generate(**encoding)

print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# ['2008']
```

### How to Fine-tune

Please find the fine-tuning script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).

### BibTeX entry and citation info

```bibtex
@inproceedings{
    liu2022tapex,
    title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
    author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=O50443AsCP}
}
```
intfloat/e5-small-unsupervised
intfloat
"2023-07-27T03:55:32Z"
1,543
0
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "Sentence Transformers", "sentence-similarity", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2023-01-31T03:03:08Z"
---
tags:
- sentence-transformers
- Sentence Transformers
- sentence-similarity
language:
- en
license: mit
---

# E5-small-unsupervised

**This model is similar to [e5-small](https://huggingface.co/intfloat/e5-small) but without supervised fine-tuning.**

[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022

This model has 12 layers and the embedding size is 384.

## Usage

Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.

```python
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]

tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small-unsupervised')
model = AutoModel.from_pretrained('intfloat/e5-small-unsupervised')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```

## Training Details

Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).

## Benchmark Evaluation

Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).

## Support for Sentence Transformers

Below is an example for usage with sentence_transformers.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/e5-small-unsupervised')
input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```

Package requirements:

`pip install sentence_transformers~=2.2.2`

Contributors: [michaelfeil](https://huggingface.co/michaelfeil)

## FAQ

**1. Do I need to add the prefix "query: " and "passage: " to input texts?**

Yes, this is how the model is trained; otherwise you will see a performance degradation.

Here are some rules of thumb:

- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA and ad-hoc information retrieval.
- Use the "query: " prefix for symmetric tasks such as semantic similarity and paraphrase retrieval.
- Use the "query: " prefix if you want to use embeddings as features, such as linear probing classification or clustering.

**2. Why are my reproduced results slightly different from those reported in the model card?**

Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.

## Citation

If you find our paper or models helpful, please consider citing as follows:

```
@article{wang2022text,
    title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
    author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
    journal={arXiv preprint arXiv:2212.03533},
    year={2022}
}
```

## Limitations

This model only works for English texts. Long texts will be truncated to at most 512 tokens.
Melonie/text_to_image_finetuned
Melonie
"2023-07-26T16:39:41Z"
1,543
7
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
"2023-07-26T16:18:50Z"
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA text2image fine-tuning - Melonie/pokemon-lora

These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
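As a rough sketch, the adapter can presumably be applied on top of the base model with diffusers' LoRA loader; the exact weight filename inside the repo is an assumption (diffusers looks for its standard LoRA weight file by default), and the prompt is an illustrative placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the LoRA was trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adapter weights from this repo (assumes the standard
# diffusers LoRA weight file is present at the repo root)
pipe.load_lora_weights("Melonie/text_to_image_finetuned")

image = pipe("a cute green pokemon with big eyes", num_inference_steps=30).images[0]
image.save("pokemon_lora_sample.png")
```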
DTAI-KULeuven/robbert-2023-dutch-base
DTAI-KULeuven
"2023-12-05T15:25:15Z"
1,543
4
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "Dutch", "Flemish", "RoBERTa", "RobBERT", "BERT", "nl", "dataset:oscar", "dataset:dbrd", "dataset:lassy-ud", "dataset:europarl-mono", "dataset:conll2002", "arxiv:2001.06286", "arxiv:1907.11692", "arxiv:2310.03477", "arxiv:1909.11942", "arxiv:2211.08192", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-12-05T12:04:22Z"
---
language: nl
thumbnail: https://github.com/iPieter/RobBERT/raw/master/res/robbert_2023_logo.png
tags:
- Dutch
- Flemish
- RoBERTa
- RobBERT
- BERT
license: mit
datasets:
- oscar
- dbrd
- lassy-ud
- europarl-mono
- conll2002
widget:
- text: Hallo, mijn naam is RobBERT-2023. Het <mask> taalmodel van UGent en KU Leuven.
---

<p align="center">
<img src="https://github.com/iPieter/RobBERT/raw/master/res/robbert_2023_logo.png" alt="RobBERT-2023: A Dutch RoBERTa-based Language Model" width="75%">
</p>

# RobBERT-2023: Keeping Dutch Language Models Up-To-Date

RobBERT-2023 is the 2023 release of the [Dutch RobBERT model](https://pieter.ai/robbert/). It is a new version of the original [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) model, trained on the 2023 version of the OSCAR corpus. We release a base model, but this time we also release an additional large model with 355M parameters (3x the size of robbert-2022-base). We are particularly proud of the performance of both models, surpassing the robbert-v2-base and robbert-2022-base models by +2.9 and +0.9 points, respectively, on the [DUMB benchmark](https://dumbench.nl) from GroNLP. In addition, we surpass BERTje by +18.6 points with `robbert-2023-dutch-large`.

The original RobBERT model was released in January 2020. Dutch has evolved a lot since then; for example, the COVID-19 pandemic introduced a wide range of new words that were suddenly used daily. Also, many other world facts that the original model considered true have since changed. To account for this and other changes in usage, we release a new Dutch BERT model trained on data from 2022: RobBERT 2023.

More in-depth information about RobBERT-2023 can be found in our [blog post](https://pieter.ai/robbert-2023/), [the original RobBERT paper](https://arxiv.org/abs/2001.06286) and [the RobBERT Github repository](https://github.com/iPieter/RobBERT).

## How to use

RobBERT-2023 and RobBERT both use the [RoBERTa](https://arxiv.org/abs/1907.11692) architecture and pre-training, but with a Dutch tokenizer and training data. RoBERTa is the robustly optimized English BERT model, making it even more powerful than the original BERT model. Given this same architecture, RobBERT can easily be fine-tuned and used for inference with [code to finetune RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html) models and most code used for BERT models, e.g. as provided by the [HuggingFace Transformers](https://huggingface.co/transformers/) library.

By default, RobBERT-2023 has the masked language model head used in training. This can be used as a zero-shot way to fill masks in sentences. It can be tested out for free on [RobBERT's Hosted Inference API on Huggingface](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=De+hoofdstad+van+Belgi%C3%AB+is+%3Cmask%3E.). You can also create a new prediction head for your own task by using any of HuggingFace's [RoBERTa-runners](https://huggingface.co/transformers/v2.7.0/examples.html#language-model-training) or [their fine-tuning notebooks](https://huggingface.co/transformers/v4.1.1/notebooks.html), changing the model name to `DTAI-KULeuven/robbert-2023-dutch-base`.
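For the zero-shot mask-filling use mentioned above, a minimal sketch with the `fill-mask` pipeline follows; the example sentence is taken directly from this card's widget. The loading snippet after it then shows how to attach a classification head instead.

```python
from transformers import pipeline

# Fill-mask pipeline using the model's pre-trained masked LM head
unmasker = pipeline("fill-mask", model="DTAI-KULeuven/robbert-2023-dutch-base")

# Example sentence from this card's widget; <mask> is the token to predict
print(unmasker("Hallo, mijn naam is RobBERT-2023. Het <mask> taalmodel van UGent en KU Leuven."))
```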
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("DTAI-KULeuven/robbert-2023-dutch-base")
model = AutoModelForSequenceClassification.from_pretrained("DTAI-KULeuven/robbert-2023-dutch-base")
```

You can then use most of [HuggingFace's BERT-based notebooks](https://huggingface.co/transformers/v4.1.1/notebooks.html) for fine-tuning RobBERT-2023 on your type of Dutch language dataset.

## Comparison of Available Dutch BERT models

There is a wide variety of Dutch BERT-based models available for fine-tuning on your tasks. Here's a quick summary to find the one that suits your needs:

- [DTAI-KULeuven/robbert-2023-dutch-large](https://huggingface.co/DTAI-KULeuven/robbert-2023-dutch-large): RobBERT-2023 large is the first Dutch large (355M parameters) model. It is trained on OSCAR2023 with a new tokenizer, using [our Tik-to-Tok method](https://arxiv.org/pdf/2310.03477.pdf).
- **(this model)** [DTAI-KULeuven/robbert-2023-dutch-base](https://huggingface.co/DTAI-KULeuven/robbert-2023-dutch-base): RobBERT-2023 is a new RobBERT model trained on the OSCAR2023 dataset with a completely new tokenizer. It is helpful for tasks that rely on words and/or information about more recent events.
- [DTAI-KULeuven/robbert-2022-dutch-base](https://huggingface.co/DTAI-KULeuven/robbert-2022-dutch-base): RobBERT-2022 is a further pre-trained RobBERT model on the OSCAR2022 dataset. It is helpful for tasks that rely on words and/or information about more recent events.
- [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base): The RobBERT model has for years been the best performing BERT-like model for most language tasks. It is trained on a large Dutch webcrawled dataset (OSCAR) and uses the superior [RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta) architecture, which robustly optimized the original [BERT model](https://huggingface.co/docs/transformers/model_doc/bert).
- [DTAI-KULeuven/robbertje-1-gb-merged](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-mergedRobBERTje): The RobBERTje model is a distilled version of RobBERT, about half the size and four times faster to perform inference on. This can help deploy more scalable language models for your language task.

There's also the [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) "BERTje" model. This model uses the outdated basic BERT model, and is trained on a smaller corpus of clean Dutch texts. Thanks to RobBERT's more recent architecture as well as its larger and more real-world-like training corpus, most researchers and practitioners seem to achieve higher performance on their language tasks with the RobBERT model.

## How to Replicate Our Paper Experiments

Replicating our paper experiments is [described in detail on the RobBERT repository README](https://github.com/iPieter/RobBERT#how-to-replicate-our-paper-experiments). The pretraining depends on the model; for RobBERT-2023 it is based on [our Tik-to-Tok method](https://arxiv.org/pdf/2310.03477.pdf).

## Name Origin of RobBERT

Most BERT-like models have the word *BERT* in their name (e.g. [RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html), [ALBERT](https://arxiv.org/abs/1909.11942), [CamemBERT](https://camembert-model.fr/), and [many, many others](https://huggingface.co/models?search=bert)). As such, we queried our original RobBERT model using its masked language model to name itself *\<mask\>bert* using [all](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=Mijn+naam+is+%3Cmask%3Ebert.) [kinds](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=Hallo%2C+ik+ben+%3Cmask%3Ebert.) [of](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=Leuk+je+te+ontmoeten%2C+ik+heet+%3Cmask%3Ebert.) [prompts](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=Niemand+weet%2C+niemand+weet%2C+dat+ik+%3Cmask%3Ebert+heet.), and it consistently called itself RobBERT. We thought it was really quite fitting, given that RobBERT is a [*very* Dutch name](https://en.wikipedia.org/wiki/Robbert) *(and thus clearly a Dutch language model)*, and additionally has a high similarity to its root architecture, namely [RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html).

Since *"rob"* is the Dutch word for a seal, we decided to draw a seal and dress it up like [Bert from Sesame Street](https://muppet.fandom.com/wiki/Bert) for the [RobBERT logo](https://github.com/iPieter/RobBERT/blob/master/res/robbert_logo.png).

## Credits and citation

The suite of RobBERT models is created by [Pieter Delobelle](https://people.cs.kuleuven.be/~pieter.delobelle), [Thomas Winters](https://thomaswinters.be), [Bettina Berendt](https://people.cs.kuleuven.be/~bettina.berendt/) and [François Remy](http://fremycompany.com).

If you would like to cite our paper or model, you can use the following BibTeX:

```
@misc{delobelle2023robbert2023conversion,
    author = {Delobelle, P and Remy, F},
    month = {Sep},
    organization = {Antwerp, Belgium},
    title = {RobBERT-2023: Keeping Dutch Language Models Up-To-Date at a Lower Cost Thanks to Model Conversion},
    year = {2023},
    startyear = {2023},
    startmonth = {Sep},
    startday = {22},
    finishyear = {2023},
    finishmonth = {Sep},
    finishday = {22},
    venue = {The 33rd Meeting of Computational Linguistics in The Netherlands (CLIN 33)},
    day = {22},
    publicationstatus = {published},
    url = {https://clin33.uantwerpen.be/abstract/robbert-2023-keeping-dutch-language-models-up-to-date-at-a-lower-cost-thanks-to-model-conversion/}
}

@inproceedings{delobelle2022robbert2022,
    doi = {10.48550/ARXIV.2211.08192},
    url = {https://arxiv.org/abs/2211.08192},
    author = {Delobelle, Pieter and Winters, Thomas and Berendt, Bettina},
    keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {RobBERT-2022: Updating a Dutch Language Model to Account for Evolving Language Use},
    venue = {arXiv},
    year = {2022},
}

@inproceedings{delobelle2020robbert,
    title = "{R}ob{BERT}: a {D}utch {R}o{BERT}a-based {L}anguage {M}odel",
    author = "Delobelle, Pieter and Winters, Thomas and Berendt, Bettina",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.292",
    doi = "10.18653/v1/2020.findings-emnlp.292",
    pages = "3255--3265"
}
```
lmg-anon/vntl-llama3-8b-gguf
lmg-anon
"2024-06-15T17:33:02Z"
1,543
4
null
[ "gguf", "translation", "ja", "en", "dataset:lmg-anon/VNTL-v3.1-1k", "dataset:lmg-anon/VNTL-Chat", "license:llama3", "region:us" ]
translation
"2024-06-13T17:17:30Z"
---
license: llama3
datasets:
- lmg-anon/VNTL-v3.1-1k
- lmg-anon/VNTL-Chat
language:
- ja
- en
pipeline_tag: translation
---

This repository contains some GGUF quantizations of the merge of the [VNTL LLaMA 3 8B qlora](https://huggingface.co/lmg-anon/vntl-llama3-8b-qlora).

This model also includes a new "chat mode", which works mostly for Japanese grammar questions.

## Translation Prompt

This is a prompt example for translation:

```
<<METADATA>>
[character] Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
[character] Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female
<<TRANSLATE>>
<<JAPANESE>>
[桜乃]: 『……ごめん』
<<ENGLISH>>
[Sakuno]: 『... Sorry.』<|end_of_text|>
<<JAPANESE>>
[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」
<<ENGLISH>>
```

The generated translation for that prompt, with temperature 0, is:

```
[Shingo]: 「No, I know this isn't the best way to say it, but I'm glad you got lost. You're so cute that I was worried about you.」
```

## Chat Prompt

This is a prompt example for chat:

```
<<CHAT>>
<<HUMAN>>
Hi! Can you break this down to me? Like, what is the meaning of "桜乃は可愛いから"?
"ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」"
<<LLM>>
```

The generated response for that prompt, with temperature 0, is:

```
Certainly! The phrase "桜乃は可愛いから" literally translates to "Because Sakura-no is cute." In Japanese culture, there is a strong emphasis on physical attractiveness and cuteness, especially when it comes to young girls or women. This phrase suggests that the speaker finds Sakura-no (presumably a girl) to be very attractive and endearing in appearance.
```
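A minimal sketch for reproducing the translation example locally with `llama-cpp-python` follows. The GGUF filename is a placeholder for whichever quantization you download from this repo, and the stop marker simply cuts generation off before the next `<<JAPANESE>>` block.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path for a quantized file downloaded from this repo
llm = Llama(model_path="./vntl-llama3-8b.Q4_K_M.gguf", n_ctx=2048)

prompt = (
    "<<METADATA>>\n"
    "[character] Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)\n"
    "<<TRANSLATE>>\n"
    "<<JAPANESE>>\n"
    "[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ」\n"
    "<<ENGLISH>>\n"
)
out = llm(prompt, max_tokens=128, temperature=0, stop=["<<JAPANESE>>"])
print(out["choices"][0]["text"])
```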
timm/tiny_vit_11m_224.dist_in22k_ft_in1k
timm
"2023-09-01T18:12:46Z"
1,542
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2207.10666", "license:apache-2.0", "region:us" ]
image-classification
"2023-09-01T16:04:17Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---

# Model card for tiny_vit_11m_224.dist_in22k_ft_in1k

A TinyViT image classification model. Pretrained on ImageNet-22k with distillation and fine-tuned on ImageNet-1k by paper authors.

## Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
    - Params (M): 11.0
    - GMACs: 1.9
    - Activations (M): 10.7
    - Image size: 224 x 224
- **Papers:**
    - TinyViT: Fast Pretraining Distillation for Small Vision Transformers: https://arxiv.org/abs/2207.10666
- **Original:** https://github.com/microsoft/Cream/tree/main/TinyViT
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k

## Model Usage

### Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import torch  # needed for torch.topk below
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('tiny_vit_11m_224.dist_in22k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tiny_vit_11m_224.dist_in22k_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 56, 56])
    #  torch.Size([1, 128, 28, 28])
    #  torch.Size([1, 256, 14, 14])
    #  torch.Size([1, 448, 7, 7])
    print(o.shape)
```

### Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tiny_vit_11m_224.dist_in22k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 448, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Citation

```bibtex
@InProceedings{tiny_vit,
    title={TinyViT: Fast Pretraining Distillation for Small Vision Transformers},
    author={Wu, Kan and Zhang, Jinnian and Peng, Houwen and Liu, Mengchen and Xiao, Bin and Fu, Jianlong and Yuan, Lu},
    booktitle={European conference on computer vision (ECCV)},
    year={2022}
}
```
SinpxAI/Vicuna-7B-v1.3-GGUF
SinpxAI
"2024-03-07T13:27:44Z"
1,541
0
transformers
[ "transformers", "gguf", "llama", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
"2024-03-07T13:11:54Z"
Entry not found
vcadillo/glm-4v-9b-4-bits
vcadillo
"2024-06-16T18:28:09Z"
1,541
2
transformers
[ "transformers", "safetensors", "chatglm", "feature-extraction", "custom_code", "arxiv:2311.03079", "4-bit", "bitsandbytes", "region:us" ]
feature-extraction
"2024-06-08T01:31:53Z"
# GLM-4V-9B-4bits

## Quick Start

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

tokenizer = AutoTokenizer.from_pretrained("vcadillo/glm-4v-9b-4-bits", trust_remote_code=True)

query = 'describe this image'
image = Image.open("your image").convert('RGB')  # replace "your image" with the path to your image file
inputs = tokenizer.apply_chat_template([{"role": "user", "image": image, "content": query}],
                                       add_generation_prompt=True, tokenize=True, return_tensors="pt",
                                       return_dict=True)  # chat mode

inputs = inputs.to(device)
model = AutoModelForCausalLM.from_pretrained(
    "vcadillo/glm-4v-9b-4-bits",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map='auto',
).eval()

gen_kwargs = {"max_length": 2500, "do_sample": True, "top_k": 1}
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
    outputs = outputs[:, inputs['input_ids'].shape[1]:]
    print(tokenizer.decode(outputs[0]))
```

## License

The use of the GLM-4 model weights must comply with the [LICENSE](LICENSE).

## Citation

If you find our work helpful, please consider citing the following papers.

```
@article{zeng2022glm,
    title={Glm-130b: An open bilingual pre-trained model},
    author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others},
    journal={arXiv preprint arXiv:2210.02414},
    year={2022}
}
```

```
@inproceedings{du2022glm,
    title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
    author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
    booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
    pages={320--335},
    year={2022}
}
```

```
@misc{wang2023cogvlm,
    title={CogVLM: Visual Expert for Pretrained Language Models},
    author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
    year={2023},
    eprint={2311.03079},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
LyliaEngine/ponyRealism_v21MainVAE
LyliaEngine
"2024-06-23T14:41:05Z"
1,541
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:LyliaEngine/Pony_Diffusion_V6_XL", "license:cdla-permissive-2.0", "region:us" ]
text-to-image
"2024-06-23T13:27:39Z"
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
    score_9, score_8_up, score_7_up, BREAK, innocent and alluring, long hair, alternative vibe, beautiful eyes, freckles, medium breasts, subtle cleavage, tight body, slutty, candid picture, (cozy bedroom), off-shoulder shirt, dynamic angle, vibrant lighting, high contrast, dramatic shadows, highly detailed, detailed skin, depth of field, film grain
  parameters:
    negative_prompt: score_1, score_2, score_3, text
  output:
    url: images/00000-253699598.jpeg
- text: >-
    score_9, score_8_up, score_7_up, BREAK, woman, bikini, skin pores, little smirk, sunglasses, pool, sunny, 80s vibe, depth of field, film grain
  parameters:
    negative_prompt: score 1, score 2, score 3, text, penetration
  output:
    url: images/00021-3188932120.jpeg
base_model: LyliaEngine/Pony_Diffusion_V6_XL
instance_prompt: None
license: cdla-permissive-2.0
---

# ponyRealism_v21MainVAE

<Gallery />

## Model description

⚠️ Please read the description.

🏋️‍♀️ For training LoRAs using the model, go [here](https://civitai.com/articles/5545).

♻️ v2.1 is a major rework of the model. The Main version has the most detail at the cost of sampler compatibility; the Alternative version works on most samplers at the cost of some detail.

ℹ️ This version leans more towards Pony for more overall variety, but maintains the realistic quality.

ℹ️ Example images are without ADetailer.

📍 Recommended parameters for the MAIN version:

- Samplers:
    - DPM++ SDE
    - DPM++ 2S a Karras
    - DPM++ SDE Karras
    - DPM++ 2M SDE Karras
    - DPM++ 2M SDE Exponential
    - DPM2 a
    - DPM++ 2S a
    - DPM++ 3M
- I DO NOT recommend the use of the following samplers (weak detail): Euler a, Euler
- Steps: 20 - 30
- CFG: 6-7
- Clip Skip: 2
- Resolution: I have been using 1280x768 | 768x1280
- I recommend using female/male instead of woman/man

ℹ️ Samplers not listed in the recommendations won't work or need some tweaking in CFG or steps.

This version does not need any trigger words, and like most Pony models it uses the following prompt style:

- Positive: score_9, score_8_up, score_7_up, BREAK
- Negative: score_4, score_5, score_6

💪 The model has more general variety, creativity, and better prompt adherence. At longer distances, ADETAILER is recommended.

💕 This is probably going to be the last version of the model for the moment, as I'm quite happy with the result. Thanks for downloading, and I hope you enjoy it :)

☕ If you want to support me, feel free to do so on Ko-Fi :)

⚡️ BUZZ FOR BEST IMAGES ⚡️

## Source

https://civitai.com/models/372465/pony-realism?modelVersionId=534642

## Credit

https://civitai.com/user/ZyloO

## Trigger words

You should use `None` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/LyliaEngine/ponyRealism_v21MainVAE/tree/main) them in the Files & versions tab.
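To apply the recommended parameters above in code, a rough diffusers sketch might look like the following. This is illustrative only: it assumes the downloaded safetensors checkpoint is loadable as a full SDXL pipeline via `from_single_file` (the card's front matter is ambiguous between a full checkpoint and a LoRA), and the local path and prompt are placeholders.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder path: download the checkpoint from the Files & versions tab first
pipe = StableDiffusionXLPipeline.from_single_file(
    "./ponyRealism_v21MainVAE.safetensors", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "score_9, score_8_up, score_7_up, BREAK, female, sunglasses, pool, sunny, film grain",
    negative_prompt="score_4, score_5, score_6",
    width=768, height=1280,        # one of the recommended resolutions
    num_inference_steps=25,        # within the suggested 20-30 range
    guidance_scale=6.5,            # CFG 6-7 as recommended above
).images[0]
image.save("pony_realism_sample.png")
```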
daekeun-ml/phi-2-ko-v0.1
daekeun-ml
"2024-02-12T07:34:49Z"
1,540
22
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "ko", "en", "dataset:wikimedia/wikipedia", "dataset:maywell/korean_textbooks", "dataset:nampdn-ai/tiny-codes", "dataset:Open-Orca/OpenOrca", "license:cc-by-sa-3.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-07T23:37:33Z"
---
library_name: transformers
license: cc-by-sa-3.0
datasets:
- wikimedia/wikipedia
- maywell/korean_textbooks
- nampdn-ai/tiny-codes
- Open-Orca/OpenOrca
language:
- ko
- en
inference: false
---

# phi-2-ko-v0.1

## Model Details

This model is a Korean-specific model trained from phi-2 by adding a Korean tokenizer and Korean data. (English is also available.)

Although phi-2 performs very well, it does not support the Korean language and does not have a tokenizer trained on a Korean corpus, so tokenizing Korean text uses many times more tokens than English text. To overcome these limitations, I trained the model using an open-license Korean corpus and some English corpus.

The reasons for using the English corpus together are as follows:

1. The goal is to preserve the excellent performance of the existing model by preventing catastrophic forgetting.
2. Mixing English and Korean prompts usually produces better results than using all prompts in Korean.

Since my role is not as a working developer, but as a solutions architect helping customers with quick PoCs/prototypes, and I was limited by the AWS GPU resources available, I only trained with 5GB of data instead of hundreds of GB of massive data.

### Vocab Expansion

| Model Name | Vocabulary Size | Description |
| --- | --- | --- |
| Original phi-2 | 50,295 | BBPE (Byte-level BPE) |
| **phi-2-ko** | 66,676 | BBPE. Added Korean vocab and merges |

**Tokenizing "아마존 세이지메이커"**

| Model | # of tokens | Tokens |
| --- | --- | --- |
| Original phi-2 | 25 | `[168, 243, 226, 167, 100, 230, 168, 94, 112, 23821, 226, 116, 35975, 112, 168, 100, 222, 167, 102, 242, 35975, 112, 168, 119, 97]` |
| **phi-2-ko** | 6 | `[57974, 51299, 50617, 51005, 52027, 51446]` |

### Continued pre-training

The dataset used for training is as follows. To prevent catastrophic forgetting, I included some English corpus as training data.

- Wikipedia Korean dataset (https://huggingface.co/datasets/wikimedia/wikipedia)
- Massive Korean synthetic dataset (https://huggingface.co/datasets/maywell/korean_textbooks)
- Tiny code dataset (https://huggingface.co/datasets/nampdn-ai/tiny-codes)
- OpenOrca dataset (https://huggingface.co/datasets/Open-Orca/OpenOrca)
- Some of the various sentences I wrote myself (personal blog, chat, etc.)

Note that performance is not guaranteed since only a small number of datasets were used for the experiment. The number of samples in the training set is around 5 million after tokenization. For distributed training, all weights were trained without adapter techniques, and sharding parallelization was performed with ZeRO-2. The presets are as follows.

Since this is a model that has not been fine-tuned, it is recommended to perform fine-tuning such as instruction tuning/alignment tuning according to your use case.

```json
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "bf16": {
        "enabled": "auto"
    },
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },
    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    },
    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": true,
        "allgather_bucket_size": 2e8,
        "overlap_comm": true,
        "reduce_scatter": true,
        "reduce_bucket_size": 2e8,
        "contiguous_gradients": true,
        "cpu_offload": true
    },
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto"
}
```

Some hyperparameters are listed below.

```
batch_size: 2
num_epochs: 1
learning_rate: 3e-4
gradient_accumulation_steps: 8
lr_scheduler_type: "linear"
group_by_length: False
```

## How to Get Started with the Model

```python
import torch
from transformers import PhiForCausalLM, AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("daekeun-ml/phi-2-ko-v0.1", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/phi-2-ko-v0.1", trust_remote_code=True)

# Korean
inputs = tokenizer("머신러닝은 ", return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)

# English
inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

### References

- Base model: [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)

## Notes

### License

cc-by-sa 3.0; the license of phi-2 is MIT, but I considered the licensing of the dataset used for training.

### Caution

This model was created as a personal experiment, unrelated to the organization I work for. The model may not operate correctly because separate verification was not performed. Please be careful unless it is for personal experimentation or PoC (Proof of Concept)!
TheBloke/Chronos-Hermes-13b-v2-GGUF
TheBloke
"2023-09-27T13:02:40Z"
1,539
12
transformers
[ "transformers", "gguf", "llama", "llama-2", "pytorch", "chatbot", "storywriting", "generalist-model", "base_model:Austism/chronos-hermes-13b-v2", "license:llama2", "text-generation-inference", "region:us" ]
null
"2023-09-08T12:41:26Z"
---
license: llama2
tags:
- llama
- llama-2
- pytorch
- chatbot
- storywriting
- generalist-model
model_name: Chronos Hermes 13B v2
inference: false
model_creator: Austism
model_link: https://huggingface.co/Austism/chronos-hermes-13b-v2
model_type: llama
quantized_by: TheBloke
base_model: Austism/chronos-hermes-13b-v2
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Chronos Hermes 13B v2 - GGUF
- Model creator: [Austism](https://huggingface.co/Austism)
- Original model: [Chronos Hermes 13B v2](https://huggingface.co/Austism/chronos-hermes-13b-v2)

## Description

This repo contains GGUF format model files for [Austism's Chronos Hermes 13B v2](https://huggingface.co/Austism/chronos-hermes-13b-v2).

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates. (A minimal sketch that reads this metadata header is included at the end of this README.)

Here is a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with full GPU accel across multiple platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->

<!-- repositories-available start -->
## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/Austism/chronos-hermes-13b-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML)
* [Austism's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Austism/chronos-hermes-13b-v2)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9).

They are now also compatible with many third-party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [chronos-hermes-13b-v2.Q2_K.gguf](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF/blob/main/chronos-hermes-13b-v2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [chronos-hermes-13b-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF/blob/main/chronos-hermes-13b-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [chronos-hermes-13b-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF/blob/main/chronos-hermes-13b-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [chronos-hermes-13b-v2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF/blob/main/chronos-hermes-13b-v2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [chronos-hermes-13b-v2.Q4_0.gguf](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF/blob/main/chronos-hermes-13b-v2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [chronos-hermes-13b-v2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF/blob/main/chronos-hermes-13b-v2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [chronos-hermes-13b-v2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF/blob/main/chronos-hermes-13b-v2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [chronos-hermes-13b-v2.Q5_0.gguf](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF/blob/main/chronos-hermes-13b-v2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [chronos-hermes-13b-v2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF/blob/main/chronos-hermes-13b-v2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [chronos-hermes-13b-v2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF/blob/main/chronos-hermes-13b-v2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [chronos-hermes-13b-v2.Q6_K.gguf](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF/blob/main/chronos-hermes-13b-v2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [chronos-hermes-13b-v2.Q8_0.gguf](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF/blob/main/chronos-hermes-13b-v2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.

For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.
```
./main -t 10 -ngl 32 -m chronos-hermes-13b-v2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`. If offloading all layers to GPU, set `-t 1`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model from Python using ctransformers

#### First install the package

```bash
# Base ctransformers with no GPU acceleration
pip install "ctransformers>=0.2.24"
# Or with CUDA GPU acceleration
pip install "ctransformers[cuda]>=0.2.24"
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
```

#### Simple example code to load one of these GGUF models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Chronos-Hermes-13b-v2-GGUF", model_file="chronos-hermes-13b-v2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python or ctransformers with LangChain (an end-to-end sketch of the ctransformers route is included at the end of this README):

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Austism's Chronos Hermes 13B v2

# chronos-hermes-13b-v2

([chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2) + [Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)) 75/25 merge

This offers the imaginative writing style of chronos while still retaining coherency and being capable. Outputs are long and utilize exceptional prose. Supports a maximum context length of 4096.

- [GPTQ Quantized Weights](https://huggingface.co/Austism/chronos-hermes-13b-v2-GPTQ)

## Prompt Format

The model follows the Alpaca prompt format:
```
### Instruction:
<prompt>

### Response:
```

This is an adaptation of [chronos-hermes-13b](https://huggingface.co/Austism/chronos-hermes-13b) for llama-2.

<!-- original-model-card end -->
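## Appendix: inspecting the GGUF metadata header

As referenced in the About GGUF section, here is a minimal sketch that reads the fixed GGUF header to show the format's self-describing metadata. This is illustrative only, not part of TheBloke's tooling; it assumes a GGUF v2+ file (little-endian `GGUF` magic, uint32 version, then uint64 tensor and metadata key/value counts) and uses the Q4_K_M file from the table above as an example path.

```python
import struct

# Read just the fixed-size GGUF header; assumes a GGUF v2+ file (little-endian).
with open("chronos-hermes-13b-v2.Q4_K_M.gguf", "rb") as f:
    magic = f.read(4)
    assert magic == b"GGUF", f"not a GGUF file: {magic!r}"
    # uint32 version, uint64 tensor count, uint64 metadata key/value count
    version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))

print(f"GGUF v{version}: {n_tensors} tensors, {n_kv} metadata key/value pairs")
```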
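## Appendix: LangChain + ctransformers sketch

And here is a hedged end-to-end sketch of the LangChain + ctransformers route linked above, wiring one of these GGUF files into the model's Alpaca prompt template. Class and parameter names follow the LangChain/ctransformers docs of the time; the import path may differ in newer LangChain versions (e.g. `langchain_community.llms`), so treat this as a starting point rather than a verified recipe.

```python
from langchain.llms import CTransformers
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Load the GGUF file via ctransformers; set gpu_layers to 0 for CPU-only inference.
llm = CTransformers(
    model="TheBloke/Chronos-Hermes-13b-v2-GGUF",
    model_file="chronos-hermes-13b-v2.Q4_K_M.gguf",
    model_type="llama",
    config={"max_new_tokens": 256, "temperature": 0.7, "gpu_layers": 50},
)

# Wrap the model's Alpaca prompt format in a LangChain prompt template.
prompt = PromptTemplate.from_template(
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(instruction="Write a short scene about a clockmaker."))
```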
Chrisisis/5FpnXrSQT6kkkYFGTE8z32JXMBYNmxReTcoqyRjEiiACWhDr_vgg
Chrisisis
"2024-02-24T08:27:36Z"
1,539
0
keras
[ "keras", "region:us" ]
null
"2024-02-05T18:40:11Z"
Entry not found
Alibaba-NLP/gte-Qwen1.5-7B-instruct
Alibaba-NLP
"2024-06-05T03:26:29Z"
1,539
84
sentence-transformers
[ "sentence-transformers", "safetensors", "qwen2", "text-generation", "mteb", "transformers", "Qwen", "sentence-similarity", "custom_code", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-04-20T04:24:58Z"
--- tags: - mteb - sentence-transformers - transformers - Qwen - sentence-similarity license: apache-2.0 model-index: - name: gte-qwen1.5-7b results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 83.16417910447761 - type: ap value: 49.37655308937739 - type: f1 value: 77.52987230462615 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 96.6959 - type: ap value: 94.90885739242472 - type: f1 value: 96.69477648952649 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 62.168 - type: f1 value: 60.411431278343755 - task: type: Retrieval dataset: type: mteb/arguana name: MTEB ArguAna config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 36.415 - type: map_at_10 value: 53.505 - type: map_at_100 value: 54.013 - type: map_at_1000 value: 54.013 - type: map_at_3 value: 48.459 - type: map_at_5 value: 51.524 - type: mrr_at_1 value: 36.842000000000006 - type: mrr_at_10 value: 53.679 - type: mrr_at_100 value: 54.17999999999999 - type: mrr_at_1000 value: 54.17999999999999 - type: mrr_at_3 value: 48.613 - type: mrr_at_5 value: 51.696 - type: ndcg_at_1 value: 36.415 - type: ndcg_at_10 value: 62.644999999999996 - type: ndcg_at_100 value: 64.60000000000001 - type: ndcg_at_1000 value: 64.60000000000001 - type: ndcg_at_3 value: 52.44799999999999 - type: ndcg_at_5 value: 57.964000000000006 - type: precision_at_1 value: 36.415 - type: precision_at_10 value: 9.161 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 21.337 - type: precision_at_5 value: 15.476999999999999 - type: recall_at_1 value: 36.415 - type: recall_at_10 value: 91.607 - type: recall_at_100 value: 99.644 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 64.011 - type: recall_at_5 value: 77.383 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 56.40183100758549 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 51.44814171373338 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 66.00208703259058 - type: mrr value: 78.95165545442553 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 82.12591694410098 - type: cos_sim_spearman value: 81.11570369802254 - type: euclidean_pearson value: 80.91709076204458 - type: euclidean_spearman value: 81.11570369802254 - type: manhattan_pearson value: 80.71719561024605 - type: manhattan_spearman value: 81.21510355327713 - task: type: Classification dataset: type: mteb/banking77 
name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 81.67857142857142 - type: f1 value: 80.84103272994895 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 49.008657468552016 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 45.05901064421589 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 32.694 - type: map_at_10 value: 43.895 - type: map_at_100 value: 45.797 - type: map_at_1000 value: 45.922000000000004 - type: map_at_3 value: 40.141 - type: map_at_5 value: 42.077 - type: mrr_at_1 value: 40.2 - type: mrr_at_10 value: 50.11 - type: mrr_at_100 value: 51.101 - type: mrr_at_1000 value: 51.13100000000001 - type: mrr_at_3 value: 47.735 - type: mrr_at_5 value: 48.922 - type: ndcg_at_1 value: 40.2 - type: ndcg_at_10 value: 50.449999999999996 - type: ndcg_at_100 value: 56.85 - type: ndcg_at_1000 value: 58.345 - type: ndcg_at_3 value: 45.261 - type: ndcg_at_5 value: 47.298 - type: precision_at_1 value: 40.2 - type: precision_at_10 value: 9.742 - type: precision_at_100 value: 1.6480000000000001 - type: precision_at_1000 value: 0.214 - type: precision_at_3 value: 21.841 - type: precision_at_5 value: 15.68 - type: recall_at_1 value: 32.694 - type: recall_at_10 value: 62.751999999999995 - type: recall_at_100 value: 88.619 - type: recall_at_1000 value: 97.386 - type: recall_at_3 value: 47.087 - type: recall_at_5 value: 53.108999999999995 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 27.849 - type: map_at_10 value: 37.938 - type: map_at_100 value: 39.211 - type: map_at_1000 value: 39.333 - type: map_at_3 value: 35.314 - type: map_at_5 value: 36.666 - type: mrr_at_1 value: 34.904 - type: mrr_at_10 value: 43.869 - type: mrr_at_100 value: 44.614 - type: mrr_at_1000 value: 44.662 - type: mrr_at_3 value: 41.815000000000005 - type: mrr_at_5 value: 42.943 - type: ndcg_at_1 value: 34.904 - type: ndcg_at_10 value: 43.605 - type: ndcg_at_100 value: 48.339999999999996 - type: ndcg_at_1000 value: 50.470000000000006 - type: ndcg_at_3 value: 39.835 - type: ndcg_at_5 value: 41.364000000000004 - type: precision_at_1 value: 34.904 - type: precision_at_10 value: 8.222999999999999 - type: precision_at_100 value: 1.332 - type: precision_at_1000 value: 0.183 - type: precision_at_3 value: 19.575 - type: precision_at_5 value: 13.58 - type: recall_at_1 value: 27.849 - type: recall_at_10 value: 53.635 - type: recall_at_100 value: 73.932 - type: recall_at_1000 value: 87.29599999999999 - type: recall_at_3 value: 42.019 - type: recall_at_5 value: 46.58 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 29.182999999999996 - type: map_at_10 value: 41.233 - type: map_at_100 value: 42.52 - type: map_at_1000 value: 42.589 - type: map_at_3 value: 37.284 - 
type: map_at_5 value: 39.586 - type: mrr_at_1 value: 33.793 - type: mrr_at_10 value: 44.572 - type: mrr_at_100 value: 45.456 - type: mrr_at_1000 value: 45.497 - type: mrr_at_3 value: 41.275 - type: mrr_at_5 value: 43.278 - type: ndcg_at_1 value: 33.793 - type: ndcg_at_10 value: 47.823 - type: ndcg_at_100 value: 52.994 - type: ndcg_at_1000 value: 54.400000000000006 - type: ndcg_at_3 value: 40.82 - type: ndcg_at_5 value: 44.426 - type: precision_at_1 value: 33.793 - type: precision_at_10 value: 8.312999999999999 - type: precision_at_100 value: 1.191 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 18.662 - type: precision_at_5 value: 13.668 - type: recall_at_1 value: 29.182999999999996 - type: recall_at_10 value: 64.14999999999999 - type: recall_at_100 value: 86.533 - type: recall_at_1000 value: 96.492 - type: recall_at_3 value: 45.7 - type: recall_at_5 value: 54.330999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 24.389 - type: map_at_10 value: 33.858 - type: map_at_100 value: 35.081 - type: map_at_1000 value: 35.161 - type: map_at_3 value: 30.793 - type: map_at_5 value: 32.336 - type: mrr_at_1 value: 27.006000000000004 - type: mrr_at_10 value: 36.378 - type: mrr_at_100 value: 37.345 - type: mrr_at_1000 value: 37.405 - type: mrr_at_3 value: 33.578 - type: mrr_at_5 value: 34.991 - type: ndcg_at_1 value: 27.006000000000004 - type: ndcg_at_10 value: 39.612 - type: ndcg_at_100 value: 45.216 - type: ndcg_at_1000 value: 47.12 - type: ndcg_at_3 value: 33.566 - type: ndcg_at_5 value: 36.105 - type: precision_at_1 value: 27.006000000000004 - type: precision_at_10 value: 6.372999999999999 - type: precision_at_100 value: 0.968 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 14.501 - type: precision_at_5 value: 10.169 - type: recall_at_1 value: 24.389 - type: recall_at_10 value: 55.131 - type: recall_at_100 value: 80.315 - type: recall_at_1000 value: 94.284 - type: recall_at_3 value: 38.643 - type: recall_at_5 value: 44.725 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 15.845999999999998 - type: map_at_10 value: 25.019000000000002 - type: map_at_100 value: 26.478 - type: map_at_1000 value: 26.598 - type: map_at_3 value: 21.595 - type: map_at_5 value: 23.335 - type: mrr_at_1 value: 20.274 - type: mrr_at_10 value: 29.221000000000004 - type: mrr_at_100 value: 30.354999999999997 - type: mrr_at_1000 value: 30.419 - type: mrr_at_3 value: 26.161 - type: mrr_at_5 value: 27.61 - type: ndcg_at_1 value: 20.274 - type: ndcg_at_10 value: 31.014000000000003 - type: ndcg_at_100 value: 37.699 - type: ndcg_at_1000 value: 40.363 - type: ndcg_at_3 value: 24.701999999999998 - type: ndcg_at_5 value: 27.261999999999997 - type: precision_at_1 value: 20.274 - type: precision_at_10 value: 6.219 - type: precision_at_100 value: 1.101 - type: precision_at_1000 value: 0.146 - type: precision_at_3 value: 12.231 - type: precision_at_5 value: 9.129 - type: recall_at_1 value: 15.845999999999998 - type: recall_at_10 value: 45.358 - type: recall_at_100 value: 74.232 - type: recall_at_1000 value: 92.985 - type: recall_at_3 value: 28.050000000000004 - type: recall_at_5 value: 34.588 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB 
CQADupstackPhysicsRetrieval config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 33.808 - type: map_at_10 value: 46.86 - type: map_at_100 value: 48.237 - type: map_at_1000 value: 48.331 - type: map_at_3 value: 42.784 - type: map_at_5 value: 45.015 - type: mrr_at_1 value: 41.771 - type: mrr_at_10 value: 52.35300000000001 - type: mrr_at_100 value: 53.102000000000004 - type: mrr_at_1000 value: 53.132999999999996 - type: mrr_at_3 value: 49.663000000000004 - type: mrr_at_5 value: 51.27 - type: ndcg_at_1 value: 41.771 - type: ndcg_at_10 value: 53.562 - type: ndcg_at_100 value: 58.809999999999995 - type: ndcg_at_1000 value: 60.23 - type: ndcg_at_3 value: 47.514 - type: ndcg_at_5 value: 50.358999999999995 - type: precision_at_1 value: 41.771 - type: precision_at_10 value: 10.038 - type: precision_at_100 value: 1.473 - type: precision_at_1000 value: 0.17600000000000002 - type: precision_at_3 value: 22.875 - type: precision_at_5 value: 16.477 - type: recall_at_1 value: 33.808 - type: recall_at_10 value: 67.721 - type: recall_at_100 value: 89.261 - type: recall_at_1000 value: 98.042 - type: recall_at_3 value: 50.807 - type: recall_at_5 value: 58.162000000000006 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 28.105000000000004 - type: map_at_10 value: 40.354 - type: map_at_100 value: 41.921 - type: map_at_1000 value: 42.021 - type: map_at_3 value: 36.532 - type: map_at_5 value: 38.671 - type: mrr_at_1 value: 34.475 - type: mrr_at_10 value: 45.342 - type: mrr_at_100 value: 46.300000000000004 - type: mrr_at_1000 value: 46.343 - type: mrr_at_3 value: 42.637 - type: mrr_at_5 value: 44.207 - type: ndcg_at_1 value: 34.475 - type: ndcg_at_10 value: 46.945 - type: ndcg_at_100 value: 52.939 - type: ndcg_at_1000 value: 54.645999999999994 - type: ndcg_at_3 value: 41.065000000000005 - type: ndcg_at_5 value: 43.832 - type: precision_at_1 value: 34.475 - type: precision_at_10 value: 8.892999999999999 - type: precision_at_100 value: 1.377 - type: precision_at_1000 value: 0.17099999999999999 - type: precision_at_3 value: 20.091 - type: precision_at_5 value: 14.452000000000002 - type: recall_at_1 value: 28.105000000000004 - type: recall_at_10 value: 61.253 - type: recall_at_100 value: 85.92 - type: recall_at_1000 value: 96.799 - type: recall_at_3 value: 45.094 - type: recall_at_5 value: 52.455 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 24.613833333333332 - type: map_at_10 value: 34.763 - type: map_at_100 value: 36.17066666666667 - type: map_at_1000 value: 36.2905 - type: map_at_3 value: 31.53541666666666 - type: map_at_5 value: 33.29216666666667 - type: mrr_at_1 value: 29.48725 - type: mrr_at_10 value: 38.92066666666667 - type: mrr_at_100 value: 39.88725000000001 - type: mrr_at_1000 value: 39.9435 - type: mrr_at_3 value: 36.284083333333335 - type: mrr_at_5 value: 37.73941666666667 - type: ndcg_at_1 value: 29.48725 - type: ndcg_at_10 value: 40.635083333333334 - type: ndcg_at_100 value: 46.479416666666665 - type: ndcg_at_1000 value: 48.63308333333334 - type: ndcg_at_3 value: 35.19483333333333 - type: ndcg_at_5 value: 37.68016666666667 - type: precision_at_1 value: 29.48725 - type: precision_at_10 value: 7.406499999999998 - type: precision_at_100 
value: 1.2225833333333334 - type: precision_at_1000 value: 0.16108333333333336 - type: precision_at_3 value: 16.53375 - type: precision_at_5 value: 11.919416666666665 - type: recall_at_1 value: 24.613833333333332 - type: recall_at_10 value: 53.91766666666666 - type: recall_at_100 value: 79.18 - type: recall_at_1000 value: 93.85133333333333 - type: recall_at_3 value: 38.866166666666665 - type: recall_at_5 value: 45.21275000000001 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 25.106 - type: map_at_10 value: 33.367999999999995 - type: map_at_100 value: 34.586 - type: map_at_1000 value: 34.681 - type: map_at_3 value: 31.022 - type: map_at_5 value: 32.548 - type: mrr_at_1 value: 28.374 - type: mrr_at_10 value: 36.521 - type: mrr_at_100 value: 37.55 - type: mrr_at_1000 value: 37.614999999999995 - type: mrr_at_3 value: 34.509 - type: mrr_at_5 value: 35.836 - type: ndcg_at_1 value: 28.374 - type: ndcg_at_10 value: 37.893 - type: ndcg_at_100 value: 43.694 - type: ndcg_at_1000 value: 46.001999999999995 - type: ndcg_at_3 value: 33.825 - type: ndcg_at_5 value: 36.201 - type: precision_at_1 value: 28.374 - type: precision_at_10 value: 5.966 - type: precision_at_100 value: 0.9650000000000001 - type: precision_at_1000 value: 0.124 - type: precision_at_3 value: 14.774999999999999 - type: precision_at_5 value: 10.459999999999999 - type: recall_at_1 value: 25.106 - type: recall_at_10 value: 48.607 - type: recall_at_100 value: 74.66000000000001 - type: recall_at_1000 value: 91.562 - type: recall_at_3 value: 37.669999999999995 - type: recall_at_5 value: 43.484 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 13.755 - type: map_at_10 value: 20.756 - type: map_at_100 value: 22.05 - type: map_at_1000 value: 22.201 - type: map_at_3 value: 18.243000000000002 - type: map_at_5 value: 19.512 - type: mrr_at_1 value: 16.93 - type: mrr_at_10 value: 24.276 - type: mrr_at_100 value: 25.349 - type: mrr_at_1000 value: 25.441000000000003 - type: mrr_at_3 value: 21.897 - type: mrr_at_5 value: 23.134 - type: ndcg_at_1 value: 16.93 - type: ndcg_at_10 value: 25.508999999999997 - type: ndcg_at_100 value: 31.777 - type: ndcg_at_1000 value: 35.112 - type: ndcg_at_3 value: 20.896 - type: ndcg_at_5 value: 22.857 - type: precision_at_1 value: 16.93 - type: precision_at_10 value: 4.972 - type: precision_at_100 value: 0.963 - type: precision_at_1000 value: 0.145 - type: precision_at_3 value: 10.14 - type: precision_at_5 value: 7.536 - type: recall_at_1 value: 13.755 - type: recall_at_10 value: 36.46 - type: recall_at_100 value: 64.786 - type: recall_at_1000 value: 88.287 - type: recall_at_3 value: 23.681 - type: recall_at_5 value: 28.615000000000002 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 26.99 - type: map_at_10 value: 38.009 - type: map_at_100 value: 39.384 - type: map_at_1000 value: 39.481 - type: map_at_3 value: 34.593 - type: map_at_5 value: 36.449999999999996 - type: mrr_at_1 value: 31.81 - type: mrr_at_10 value: 41.943000000000005 - type: mrr_at_100 value: 42.914 - type: mrr_at_1000 value: 42.962 - type: mrr_at_3 value: 39.179 - type: mrr_at_5 value: 40.798 - 
type: ndcg_at_1 value: 31.81 - type: ndcg_at_10 value: 44.086 - type: ndcg_at_100 value: 50.026 - type: ndcg_at_1000 value: 51.903999999999996 - type: ndcg_at_3 value: 38.23 - type: ndcg_at_5 value: 40.926 - type: precision_at_1 value: 31.81 - type: precision_at_10 value: 7.761 - type: precision_at_100 value: 1.205 - type: precision_at_1000 value: 0.148 - type: precision_at_3 value: 17.537 - type: precision_at_5 value: 12.649 - type: recall_at_1 value: 26.99 - type: recall_at_10 value: 58.467 - type: recall_at_100 value: 83.93 - type: recall_at_1000 value: 96.452 - type: recall_at_3 value: 42.685 - type: recall_at_5 value: 49.341 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 25.312 - type: map_at_10 value: 35.788 - type: map_at_100 value: 37.616 - type: map_at_1000 value: 37.86 - type: map_at_3 value: 32.422000000000004 - type: map_at_5 value: 34.585 - type: mrr_at_1 value: 30.631999999999998 - type: mrr_at_10 value: 40.604 - type: mrr_at_100 value: 41.745 - type: mrr_at_1000 value: 41.788 - type: mrr_at_3 value: 37.582 - type: mrr_at_5 value: 39.499 - type: ndcg_at_1 value: 30.631999999999998 - type: ndcg_at_10 value: 42.129 - type: ndcg_at_100 value: 48.943 - type: ndcg_at_1000 value: 51.089 - type: ndcg_at_3 value: 36.658 - type: ndcg_at_5 value: 39.818999999999996 - type: precision_at_1 value: 30.631999999999998 - type: precision_at_10 value: 7.904999999999999 - type: precision_at_100 value: 1.664 - type: precision_at_1000 value: 0.256 - type: precision_at_3 value: 16.996 - type: precision_at_5 value: 12.727 - type: recall_at_1 value: 25.312 - type: recall_at_10 value: 54.886 - type: recall_at_100 value: 84.155 - type: recall_at_1000 value: 96.956 - type: recall_at_3 value: 40.232 - type: recall_at_5 value: 48.204 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 12.328999999999999 - type: map_at_10 value: 20.078 - type: map_at_100 value: 21.166999999999998 - type: map_at_1000 value: 21.308 - type: map_at_3 value: 17.702 - type: map_at_5 value: 18.725 - type: mrr_at_1 value: 13.678 - type: mrr_at_10 value: 21.859 - type: mrr_at_100 value: 22.816 - type: mrr_at_1000 value: 22.926 - type: mrr_at_3 value: 19.378 - type: mrr_at_5 value: 20.385 - type: ndcg_at_1 value: 13.678 - type: ndcg_at_10 value: 24.993000000000002 - type: ndcg_at_100 value: 30.464999999999996 - type: ndcg_at_1000 value: 33.916000000000004 - type: ndcg_at_3 value: 19.966 - type: ndcg_at_5 value: 21.712999999999997 - type: precision_at_1 value: 13.678 - type: precision_at_10 value: 4.473 - type: precision_at_100 value: 0.784 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 9.181000000000001 - type: precision_at_5 value: 6.506 - type: recall_at_1 value: 12.328999999999999 - type: recall_at_10 value: 38.592 - type: recall_at_100 value: 63.817 - type: recall_at_1000 value: 89.67500000000001 - type: recall_at_3 value: 24.726 - type: recall_at_5 value: 28.959000000000003 - task: type: Retrieval dataset: type: mteb/climate-fever name: MTEB ClimateFEVER config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 19.147 - type: map_at_10 value: 33.509 - type: map_at_100 value: 35.573 - type: map_at_1000 value: 35.769 - type: map_at_3 value: 
27.983999999999998 - type: map_at_5 value: 31.012 - type: mrr_at_1 value: 43.844 - type: mrr_at_10 value: 56.24 - type: mrr_at_100 value: 56.801 - type: mrr_at_1000 value: 56.826 - type: mrr_at_3 value: 53.290000000000006 - type: mrr_at_5 value: 55.13 - type: ndcg_at_1 value: 43.844 - type: ndcg_at_10 value: 43.996 - type: ndcg_at_100 value: 50.965 - type: ndcg_at_1000 value: 53.927 - type: ndcg_at_3 value: 37.263000000000005 - type: ndcg_at_5 value: 39.553 - type: precision_at_1 value: 43.844 - type: precision_at_10 value: 13.687 - type: precision_at_100 value: 2.139 - type: precision_at_1000 value: 0.269 - type: precision_at_3 value: 28.122000000000003 - type: precision_at_5 value: 21.303 - type: recall_at_1 value: 19.147 - type: recall_at_10 value: 50.449999999999996 - type: recall_at_100 value: 74.00099999999999 - type: recall_at_1000 value: 90.098 - type: recall_at_3 value: 33.343 - type: recall_at_5 value: 40.744 - task: type: Retrieval dataset: type: mteb/dbpedia name: MTEB DBPedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 8.773 - type: map_at_10 value: 21.172 - type: map_at_100 value: 30.244 - type: map_at_1000 value: 32.127 - type: map_at_3 value: 14.510000000000002 - type: map_at_5 value: 17.483 - type: mrr_at_1 value: 68.25 - type: mrr_at_10 value: 77.33 - type: mrr_at_100 value: 77.529 - type: mrr_at_1000 value: 77.536 - type: mrr_at_3 value: 75.708 - type: mrr_at_5 value: 76.72099999999999 - type: ndcg_at_1 value: 60.0 - type: ndcg_at_10 value: 48.045 - type: ndcg_at_100 value: 51.620999999999995 - type: ndcg_at_1000 value: 58.843999999999994 - type: ndcg_at_3 value: 52.922000000000004 - type: ndcg_at_5 value: 50.27 - type: precision_at_1 value: 68.25 - type: precision_at_10 value: 37.625 - type: precision_at_100 value: 11.774999999999999 - type: precision_at_1000 value: 2.395 - type: precision_at_3 value: 55.25 - type: precision_at_5 value: 47.599999999999994 - type: recall_at_1 value: 8.773 - type: recall_at_10 value: 27.332 - type: recall_at_100 value: 55.48499999999999 - type: recall_at_1000 value: 79.886 - type: recall_at_3 value: 15.823 - type: recall_at_5 value: 20.523 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 54.52999999999999 - type: f1 value: 47.396628088963645 - task: type: Retrieval dataset: type: mteb/fever name: MTEB FEVER config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 85.397 - type: map_at_10 value: 90.917 - type: map_at_100 value: 91.109 - type: map_at_1000 value: 91.121 - type: map_at_3 value: 90.045 - type: map_at_5 value: 90.602 - type: mrr_at_1 value: 92.00399999999999 - type: mrr_at_10 value: 95.39999999999999 - type: mrr_at_100 value: 95.41 - type: mrr_at_1000 value: 95.41 - type: mrr_at_3 value: 95.165 - type: mrr_at_5 value: 95.348 - type: ndcg_at_1 value: 92.00399999999999 - type: ndcg_at_10 value: 93.345 - type: ndcg_at_100 value: 93.934 - type: ndcg_at_1000 value: 94.108 - type: ndcg_at_3 value: 92.32000000000001 - type: ndcg_at_5 value: 92.899 - type: precision_at_1 value: 92.00399999999999 - type: precision_at_10 value: 10.839 - type: precision_at_100 value: 1.1440000000000001 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 34.298 - type: precision_at_5 value: 21.128 - type: recall_at_1 value: 85.397 - type: recall_at_10 value: 96.375 - 
type: recall_at_100 value: 98.518 - type: recall_at_1000 value: 99.515 - type: recall_at_3 value: 93.59100000000001 - type: recall_at_5 value: 95.134 - task: type: Retrieval dataset: type: mteb/fiqa name: MTEB FiQA2018 config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 27.36 - type: map_at_10 value: 46.847 - type: map_at_100 value: 49.259 - type: map_at_1000 value: 49.389 - type: map_at_3 value: 41.095 - type: map_at_5 value: 44.084 - type: mrr_at_1 value: 51.852 - type: mrr_at_10 value: 61.67 - type: mrr_at_100 value: 62.395999999999994 - type: mrr_at_1000 value: 62.414 - type: mrr_at_3 value: 59.465 - type: mrr_at_5 value: 60.584 - type: ndcg_at_1 value: 51.852 - type: ndcg_at_10 value: 55.311 - type: ndcg_at_100 value: 62.6 - type: ndcg_at_1000 value: 64.206 - type: ndcg_at_3 value: 51.159 - type: ndcg_at_5 value: 52.038 - type: precision_at_1 value: 51.852 - type: precision_at_10 value: 15.370000000000001 - type: precision_at_100 value: 2.282 - type: precision_at_1000 value: 0.258 - type: precision_at_3 value: 34.721999999999994 - type: precision_at_5 value: 24.846 - type: recall_at_1 value: 27.36 - type: recall_at_10 value: 63.932 - type: recall_at_100 value: 89.824 - type: recall_at_1000 value: 98.556 - type: recall_at_3 value: 47.227999999999994 - type: recall_at_5 value: 53.724000000000004 - task: type: Retrieval dataset: type: mteb/hotpotqa name: MTEB HotpotQA config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 40.655 - type: map_at_10 value: 63.824999999999996 - type: map_at_100 value: 64.793 - type: map_at_1000 value: 64.848 - type: map_at_3 value: 60.221000000000004 - type: map_at_5 value: 62.474 - type: mrr_at_1 value: 81.31 - type: mrr_at_10 value: 86.509 - type: mrr_at_100 value: 86.677 - type: mrr_at_1000 value: 86.682 - type: mrr_at_3 value: 85.717 - type: mrr_at_5 value: 86.21 - type: ndcg_at_1 value: 81.31 - type: ndcg_at_10 value: 72.251 - type: ndcg_at_100 value: 75.536 - type: ndcg_at_1000 value: 76.558 - type: ndcg_at_3 value: 67.291 - type: ndcg_at_5 value: 70.045 - type: precision_at_1 value: 81.31 - type: precision_at_10 value: 15.082999999999998 - type: precision_at_100 value: 1.764 - type: precision_at_1000 value: 0.19 - type: precision_at_3 value: 42.971 - type: precision_at_5 value: 27.956999999999997 - type: recall_at_1 value: 40.655 - type: recall_at_10 value: 75.41499999999999 - type: recall_at_100 value: 88.224 - type: recall_at_1000 value: 94.943 - type: recall_at_3 value: 64.456 - type: recall_at_5 value: 69.892 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 95.58120000000001 - type: ap value: 93.0407063004784 - type: f1 value: 95.57849992996822 - task: type: Retrieval dataset: type: mteb/msmarco name: MTEB MSMARCO config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 22.031 - type: map_at_10 value: 34.628 - type: map_at_100 value: 35.833 - type: map_at_1000 value: 35.881 - type: map_at_3 value: 30.619000000000003 - type: map_at_5 value: 32.982 - type: mrr_at_1 value: 22.736 - type: mrr_at_10 value: 35.24 - type: mrr_at_100 value: 36.381 - type: mrr_at_1000 value: 36.424 - type: mrr_at_3 value: 31.287 - type: mrr_at_5 value: 33.617000000000004 - type: ndcg_at_1 value: 22.736 - type: ndcg_at_10 value: 41.681000000000004 - type: 
ndcg_at_100 value: 47.371 - type: ndcg_at_1000 value: 48.555 - type: ndcg_at_3 value: 33.553 - type: ndcg_at_5 value: 37.771 - type: precision_at_1 value: 22.736 - type: precision_at_10 value: 6.625 - type: precision_at_100 value: 0.9450000000000001 - type: precision_at_1000 value: 0.105 - type: precision_at_3 value: 14.331 - type: precision_at_5 value: 10.734 - type: recall_at_1 value: 22.031 - type: recall_at_10 value: 63.378 - type: recall_at_100 value: 89.47699999999999 - type: recall_at_1000 value: 98.48400000000001 - type: recall_at_3 value: 41.388000000000005 - type: recall_at_5 value: 51.522999999999996 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 95.75239398084815 - type: f1 value: 95.51228043205194 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 84.25900592795259 - type: f1 value: 62.14790420114562 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 78.47007397444519 - type: f1 value: 76.92133583932912 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.19098856758575 - type: f1 value: 78.10820805879119 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 44.37013684222983 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 42.003012591979704 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.70743071063257 - type: mrr value: 33.938337390083994 - task: type: Retrieval dataset: type: mteb/nfcorpus name: MTEB NFCorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 6.369 - type: map_at_10 value: 14.313 - type: map_at_100 value: 18.329 - type: map_at_1000 value: 20.017 - type: map_at_3 value: 10.257 - type: map_at_5 value: 12.264999999999999 - type: mrr_at_1 value: 49.536 - type: mrr_at_10 value: 58.464000000000006 - type: mrr_at_100 value: 59.016000000000005 - type: mrr_at_1000 value: 59.053 - type: mrr_at_3 value: 56.294999999999995 - type: mrr_at_5 value: 57.766 - type: ndcg_at_1 value: 47.678 - type: ndcg_at_10 value: 38.246 - type: ndcg_at_100 value: 35.370000000000005 - type: ndcg_at_1000 value: 44.517 - type: ndcg_at_3 value: 43.368 - type: ndcg_at_5 value: 41.892 - type: precision_at_1 value: 49.536 - type: precision_at_10 value: 28.235 - type: precision_at_100 value: 9.014999999999999 - type: precision_at_1000 value: 2.257 - type: precision_at_3 value: 40.557 - type: precision_at_5 value: 36.409000000000006 - type: recall_at_1 value: 6.369 - type: recall_at_10 value: 19.195999999999998 - type: 
recall_at_100 value: 37.042 - type: recall_at_1000 value: 69.203 - type: recall_at_3 value: 11.564 - type: recall_at_5 value: 15.264 - task: type: Retrieval dataset: type: mteb/nq name: MTEB NQ config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 39.323 - type: map_at_10 value: 54.608999999999995 - type: map_at_100 value: 55.523 - type: map_at_1000 value: 55.544000000000004 - type: map_at_3 value: 50.580000000000005 - type: map_at_5 value: 53.064 - type: mrr_at_1 value: 44.263999999999996 - type: mrr_at_10 value: 57.416 - type: mrr_at_100 value: 58.037000000000006 - type: mrr_at_1000 value: 58.05200000000001 - type: mrr_at_3 value: 54.330999999999996 - type: mrr_at_5 value: 56.302 - type: ndcg_at_1 value: 44.263999999999996 - type: ndcg_at_10 value: 61.785999999999994 - type: ndcg_at_100 value: 65.40599999999999 - type: ndcg_at_1000 value: 65.859 - type: ndcg_at_3 value: 54.518 - type: ndcg_at_5 value: 58.53699999999999 - type: precision_at_1 value: 44.263999999999996 - type: precision_at_10 value: 9.652 - type: precision_at_100 value: 1.169 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 24.15 - type: precision_at_5 value: 16.848 - type: recall_at_1 value: 39.323 - type: recall_at_10 value: 80.663 - type: recall_at_100 value: 96.072 - type: recall_at_1000 value: 99.37700000000001 - type: recall_at_3 value: 62.23 - type: recall_at_5 value: 71.379 - task: type: Retrieval dataset: type: mteb/quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 72.02499999999999 - type: map_at_10 value: 86.14500000000001 - type: map_at_100 value: 86.764 - type: map_at_1000 value: 86.776 - type: map_at_3 value: 83.249 - type: map_at_5 value: 85.083 - type: mrr_at_1 value: 82.83 - type: mrr_at_10 value: 88.70599999999999 - type: mrr_at_100 value: 88.791 - type: mrr_at_1000 value: 88.791 - type: mrr_at_3 value: 87.815 - type: mrr_at_5 value: 88.435 - type: ndcg_at_1 value: 82.84 - type: ndcg_at_10 value: 89.61200000000001 - type: ndcg_at_100 value: 90.693 - type: ndcg_at_1000 value: 90.752 - type: ndcg_at_3 value: 86.96199999999999 - type: ndcg_at_5 value: 88.454 - type: precision_at_1 value: 82.84 - type: precision_at_10 value: 13.600000000000001 - type: precision_at_100 value: 1.543 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 38.092999999999996 - type: precision_at_5 value: 25.024 - type: recall_at_1 value: 72.02499999999999 - type: recall_at_10 value: 96.21600000000001 - type: recall_at_100 value: 99.76 - type: recall_at_1000 value: 99.996 - type: recall_at_3 value: 88.57000000000001 - type: recall_at_5 value: 92.814 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 73.37297191949929 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 72.50752304246946 - task: type: Retrieval dataset: type: mteb/scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 6.4479999999999995 - type: map_at_10 value: 17.268 - type: map_at_100 value: 20.502000000000002 - type: map_at_1000 value: 20.904 - type: map_at_3 value: 11.951 - type: map_at_5 value: 14.494000000000002 - type: mrr_at_1 value: 31.900000000000002 - 
type: mrr_at_10 value: 45.084999999999994 - type: mrr_at_100 value: 46.145 - type: mrr_at_1000 value: 46.164 - type: mrr_at_3 value: 41.6 - type: mrr_at_5 value: 43.76 - type: ndcg_at_1 value: 31.900000000000002 - type: ndcg_at_10 value: 27.694000000000003 - type: ndcg_at_100 value: 39.016 - type: ndcg_at_1000 value: 44.448 - type: ndcg_at_3 value: 26.279999999999998 - type: ndcg_at_5 value: 22.93 - type: precision_at_1 value: 31.900000000000002 - type: precision_at_10 value: 14.399999999999999 - type: precision_at_100 value: 3.082 - type: precision_at_1000 value: 0.436 - type: precision_at_3 value: 24.667 - type: precision_at_5 value: 20.200000000000003 - type: recall_at_1 value: 6.4479999999999995 - type: recall_at_10 value: 29.243000000000002 - type: recall_at_100 value: 62.547 - type: recall_at_1000 value: 88.40299999999999 - type: recall_at_3 value: 14.988000000000001 - type: recall_at_5 value: 20.485 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 80.37839336866843 - type: cos_sim_spearman value: 79.14737320486729 - type: euclidean_pearson value: 78.74010870392799 - type: euclidean_spearman value: 79.1472505448557 - type: manhattan_pearson value: 78.76735626972086 - type: manhattan_spearman value: 79.18509055331465 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.98947740740309 - type: cos_sim_spearman value: 76.52068694652895 - type: euclidean_pearson value: 81.10952542010847 - type: euclidean_spearman value: 76.52162808897668 - type: manhattan_pearson value: 81.13752577872523 - type: manhattan_spearman value: 76.55073892851847 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 88.14795728641734 - type: cos_sim_spearman value: 88.62720469210905 - type: euclidean_pearson value: 87.96160445129142 - type: euclidean_spearman value: 88.62615925428736 - type: manhattan_pearson value: 87.86760858379527 - type: manhattan_spearman value: 88.5613166629411 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.06444249948838 - type: cos_sim_spearman value: 83.32346434965837 - type: euclidean_pearson value: 83.86264166785146 - type: euclidean_spearman value: 83.32323156068114 - type: manhattan_pearson value: 83.87253909108084 - type: manhattan_spearman value: 83.42760090819642 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.00847937091636 - type: cos_sim_spearman value: 87.50432670473445 - type: euclidean_pearson value: 87.21611485565168 - type: euclidean_spearman value: 87.50387351928698 - type: manhattan_pearson value: 87.30690660623411 - type: manhattan_spearman value: 87.61147161393255 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.51456553517488 - type: cos_sim_spearman value: 86.39208323626035 - type: euclidean_pearson value: 85.74698473006475 - type: euclidean_spearman value: 
86.3892506146807 - type: manhattan_pearson value: 85.77493611949014 - type: manhattan_spearman value: 86.42961510735024 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.63402051628222 - type: cos_sim_spearman value: 87.78994504115502 - type: euclidean_pearson value: 88.44861926968403 - type: euclidean_spearman value: 87.80670473078185 - type: manhattan_pearson value: 88.4773722010208 - type: manhattan_spearman value: 87.85175600656768 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 65.9659729672951 - type: cos_sim_spearman value: 66.39891735341361 - type: euclidean_pearson value: 68.040150710449 - type: euclidean_spearman value: 66.41777234484414 - type: manhattan_pearson value: 68.16264809387305 - type: manhattan_spearman value: 66.31608161700346 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 86.91024857159385 - type: cos_sim_spearman value: 87.35031011815016 - type: euclidean_pearson value: 86.94569462996033 - type: euclidean_spearman value: 87.34929703462852 - type: manhattan_pearson value: 86.94404111225616 - type: manhattan_spearman value: 87.37827218003393 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.89077927002596 - type: mrr value: 96.94650937297997 - task: type: Retrieval dataset: type: mteb/scifact name: MTEB SciFact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 57.994 - type: map_at_10 value: 70.07100000000001 - type: map_at_100 value: 70.578 - type: map_at_1000 value: 70.588 - type: map_at_3 value: 67.228 - type: map_at_5 value: 68.695 - type: mrr_at_1 value: 61.333000000000006 - type: mrr_at_10 value: 71.342 - type: mrr_at_100 value: 71.739 - type: mrr_at_1000 value: 71.75 - type: mrr_at_3 value: 69.389 - type: mrr_at_5 value: 70.322 - type: ndcg_at_1 value: 61.333000000000006 - type: ndcg_at_10 value: 75.312 - type: ndcg_at_100 value: 77.312 - type: ndcg_at_1000 value: 77.50200000000001 - type: ndcg_at_3 value: 70.72 - type: ndcg_at_5 value: 72.616 - type: precision_at_1 value: 61.333000000000006 - type: precision_at_10 value: 10.167 - type: precision_at_100 value: 1.117 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 28.111000000000004 - type: precision_at_5 value: 18.333 - type: recall_at_1 value: 57.994 - type: recall_at_10 value: 89.944 - type: recall_at_100 value: 98.667 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 77.694 - type: recall_at_5 value: 82.339 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.81485148514851 - type: cos_sim_ap value: 95.99339654021689 - type: cos_sim_f1 value: 90.45971329708354 - type: cos_sim_precision value: 89.44281524926686 - type: cos_sim_recall value: 91.5 - type: dot_accuracy value: 99.81485148514851 - type: dot_ap value: 
95.990792367539 - type: dot_f1 value: 90.54187192118228 - type: dot_precision value: 89.2233009708738 - type: dot_recall value: 91.9 - type: euclidean_accuracy value: 99.81386138613861 - type: euclidean_ap value: 95.99403827746491 - type: euclidean_f1 value: 90.45971329708354 - type: euclidean_precision value: 89.44281524926686 - type: euclidean_recall value: 91.5 - type: manhattan_accuracy value: 99.81485148514851 - type: manhattan_ap value: 96.06741547889861 - type: manhattan_f1 value: 90.55666003976144 - type: manhattan_precision value: 90.01976284584981 - type: manhattan_recall value: 91.10000000000001 - type: max_accuracy value: 99.81485148514851 - type: max_ap value: 96.06741547889861 - type: max_f1 value: 90.55666003976144 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 79.0667992003181 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 49.57086425048946 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 53.929415255105894 - type: mrr value: 54.93889790764791 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.050700527286658 - type: cos_sim_spearman value: 31.46077656458546 - type: dot_pearson value: 31.056448416258263 - type: dot_spearman value: 31.435272601921042 - task: type: Retrieval dataset: type: mteb/trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.23500000000000001 - type: map_at_10 value: 1.812 - type: map_at_100 value: 10.041 - type: map_at_1000 value: 24.095 - type: map_at_3 value: 0.643 - type: map_at_5 value: 1.0 - type: mrr_at_1 value: 86.0 - type: mrr_at_10 value: 92.0 - type: mrr_at_100 value: 92.0 - type: mrr_at_1000 value: 92.0 - type: mrr_at_3 value: 91.667 - type: mrr_at_5 value: 91.667 - type: ndcg_at_1 value: 79.0 - type: ndcg_at_10 value: 72.72 - type: ndcg_at_100 value: 55.82899999999999 - type: ndcg_at_1000 value: 50.72 - type: ndcg_at_3 value: 77.715 - type: ndcg_at_5 value: 75.036 - type: precision_at_1 value: 86.0 - type: precision_at_10 value: 77.60000000000001 - type: precision_at_100 value: 56.46 - type: precision_at_1000 value: 22.23 - type: precision_at_3 value: 82.667 - type: precision_at_5 value: 80.4 - type: recall_at_1 value: 0.23500000000000001 - type: recall_at_10 value: 2.046 - type: recall_at_100 value: 13.708 - type: recall_at_1000 value: 47.451 - type: recall_at_3 value: 0.6709999999999999 - type: recall_at_5 value: 1.078 - task: type: Retrieval dataset: type: mteb/touche2020 name: MTEB Touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 2.252 - type: map_at_10 value: 7.958 - type: map_at_100 value: 12.293 - type: map_at_1000 value: 13.832 - type: map_at_3 value: 4.299 - type: map_at_5 value: 5.514 - type: mrr_at_1 value: 30.612000000000002 - type: mrr_at_10 value: 42.329 - type: mrr_at_100 value: 43.506 - type: mrr_at_1000 value: 43.506 - type: mrr_at_3 
value: 38.775999999999996 - type: mrr_at_5 value: 39.592 - type: ndcg_at_1 value: 28.571 - type: ndcg_at_10 value: 20.301 - type: ndcg_at_100 value: 30.703999999999997 - type: ndcg_at_1000 value: 43.155 - type: ndcg_at_3 value: 22.738 - type: ndcg_at_5 value: 20.515 - type: precision_at_1 value: 30.612000000000002 - type: precision_at_10 value: 17.347 - type: precision_at_100 value: 6.327000000000001 - type: precision_at_1000 value: 1.443 - type: precision_at_3 value: 22.448999999999998 - type: precision_at_5 value: 19.184 - type: recall_at_1 value: 2.252 - type: recall_at_10 value: 13.206999999999999 - type: recall_at_100 value: 40.372 - type: recall_at_1000 value: 78.071 - type: recall_at_3 value: 5.189 - type: recall_at_5 value: 7.338 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 78.75399999999999 - type: ap value: 19.666483622175363 - type: f1 value: 61.575187470329176 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 66.00452744765137 - type: f1 value: 66.18291586829227 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.308747717084316 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.81069321094355 - type: cos_sim_ap value: 79.3576921453847 - type: cos_sim_f1 value: 71.75811286328685 - type: cos_sim_precision value: 70.89878959567345 - type: cos_sim_recall value: 72.63852242744063 - type: dot_accuracy value: 87.79877212850927 - type: dot_ap value: 79.35550320857683 - type: dot_f1 value: 71.78153446033811 - type: dot_precision value: 70.76923076923077 - type: dot_recall value: 72.82321899736148 - type: euclidean_accuracy value: 87.80473266972642 - type: euclidean_ap value: 79.35792655436586 - type: euclidean_f1 value: 71.75672148264161 - type: euclidean_precision value: 70.99690082644628 - type: euclidean_recall value: 72.53298153034301 - type: manhattan_accuracy value: 87.76300888120642 - type: manhattan_ap value: 79.33615959143606 - type: manhattan_f1 value: 71.73219978746015 - type: manhattan_precision value: 72.23113964686998 - type: manhattan_recall value: 71.2401055408971 - type: max_accuracy value: 87.81069321094355 - type: max_ap value: 79.35792655436586 - type: max_f1 value: 71.78153446033811 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.3778864439011 - type: cos_sim_ap value: 86.79005637312795 - type: cos_sim_f1 value: 79.14617791685293 - type: cos_sim_precision value: 76.66714780600462 - type: cos_sim_recall value: 81.79088389282414 - type: dot_accuracy value: 89.37206504443668 - type: dot_ap value: 86.78770290102123 - type: dot_f1 value: 79.14741392159786 - type: dot_precision value: 76.6897746967071 - type: dot_recall value: 81.76778564829073 - type: 
euclidean_accuracy value: 89.37594597741297 - type: euclidean_ap value: 86.7900899669397 - type: euclidean_f1 value: 79.13920845898953 - type: euclidean_precision value: 76.62028692956528 - type: euclidean_recall value: 81.8293809670465 - type: manhattan_accuracy value: 89.38758877634183 - type: manhattan_ap value: 86.78862564973224 - type: manhattan_f1 value: 79.1130985653065 - type: manhattan_precision value: 76.6592041597458 - type: manhattan_recall value: 81.72928857406838 - type: max_accuracy value: 89.38758877634183 - type: max_ap value: 86.7900899669397 - type: max_f1 value: 79.14741392159786 - task: type: STS dataset: type: C-MTEB/AFQMC name: MTEB AFQMC config: default split: validation revision: b44c3b011063adb25877c13823db83bb193913c4 metrics: - type: cos_sim_pearson value: 50.01571015887356 - type: cos_sim_spearman value: 58.47419994907958 - type: euclidean_pearson value: 55.63582004345212 - type: euclidean_spearman value: 58.47514484211099 - type: manhattan_pearson value: 55.58487268871911 - type: manhattan_spearman value: 58.411916843600075 - task: type: STS dataset: type: C-MTEB/ATEC name: MTEB ATEC config: default split: test revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 metrics: - type: cos_sim_pearson value: 44.99231617937922 - type: cos_sim_spearman value: 55.459227458516416 - type: euclidean_pearson value: 52.98483376548224 - type: euclidean_spearman value: 55.45938733128155 - type: manhattan_pearson value: 52.946854805143964 - type: manhattan_spearman value: 55.4272663113618 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (zh) config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 52.946000000000005 - type: f1 value: 49.299873931232725 - task: type: STS dataset: type: C-MTEB/BQ name: MTEB BQ config: default split: test revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 metrics: - type: cos_sim_pearson value: 74.66979530294986 - type: cos_sim_spearman value: 77.59153258548018 - type: euclidean_pearson value: 76.5862988380262 - type: euclidean_spearman value: 77.59094368703879 - type: manhattan_pearson value: 76.6034419552102 - type: manhattan_spearman value: 77.6000715948404 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringP2P name: MTEB CLSClusteringP2P config: default split: test revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476 metrics: - type: v_measure value: 47.20931915009524 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringS2S name: MTEB CLSClusteringS2S config: default split: test revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f metrics: - type: v_measure value: 45.787353610995474 - task: type: Reranking dataset: type: C-MTEB/CMedQAv1-reranking name: MTEB CMedQAv1 config: default split: test revision: 8d7f1e942507dac42dc58017c1a001c3717da7df metrics: - type: map value: 86.37146026784607 - type: mrr value: 88.52309523809524 - task: type: Reranking dataset: type: C-MTEB/CMedQAv2-reranking name: MTEB CMedQAv2 config: default split: test revision: 23d186750531a14a0357ca22cd92d712fd512ea0 metrics: - type: map value: 87.40699302584699 - type: mrr value: 89.51591269841269 - task: type: Retrieval dataset: type: C-MTEB/CmedqaRetrieval name: MTEB CmedqaRetrieval config: default split: dev revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 metrics: - type: map_at_1 value: 24.465 - type: map_at_10 value: 36.689 - type: map_at_100 value: 38.605000000000004 - type: map_at_1000 value: 38.718 - type: map_at_3 value: 32.399 - type: 
map_at_5 value: 34.784 - type: mrr_at_1 value: 37.234 - type: mrr_at_10 value: 45.634 - type: mrr_at_100 value: 46.676 - type: mrr_at_1000 value: 46.717 - type: mrr_at_3 value: 42.94 - type: mrr_at_5 value: 44.457 - type: ndcg_at_1 value: 37.234 - type: ndcg_at_10 value: 43.469 - type: ndcg_at_100 value: 51.048 - type: ndcg_at_1000 value: 52.925999999999995 - type: ndcg_at_3 value: 37.942 - type: ndcg_at_5 value: 40.253 - type: precision_at_1 value: 37.234 - type: precision_at_10 value: 9.745 - type: precision_at_100 value: 1.5879999999999999 - type: precision_at_1000 value: 0.183 - type: precision_at_3 value: 21.505 - type: precision_at_5 value: 15.729000000000001 - type: recall_at_1 value: 24.465 - type: recall_at_10 value: 54.559999999999995 - type: recall_at_100 value: 85.97200000000001 - type: recall_at_1000 value: 98.32499999999999 - type: recall_at_3 value: 38.047 - type: recall_at_5 value: 45.08 - task: type: PairClassification dataset: type: C-MTEB/CMNLI name: MTEB Cmnli config: default split: validation revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 metrics: - type: cos_sim_accuracy value: 84.50992182802165 - type: cos_sim_ap value: 91.81488661281966 - type: cos_sim_f1 value: 85.46855802524294 - type: cos_sim_precision value: 81.82207014542344 - type: cos_sim_recall value: 89.4552256254384 - type: dot_accuracy value: 84.50992182802165 - type: dot_ap value: 91.80547588176556 - type: dot_f1 value: 85.46492111446794 - type: dot_precision value: 81.95278969957081 - type: dot_recall value: 89.29155950432546 - type: euclidean_accuracy value: 84.49789536981359 - type: euclidean_ap value: 91.81495039620808 - type: euclidean_f1 value: 85.46817317373308 - type: euclidean_precision value: 81.93908193908193 - type: euclidean_recall value: 89.31494037877017 - type: manhattan_accuracy value: 84.46181599518941 - type: manhattan_ap value: 91.85400573633447 - type: manhattan_f1 value: 85.54283809312146 - type: manhattan_precision value: 81.51207115628971 - type: manhattan_recall value: 89.99298573766659 - type: max_accuracy value: 84.50992182802165 - type: max_ap value: 91.85400573633447 - type: max_f1 value: 85.54283809312146 - task: type: Retrieval dataset: type: C-MTEB/CovidRetrieval name: MTEB CovidRetrieval config: default split: dev revision: 1271c7809071a13532e05f25fb53511ffce77117 metrics: - type: map_at_1 value: 68.072 - type: map_at_10 value: 76.82900000000001 - type: map_at_100 value: 77.146 - type: map_at_1000 value: 77.14999999999999 - type: map_at_3 value: 74.939 - type: map_at_5 value: 76.009 - type: mrr_at_1 value: 68.282 - type: mrr_at_10 value: 76.818 - type: mrr_at_100 value: 77.13600000000001 - type: mrr_at_1000 value: 77.14 - type: mrr_at_3 value: 74.956 - type: mrr_at_5 value: 76.047 - type: ndcg_at_1 value: 68.282 - type: ndcg_at_10 value: 80.87299999999999 - type: ndcg_at_100 value: 82.191 - type: ndcg_at_1000 value: 82.286 - type: ndcg_at_3 value: 77.065 - type: ndcg_at_5 value: 78.965 - type: precision_at_1 value: 68.282 - type: precision_at_10 value: 9.452 - type: precision_at_100 value: 1.002 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 27.889000000000003 - type: precision_at_5 value: 17.682000000000002 - type: recall_at_1 value: 68.072 - type: recall_at_10 value: 93.467 - type: recall_at_100 value: 99.157 - type: recall_at_1000 value: 99.895 - type: recall_at_3 value: 83.14 - type: recall_at_5 value: 87.67099999999999 - task: type: Retrieval dataset: type: C-MTEB/DuRetrieval name: MTEB DuRetrieval config: default split: dev revision: 
a1a333e290fe30b10f3f56498e3a0d911a693ced metrics: - type: map_at_1 value: 26.107999999999997 - type: map_at_10 value: 78.384 - type: map_at_100 value: 81.341 - type: map_at_1000 value: 81.384 - type: map_at_3 value: 54.462999999999994 - type: map_at_5 value: 68.607 - type: mrr_at_1 value: 88.94999999999999 - type: mrr_at_10 value: 92.31 - type: mrr_at_100 value: 92.379 - type: mrr_at_1000 value: 92.38300000000001 - type: mrr_at_3 value: 91.85799999999999 - type: mrr_at_5 value: 92.146 - type: ndcg_at_1 value: 88.94999999999999 - type: ndcg_at_10 value: 86.00999999999999 - type: ndcg_at_100 value: 89.121 - type: ndcg_at_1000 value: 89.534 - type: ndcg_at_3 value: 84.69200000000001 - type: ndcg_at_5 value: 83.678 - type: precision_at_1 value: 88.94999999999999 - type: precision_at_10 value: 41.065000000000005 - type: precision_at_100 value: 4.781 - type: precision_at_1000 value: 0.488 - type: precision_at_3 value: 75.75 - type: precision_at_5 value: 63.93 - type: recall_at_1 value: 26.107999999999997 - type: recall_at_10 value: 87.349 - type: recall_at_100 value: 97.14699999999999 - type: recall_at_1000 value: 99.287 - type: recall_at_3 value: 56.601 - type: recall_at_5 value: 73.381 - task: type: Retrieval dataset: type: C-MTEB/EcomRetrieval name: MTEB EcomRetrieval config: default split: dev revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9 metrics: - type: map_at_1 value: 50.7 - type: map_at_10 value: 61.312999999999995 - type: map_at_100 value: 61.88399999999999 - type: map_at_1000 value: 61.9 - type: map_at_3 value: 58.983 - type: map_at_5 value: 60.238 - type: mrr_at_1 value: 50.7 - type: mrr_at_10 value: 61.312999999999995 - type: mrr_at_100 value: 61.88399999999999 - type: mrr_at_1000 value: 61.9 - type: mrr_at_3 value: 58.983 - type: mrr_at_5 value: 60.238 - type: ndcg_at_1 value: 50.7 - type: ndcg_at_10 value: 66.458 - type: ndcg_at_100 value: 69.098 - type: ndcg_at_1000 value: 69.539 - type: ndcg_at_3 value: 61.637 - type: ndcg_at_5 value: 63.92099999999999 - type: precision_at_1 value: 50.7 - type: precision_at_10 value: 8.260000000000002 - type: precision_at_100 value: 0.946 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 23.1 - type: precision_at_5 value: 14.979999999999999 - type: recall_at_1 value: 50.7 - type: recall_at_10 value: 82.6 - type: recall_at_100 value: 94.6 - type: recall_at_1000 value: 98.1 - type: recall_at_3 value: 69.3 - type: recall_at_5 value: 74.9 - task: type: Classification dataset: type: C-MTEB/IFlyTek-classification name: MTEB IFlyTek config: default split: validation revision: 421605374b29664c5fc098418fe20ada9bd55f8a metrics: - type: accuracy value: 53.76683339746056 - type: f1 value: 40.026100192683714 - task: type: Classification dataset: type: C-MTEB/JDReview-classification name: MTEB JDReview config: default split: test revision: b7c64bd89eb87f8ded463478346f76731f07bf8b metrics: - type: accuracy value: 88.19887429643526 - type: ap value: 59.02998120976959 - type: f1 value: 83.3659125921227 - task: type: STS dataset: type: C-MTEB/LCQMC name: MTEB LCQMC config: default split: test revision: 17f9b096f80380fce5ed12a9be8be7784b337daf metrics: - type: cos_sim_pearson value: 72.53955204856854 - type: cos_sim_spearman value: 76.28996886746215 - type: euclidean_pearson value: 75.31184890026394 - type: euclidean_spearman value: 76.28984471300522 - type: manhattan_pearson value: 75.36930361638623 - type: manhattan_spearman value: 76.34021995551348 - task: type: Reranking dataset: type: C-MTEB/Mmarco-reranking name: MTEB MMarcoReranking 
config: default split: dev revision: None metrics: - type: map value: 23.63666512532725 - type: mrr value: 22.49642857142857 - task: type: Retrieval dataset: type: C-MTEB/MMarcoRetrieval name: MTEB MMarcoRetrieval config: default split: dev revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 metrics: - type: map_at_1 value: 60.645 - type: map_at_10 value: 69.733 - type: map_at_100 value: 70.11699999999999 - type: map_at_1000 value: 70.135 - type: map_at_3 value: 67.585 - type: map_at_5 value: 68.904 - type: mrr_at_1 value: 62.765 - type: mrr_at_10 value: 70.428 - type: mrr_at_100 value: 70.77 - type: mrr_at_1000 value: 70.785 - type: mrr_at_3 value: 68.498 - type: mrr_at_5 value: 69.69 - type: ndcg_at_1 value: 62.765 - type: ndcg_at_10 value: 73.83 - type: ndcg_at_100 value: 75.593 - type: ndcg_at_1000 value: 76.05199999999999 - type: ndcg_at_3 value: 69.66499999999999 - type: ndcg_at_5 value: 71.929 - type: precision_at_1 value: 62.765 - type: precision_at_10 value: 9.117 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 26.323 - type: precision_at_5 value: 16.971 - type: recall_at_1 value: 60.645 - type: recall_at_10 value: 85.907 - type: recall_at_100 value: 93.947 - type: recall_at_1000 value: 97.531 - type: recall_at_3 value: 74.773 - type: recall_at_5 value: 80.16799999999999 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-CN) config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 76.25084061869536 - type: f1 value: 73.65064492827022 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-CN) config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.2595830531271 - type: f1 value: 77.15217273559321 - task: type: Retrieval dataset: type: C-MTEB/MedicalRetrieval name: MTEB MedicalRetrieval config: default split: dev revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 metrics: - type: map_at_1 value: 52.400000000000006 - type: map_at_10 value: 58.367000000000004 - type: map_at_100 value: 58.913000000000004 - type: map_at_1000 value: 58.961 - type: map_at_3 value: 56.882999999999996 - type: map_at_5 value: 57.743 - type: mrr_at_1 value: 52.400000000000006 - type: mrr_at_10 value: 58.367000000000004 - type: mrr_at_100 value: 58.913000000000004 - type: mrr_at_1000 value: 58.961 - type: mrr_at_3 value: 56.882999999999996 - type: mrr_at_5 value: 57.743 - type: ndcg_at_1 value: 52.400000000000006 - type: ndcg_at_10 value: 61.329 - type: ndcg_at_100 value: 64.264 - type: ndcg_at_1000 value: 65.669 - type: ndcg_at_3 value: 58.256 - type: ndcg_at_5 value: 59.813 - type: precision_at_1 value: 52.400000000000006 - type: precision_at_10 value: 7.07 - type: precision_at_100 value: 0.851 - type: precision_at_1000 value: 0.096 - type: precision_at_3 value: 20.732999999999997 - type: precision_at_5 value: 13.200000000000001 - type: recall_at_1 value: 52.400000000000006 - type: recall_at_10 value: 70.7 - type: recall_at_100 value: 85.1 - type: recall_at_1000 value: 96.39999999999999 - type: recall_at_3 value: 62.2 - type: recall_at_5 value: 66.0 - task: type: Classification dataset: type: C-MTEB/MultilingualSentiment-classification name: MTEB MultilingualSentiment config: default split: validation revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a metrics: - type: accuracy value: 77.42333333333333 - type: f1 value: 
77.24849313989888 - task: type: PairClassification dataset: type: C-MTEB/OCNLI name: MTEB Ocnli config: default split: validation revision: 66e76a618a34d6d565d5538088562851e6daa7ec metrics: - type: cos_sim_accuracy value: 80.12994044396319 - type: cos_sim_ap value: 85.21793541189636 - type: cos_sim_f1 value: 81.91489361702128 - type: cos_sim_precision value: 75.55753791257806 - type: cos_sim_recall value: 89.44033790918691 - type: dot_accuracy value: 80.12994044396319 - type: dot_ap value: 85.22568672443236 - type: dot_f1 value: 81.91489361702128 - type: dot_precision value: 75.55753791257806 - type: dot_recall value: 89.44033790918691 - type: euclidean_accuracy value: 80.12994044396319 - type: euclidean_ap value: 85.21643342357407 - type: euclidean_f1 value: 81.8830242510699 - type: euclidean_precision value: 74.48096885813149 - type: euclidean_recall value: 90.91869060190075 - type: manhattan_accuracy value: 80.5630752571738 - type: manhattan_ap value: 85.27682975032671 - type: manhattan_f1 value: 82.03883495145631 - type: manhattan_precision value: 75.92093441150045 - type: manhattan_recall value: 89.22914466737065 - type: max_accuracy value: 80.5630752571738 - type: max_ap value: 85.27682975032671 - type: max_f1 value: 82.03883495145631 - task: type: Classification dataset: type: C-MTEB/OnlineShopping-classification name: MTEB OnlineShopping config: default split: test revision: e610f2ebd179a8fda30ae534c3878750a96db120 metrics: - type: accuracy value: 94.47999999999999 - type: ap value: 92.81177660844013 - type: f1 value: 94.47045470502114 - task: type: STS dataset: type: C-MTEB/PAWSX name: MTEB PAWSX config: default split: test revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 metrics: - type: cos_sim_pearson value: 46.13154582182421 - type: cos_sim_spearman value: 50.21718723757444 - type: euclidean_pearson value: 49.41535243569054 - type: euclidean_spearman value: 50.21831909208907 - type: manhattan_pearson value: 49.50756578601167 - type: manhattan_spearman value: 50.229118655684566 - task: type: STS dataset: type: C-MTEB/QBQTC name: MTEB QBQTC config: default split: test revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 metrics: - type: cos_sim_pearson value: 30.787794367421956 - type: cos_sim_spearman value: 31.81774306987836 - type: euclidean_pearson value: 29.809436608089495 - type: euclidean_spearman value: 31.817379098812165 - type: manhattan_pearson value: 30.377027186607787 - type: manhattan_spearman value: 32.42286865176827 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.99292517797305 - type: cos_sim_spearman value: 76.52287451692155 - type: euclidean_pearson value: 81.11616055544546 - type: euclidean_spearman value: 76.525387473028 - type: manhattan_pearson value: 81.14367598670032 - type: manhattan_spearman value: 76.55571799438607 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh) config: zh split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 61.29839896616376 - type: cos_sim_spearman value: 67.36328213286453 - type: euclidean_pearson value: 64.33899267794008 - type: euclidean_spearman value: 67.36552580196211 - type: manhattan_pearson value: 65.20010308796022 - type: manhattan_spearman value: 67.50982972902 - task: type: STS dataset: type: C-MTEB/STSB name: MTEB STSB config: default split: test revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 
metrics: - type: cos_sim_pearson value: 81.23278996774297 - type: cos_sim_spearman value: 81.369375466486 - type: euclidean_pearson value: 79.91030863727944 - type: euclidean_spearman value: 81.36824495466793 - type: manhattan_pearson value: 79.88047052896854 - type: manhattan_spearman value: 81.3369604332008 - task: type: Reranking dataset: type: C-MTEB/T2Reranking name: MTEB T2Reranking config: default split: dev revision: 76631901a18387f85eaa53e5450019b87ad58ef9 metrics: - type: map value: 68.109205221286 - type: mrr value: 78.40703619520477 - task: type: Retrieval dataset: type: C-MTEB/T2Retrieval name: MTEB T2Retrieval config: default split: dev revision: 8731a845f1bf500a4f111cf1070785c793d10e64 metrics: - type: map_at_1 value: 26.704 - type: map_at_10 value: 75.739 - type: map_at_100 value: 79.606 - type: map_at_1000 value: 79.666 - type: map_at_3 value: 52.803 - type: map_at_5 value: 65.068 - type: mrr_at_1 value: 88.48899999999999 - type: mrr_at_10 value: 91.377 - type: mrr_at_100 value: 91.474 - type: mrr_at_1000 value: 91.47800000000001 - type: mrr_at_3 value: 90.846 - type: mrr_at_5 value: 91.18 - type: ndcg_at_1 value: 88.48899999999999 - type: ndcg_at_10 value: 83.581 - type: ndcg_at_100 value: 87.502 - type: ndcg_at_1000 value: 88.1 - type: ndcg_at_3 value: 84.433 - type: ndcg_at_5 value: 83.174 - type: precision_at_1 value: 88.48899999999999 - type: precision_at_10 value: 41.857 - type: precision_at_100 value: 5.039 - type: precision_at_1000 value: 0.517 - type: precision_at_3 value: 73.938 - type: precision_at_5 value: 62.163000000000004 - type: recall_at_1 value: 26.704 - type: recall_at_10 value: 83.092 - type: recall_at_100 value: 95.659 - type: recall_at_1000 value: 98.779 - type: recall_at_3 value: 54.678000000000004 - type: recall_at_5 value: 68.843 - task: type: Classification dataset: type: C-MTEB/TNews-classification name: MTEB TNews config: default split: validation revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 metrics: - type: accuracy value: 51.235 - type: f1 value: 48.14373844331604 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringP2P name: MTEB ThuNewsClusteringP2P config: default split: test revision: 5798586b105c0434e4f0fe5e767abe619442cf93 metrics: - type: v_measure value: 87.42930040493792 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringS2S name: MTEB ThuNewsClusteringS2S config: default split: test revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d metrics: - type: v_measure value: 87.90254094650042 - task: type: Retrieval dataset: type: C-MTEB/VideoRetrieval name: MTEB VideoRetrieval config: default split: dev revision: 58c2597a5943a2ba48f4668c3b90d796283c5639 metrics: - type: map_at_1 value: 54.900000000000006 - type: map_at_10 value: 64.92 - type: map_at_100 value: 65.424 - type: map_at_1000 value: 65.43900000000001 - type: map_at_3 value: 63.132999999999996 - type: map_at_5 value: 64.208 - type: mrr_at_1 value: 54.900000000000006 - type: mrr_at_10 value: 64.92 - type: mrr_at_100 value: 65.424 - type: mrr_at_1000 value: 65.43900000000001 - type: mrr_at_3 value: 63.132999999999996 - type: mrr_at_5 value: 64.208 - type: ndcg_at_1 value: 54.900000000000006 - type: ndcg_at_10 value: 69.41199999999999 - type: ndcg_at_100 value: 71.824 - type: ndcg_at_1000 value: 72.301 - type: ndcg_at_3 value: 65.79700000000001 - type: ndcg_at_5 value: 67.713 - type: precision_at_1 value: 54.900000000000006 - type: precision_at_10 value: 8.33 - type: precision_at_100 value: 0.9450000000000001 - type: precision_at_1000 value: 0.098 - type: 
precision_at_3 value: 24.5 - type: precision_at_5 value: 15.620000000000001 - type: recall_at_1 value: 54.900000000000006 - type: recall_at_10 value: 83.3 - type: recall_at_100 value: 94.5 - type: recall_at_1000 value: 98.4 - type: recall_at_3 value: 73.5 - type: recall_at_5 value: 78.10000000000001 - task: type: Classification dataset: type: C-MTEB/waimai-classification name: MTEB Waimai config: default split: test revision: 339287def212450dcaa9df8c22bf93e9980c7023 metrics: - type: accuracy value: 88.63 - type: ap value: 73.78658340897097 - type: f1 value: 87.16764294033919 ---

## gte-Qwen1.5-7B-instruct

**gte-Qwen1.5-7B-instruct** is the latest addition to the gte embedding family. The model is built starting from the [Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) LLM, drawing on its robust natural language processing capabilities. Enhanced through our embedding training techniques, the model incorporates several key advancements:

- Integration of bidirectional attention mechanisms, enriching its contextual understanding.
- Instruction tuning, applied solely on the query side for streamlined efficiency.
- Comprehensive training across a vast, multilingual text corpus spanning diverse domains and scenarios. This training leverages both weakly supervised and supervised data, ensuring the model's applicability across numerous languages and a wide array of downstream tasks.

We also present [gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) and [gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5), English embedding models that achieve state-of-the-art scores on the MTEB benchmark within the same model size category and support a context length of up to 8192 tokens.

## Model Information

- Model Size: 7B
- Embedding Dimension: 4096
- Max Input Tokens: 32k

## Requirements

```
transformers>=4.39.2
flash_attn>=2.5.6
```

## Usage

### Sentence Transformers

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-Qwen1.5-7B-instruct", trust_remote_code=True)
# In case you want to reduce the maximum length:
model.max_seq_length = 8192

queries = [
    "how much protein should a female eat",
    "summit define",
]
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]

query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)

scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
# [[70.00668334960938, 8.184843063354492], [14.62419319152832, 77.71407318115234]]
```

See [config_sentence_transformers.json](config_sentence_transformers.json) for all pre-built prompt names. Otherwise, you can use `model.encode(queries, prompt="Instruct: ...\nQuery: ")` to use a custom prompt of your choice.
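For example, here is a minimal sketch of supplying a custom instruction prompt; the task wording below is illustrative, not one of the model's pre-built prompt names:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-Qwen1.5-7B-instruct", trust_remote_code=True)

# Hypothetical one-sentence task description; only queries receive the instruction prefix
custom_prompt = "Instruct: Given a question, retrieve passages that answer the question\nQuery: "

query_embeddings = model.encode(["how much protein should a female eat"], prompt=custom_prompt)
# Documents are encoded without any prompt
document_embeddings = model.encode(["Protein requirements vary by age, weight, and activity level."])

scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```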
### Transformers

```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def last_token_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]


def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'


# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    get_detailed_instruct(task, 'how much protein should a female eat'),
    get_detailed_instruct(task, 'summit define')
]
# No need to add instruction for retrieval documents
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
input_texts = queries + documents

tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen1.5-7B-instruct', trust_remote_code=True)
model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen1.5-7B-instruct', trust_remote_code=True)

max_length = 8192

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[70.00666809082031, 8.184867858886719], [14.62420654296875, 77.71405792236328]]
```

## Evaluation

### MTEB & C-MTEB

You can use [scripts/eval_mteb.py](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct/blob/main/scripts/eval_mteb.py) to reproduce the following results of **gte-Qwen1.5-7B-instruct** on MTEB (English) / C-MTEB (Chinese):

| Model Name | MTEB(56) | C-MTEB(35) |
|:----:|:---:|:---:|
| [bge-base-en-1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 64.23 | - |
| [bge-large-en-1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 63.55 | - |
| [gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 65.39 | - |
| [gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 64.11 | - |
| [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 64.68 | - |
| [acge_text_embedding](https://huggingface.co/aspire/acge_text_embedding) | - | 69.07 |
| [stella-mrl-large-zh-v3.5-1792d](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d) | - | 68.55 |
| [gte-large-zh](https://huggingface.co/thenlper/gte-large-zh) | - | 66.72 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 59.45 | 56.21 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 61.50 | 58.81 |
| [e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) | 66.63 | 60.81 |
| [**gte-Qwen1.5-7B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | 67.34 | 69.52 |

## Citation

If you find our paper or models helpful, please consider citing:

```
@article{li2023towards,
  title={Towards general text embeddings with multi-stage contrastive learning},
  author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
  journal={arXiv preprint arXiv:2308.03281},
  year={2023}
}
```
lightblue/suzume-llama-3-8B-multilingual-gguf
lightblue
"2024-06-02T02:14:49Z"
1,539
26
null
[ "gguf", "generated_from_trainer", "arxiv:2405.12612", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
"2024-04-23T03:01:18Z"
---
license: other
license_name: llama-3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/raw/main/LICENSE
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: lightblue/suzume-llama-3-8B-multilingual
  results: []
---

<p align="center">
  <img width=400 src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kg3QjQOde0X743csGJT-f.png" alt="Suzume - a Japanese tree sparrow"/>
</p>

# Suzume

[[Paper](https://arxiv.org/abs/2405.12612)] [[Dataset](https://huggingface.co/datasets/lightblue/tagengo-gpt4)]

This is Suzume 8B, a multilingual fine-tune of Llama 3.

Llama 3 has exhibited excellent performance on many English language benchmarks. However, it also appears to have been fine-tuned mostly on English data, meaning that it will respond in English even when prompted in other languages.

We have fine-tuned Llama 3 on more than 80,000 multilingual conversations, meaning that this model has the smarts of Llama 3 but the added ability to chat in more languages.

Please feel free to comment on this model and give us feedback in the Community tab!

# How to use

The easiest way to use this model on your own computer is to use the [GGUF version of this model (lightblue/suzume-llama-3-8B-multilingual-gguf)](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-gguf) using a program such as [jan.ai](https://jan.ai/) or [LM Studio](https://lmstudio.ai/) (a llama-cpp-python sketch is also included after the evaluation table below).

If you want to use this model directly in Python, we recommend using vLLM for the fastest inference speeds.

```python
from vllm import LLM, SamplingParams

sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="lightblue/suzume-llama-3-8B-multilingual")

messages = []
messages.append({"role": "user", "content": "Bonjour!"})
prompt = llm.llm_engine.tokenizer.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```

# Evaluation scores

We achieve the following MT-Bench scores across 6 languages:

| | **meta-llama/Meta-Llama-3-8B-Instruct** | **lightblue/suzume-llama-3-8B-multilingual** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** |
|-----------------|-----------------------------------------|----------------------------------------------|-----------------------------------|-------------------|
| **German** 🇩🇪 | NaN | 7.26 | 6.99 | 7.68 |
| **French** 🇫🇷 | NaN | 7.66 | 7.29 | 7.74 |
| **Japanese** 🇯🇵 | NaN | 6.56 | 6.22 | 7.84 |
| **Russian** 🇷🇺 | NaN | 8.19 | 8.28 | 7.94 |
| **Chinese** 🇨🇳 | NaN | 7.11 | 6.97 | 7.55 |
| **English** 🇺🇸 | 7.98 | 7.73 | 7.92 | 8.26 |

We observe minimal degradation of Llama 3's English ability while achieving best-in-class multilingual abilities compared to the top-rated 7B model ([Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)) on the [Chatbot Arena Leaderboard](https://chat.lmsys.org/?leaderboard).
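If you would rather run the GGUF quantizations from this repository directly in Python than through jan.ai or LM Studio, llama-cpp-python is one option. Below is a minimal sketch; the quantization filename is hypothetical, so substitute one of the actual GGUF files from this repo, and note this assumes a recent llama-cpp-python build that reads the Llama 3 chat template from the GGUF metadata:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./suzume-llama-3-8B-multilingual-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,  # Llama 3 context window
)

messages = [{"role": "user", "content": "Bonjour!"}]
response = llm.create_chat_completion(messages=messages, temperature=0.0, max_tokens=100)
print(response["choices"][0]["message"]["content"])
```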
For reference, [here is the evaluation script](https://drive.google.com/file/d/15HPn7452t8LbTD9HKSl7ngYYWnsoOG08/view?usp=sharing) we used for the MT-Bench scores above.

# Training data

We train on three sources of data to create this model:

* [lightblue/tagengo-gpt4](https://huggingface.co/datasets/lightblue/tagengo-gpt4) - 76,338 conversations
  * A diverse dataset of initial inputs sampled from [lmsys/lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) and then used to prompt `gpt-4-0125-preview`
* [megagonlabs/instruction_ja](https://github.com/megagonlabs/instruction_ja) - 669 conversations
  * A hand-edited dataset of nearly 700 Japanese conversations taken originally from translations of the [kunishou/hh-rlhf-49k-ja](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja) dataset.
* [openchat/openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json) - 6,206 conversations
  * Multilingual conversations of humans talking to GPT-4.

<details><summary>We prepare our data like so:</summary>

```python
import pandas as pd
from datasets import Dataset, load_dataset, concatenate_datasets

### Tagengo
gpt4_dataset = load_dataset("lightblue/tagengo-gpt4", split="train")
gpt4_dataset = gpt4_dataset.filter(lambda x: x["response"][1] == "stop")
####

### Megagon
megagon_df = pd.read_json(
    "https://raw.githubusercontent.com/megagonlabs/instruction_ja/main/data/data.jsonl",
    lines=True,
    orient="records"
)
role_map = {"user": "human", "agent": "gpt"}
megagon_df["conversations"] = megagon_df.utterances.apply(lambda x: [{"from": role_map[y["name"]], "value": y["text"]} for y in x])
megagon_df["language"] = "Japanese"
megagon_df = megagon_df[["conversations", "language"]]
megagon_dataset = Dataset.from_pandas(megagon_df)  # build the dataset from megagon_df
###

### Openchat
openchat_df = pd.read_json("https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json?download=true")
openchat_df["conversations"] = openchat_df["items"]
openchat_dataset = Dataset.from_pandas(openchat_df)
###

dataset = concatenate_datasets([gpt4_dataset, megagon_dataset, openchat_dataset])
dataset = dataset.filter(lambda x: not any([y["value"] is None for y in x["conversations"]]))

dataset.select_columns(["conversations"]).to_json("/workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json")
```

</details>
<br/>

# workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the dataset described above. It achieves the following results on the evaluation set:

- Loss: 0.6595

## Training procedure

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment.
--> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B-Instruct model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast load_in_8bit: false load_in_4bit: false strict: false datasets: - path: /workspace/llm_training/axolotl/llama3-multilingual/tagengo_openchat_megagon.json ds_type: json # see other options below type: sharegpt conversation: llama-3 dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual/prepared_tagengo_openchat_megagon val_set_size: 0.01 output_dir: /workspace/llm_training/axolotl/llama3-multilingual/output_tagengo_openchat_megagon_8B_llama3 sequence_len: 8192 sample_packing: true pad_to_sequence_len: true use_wandb: true wandb_project: wandb_project wandb_entity: wandb_entity wandb_name: wandb_name gradient_accumulation_steps: 2 micro_batch_size: 2 num_epochs: 1 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 1e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 5 eval_table_size: saves_per_epoch: 1 debug: deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json weight_decay: 0.0 special_tokens: pad_token: <|end_of_text|> ``` </details><br> <details><summary>Note - we added this Llama 3 template to fastchat directly as the Llama 3 chat template was not supported when we trained this model.</summary> ```python from fastchat.conversation import Conversation from fastchat.conversation import register_conv_template from fastchat.conversation import SeparatorStyle register_conv_template( Conversation( name="llama-3", system_template="<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system_message}", roles=("<|start_header_id|>user<|end_header_id|>\n", "<|start_header_id|>assistant<|end_header_id|>\n"), sep_style=SeparatorStyle.ADD_NEW_LINE_SINGLE, sep="<|eot_id|>", stop_token_ids=[128009], stop_str="<|eot_id|>", ) ) ``` </details><br> ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1894 | 0.0 | 1 | 1.0110 | | 0.8493 | 0.2 | 73 | 0.7057 | | 0.8047 | 0.4 | 146 | 0.6835 | | 0.7644 | 0.6 | 219 | 0.6687 | | 0.7528 | 0.8 | 292 | 0.6615 | | 0.7794 | 1.0 | 365 | 0.6595 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.0 # How to cite Please cite [this paper](https://arxiv.org/abs/2405.12612) when referencing this model. 
```tex @article{devine2024tagengo, title={Tagengo: A Multilingual Chat Dataset}, author={Devine, Peter}, journal={arXiv preprint arXiv:2405.12612}, year={2024} } ``` # Developer Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn))
lcw99/llama-3-10b-it-ko-2024-0527
lcw99
"2024-06-01T16:48:53Z"
1,539
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-27T03:16:36Z"
---
language:
- ko
license: apache-2.0
library_name: transformers
---

# Model Card for Model ID

## Model Details

### Model Description

Instruction tuning of meta-llama/Meta-Llama-3-8B-Instruct with an added Korean layer.

#### Chat template

Use the tokenizer's built-in chat template, e.g. `tokenizer.apply_chat_template(chat, tokenize=False)`.
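A minimal end-to-end sketch of chatting with the model; the Korean greeting and the generation settings are illustrative assumptions, and this presumes the tokenizer ships a chat template, which the snippet above suggests:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "lcw99/llama-3-10b-it-ko-2024-0527"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative chat; "안녕하세요?" means "Hello?"
chat = [{"role": "user", "content": "안녕하세요?"}]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```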
PassionFriend/5DQ3M1eN5ND3o1Wc1Jh5h7ecjed3hYdfaEQ5QzsLocRUavPm_vgg
PassionFriend
"2024-03-01T06:41:39Z"
1,538
0
keras
[ "keras", "region:us" ]
null
"2024-02-13T10:10:11Z"
Entry not found
GuardisAI/Video-LLaVA-7B-GPTQ-4bit-V1
GuardisAI
"2024-04-24T04:57:36Z"
1,538
0
transformers
[ "transformers", "safetensors", "llava", "text-generation", "autotrain_compatible", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
"2024-04-24T04:54:34Z"
Entry not found
beomi/KoRWKV-1.5B
beomi
"2023-11-13T00:49:06Z"
1,537
12
transformers
[ "transformers", "pytorch", "safetensors", "rwkv", "text-generation", "KoRWKV", "ko", "doi:10.57967/hf/1293", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-05-21T08:15:49Z"
---
license: mit
language:
- ko
pipeline_tag: text-generation
tags:
- KoRWKV
---

> Train finished 🎉🎉 This version is the **v1.0** release of KoRWKV-1.5B.
>
> Generation DEMO available at [HF Gradio beomi/KoRWKV-1.5B](https://huggingface.co/spaces/beomi/KoRWKV-1.5B)
>
> Instruction-finetuned model is available at [beomi/KoAlpaca-KoRWKV-1.5B](https://huggingface.co/beomi/KoAlpaca-KoRWKV-1.5B)

## Todo

- ✅ Train 1.5B
- ✅ Beta Release (Full data train)
- ✅ v1.0 Release (Full data train + Curated data train)
- ✅ Train Bigger Models (6B) -> Available at [beomi/KoRWKV-6B](https://huggingface.co/beomi/KoRWKV-6B)

# KoRWKV Model Card

KoRWKV (1.5B params) is trained on a Korean dataset with the RWKV v4 Neo architecture.

```bash
# RWKV models require transformers>=4.29; works perfectly with transformers==4.30.2
pip install -U transformers
```

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("beomi/KoRWKV-1.5B")
model = AutoModelForCausalLM.from_pretrained("beomi/KoRWKV-1.5B")
```

(A short generation sketch appears at the end of this card.)

## Model details

**Researcher developing the model**

Junbum Lee (aka Beomi)

**Model date**

KoRWKV was trained between 2023.05 and 2023.06.

**Model version**

This is the first release version of the model.

**Model type**

Find more about RWKV at https://github.com/BlinkDL/RWKV-LM

**License**

MIT

## Bibtex

```
@misc {l._junbum_2023,
    author       = { {L. Junbum} },
    title        = { KoRWKV-1.5B (Revision e2e327a) },
    year         = 2023,
    url          = { https://huggingface.co/beomi/KoRWKV-1.5B },
    doi          = { 10.57967/hf/1293 },
    publisher    = { Hugging Face }
}
```

## Intended use

**Primary intended uses**

The primary use of KoRWKV is research on Korean open-source large language models.

**Primary intended users**

The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.

**Out-of-scope use cases**

KoRWKV is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.

## Ethical considerations

**Data**

The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.

**Human life**

The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.

**Risks and harms**

Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.

**Use cases**

KoRWKV is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.

## Acknowledgement

This project is trained on an A100 GPU node supported by [Sundong Kim](https://sundong.kim/), professor at GIST AI Graduate School.
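Complementing the loading snippet near the top of this card, here is a minimal generation sketch; the Korean prompt and the sampling settings are illustrative assumptions, not author recommendations:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("beomi/KoRWKV-1.5B")
model = AutoModelForCausalLM.from_pretrained("beomi/KoRWKV-1.5B")

# Illustrative Korean prompt: "The capital of Korea is"
inputs = tokenizer("대한민국의 수도는", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```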
digiplay/mothmix_v1.41
digiplay
"2024-04-06T23:25:52Z"
1,537
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-17T10:18:58Z"
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

Model info: https://civitai.com/models/50041?modelVersionId=84143

Sample image generated by Hugging Face's API:

![39b65795-4ed1-443e-9c18-9f87ce8152a7.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/SkRzKfQw-Uj5ksKaWH7YJ.jpeg)

Original author's demo image:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2eb7b35c-6248-4395-82c0-033c84b37246/width=1552/00217-1265742547-(masterpiece,%20best%20quality,%20ultra-detailed,%20best%20shadow,%20absurdres),(western%20style_0.5),%20(realistic_1.2),%20analog%20style,%20(Fibonac.jpeg)
dreamgen/opus-v1.2-llama-3-8b
dreamgen
"2024-04-25T08:59:33Z"
1,537
49
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "unsloth", "axolotl", "conversational", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-19T06:39:46Z"
--- language: - en pipeline_tag: text-generation tags: - unsloth - axolotl license: cc-by-nc-nd-4.0 --- # Llama 3 DreamGen Opus > ## 🚨 WARNING 🚨 > > This model has issues, please use the following preview models instead: > - [New train on top of Llama 3 8B Base](https://huggingface.co/dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2) > - [New train on top of Llama 3 8B Instruct](https://huggingface.co/dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5) > > Make sure to read [this discussion](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b/discussions/3#6622914ac2925305f6d8b86c) if the model won't stop generating output. <div style="display: flex; flex-direction: row; align-items: center;"> <img src="/dreamgen/opus-v1.2-llama-3-8b/resolve/main/images/logo-1024.png" alt="model logo" style=" border-radius: 12px; margin-right: 12px; margin-top: 0px; margin-bottom: 0px; max-width: 100px; height: auto; "/> Models for **(steerable) story-writing and role-playing**. <br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31). </div> ## Resources - [**Opus V1 prompting guide**](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can copy. - [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing) for interactive role-play using `opus-v1.2-7b`. - [Python code](example/prompt/format.py) to format the prompt correctly. - Join the community on [**Discord**](https://dreamgen.com/discord) to get early access to new models. <img src="/dreamgen/opus-v1.2-llama-3-8b/resolve/main/images/story_writing.webp" alt="story writing on dreamgen.com" style=" padding: 12px; border-radius: 12px; border: 2px solid #f9a8d4; background: rgb(9, 9, 11); "/> ## Prompting <details> <summary>The models use an extended version of ChatML.</summary> ``` <|im_start|>system (Story description in the right format here) (Typically consists of plot description, style description and characters)<|im_end|> <|im_start|>user (Your instruction on how the story should continue)<|im_end|> <|im_start|>text names= Alice (Continuation of the story from the Alice character)<|im_end|> <|im_start|>text (Continuation of the story from no character in particular (pure narration))<|im_end|> <|im_start|>user (Your instruction on how the story should continue)<|im_end|> <|im_start|>text names= Bob (Continuation of the story from the Bob character)<|im_end|> ``` The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names. Pay attention to the following: - The `text` messages can (but don't have to have) `names`, names are used to indicate the "active" character during role-play. - There can be multiple subsequent message with a `text` role, especially if names are involved. - There can be multiple names attached to a message. - The format for names is `names= {{name[0]}}; {{name[1]}}`, beware of the spaces after `names=` and after the `;`. This spacing leads to most natural tokenization for the names. </details> While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing related tasks as well as general assistance. 
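To make the `text` role and `names=` conventions concrete, here is a hand-rolled formatting sketch. It is an approximation written from the description above; the linked [Python code](example/prompt/format.py) remains the authoritative formatter, and whether a newline should follow each `<|im_end|>` is an assumption here:

```python
from typing import List, Optional


def format_turn(role: str, content: str, names: Optional[List[str]] = None) -> str:
    """Format one turn of the Opus V1 extended ChatML described above."""
    header = role
    if names:
        # Note the space after "names=" and after each ";" for natural tokenization
        header += " names= " + "; ".join(names)
    return f"<|im_start|>{header}\n{content}<|im_end|>\n"


prompt = (
    format_turn("system", "(story description)")
    + format_turn("user", "(instruction on how the story should continue)")
    + format_turn("text", "(Alice's part of the story)", names=["Alice"])
)
print(prompt)
```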
Here's how you can prompt the model for the following tasks ### Steerable Story-writing and Role-playing: - [Story-writing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-writing) - [Role-playing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-role-playing) - Input: - System prompt: You provide story / role-play description, which consists of: - Plot description - Style description - Characters and their descriptions - Conversation turns: - Text / message turn: This represents part of the story or role play - Instruction: This tells the model what should happen next - Output: Continuation of the story / role-play. ### Story plot summarization - [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-plot-description) - Input: A story, or a few chapters of a story. - Output: A description of the story or chapters. #### Examples: <details> <summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 50 words</summary> <blockquote> Malygris, a powerful magician, summons his long-dead lover Nylissa from the grave using necromancy. However, upon seeing her, he realizes that she is not the same person he once knew, leading him to question the nature of love, memory, and the power of magic. </blockquote> </details> <details> <summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 200 words</summary> <blockquote> The story is a dark fantasy set in the city of Susran, where an ancient and powerful magician named Malygris dwells in a tower atop a conical hill. Malygris, who has mastered all forms of sorcery and necromancy, is plagued by a deep sense of ennui and melancholy. He seeks solace in the memory of Nylissa, a girl he once loved in his youth. One day, Malygris decides to summon Nylissa's spirit from the grave using his necromantic powers. He performs the necessary rituals and incantations, and Nylissa appears before him as a ghostly apparition. However, as Malygris gazes upon her, he becomes increasingly disillusioned. He notices subtle changes in her appearance and behavior that do not match his memories of her. He begins to question whether this is truly the same Nylissa he once knew, or if it is just another illusion conjured by his magic. As Malygris grapples with this uncertainty, he becomes increasingly despondent and disillusioned with life itself. He realizes that even his magic cannot bring back the past or make him truly happy. In the end, he dismisses Nylissa's spirit and returns to his lonely tower, resigned to his fate as an old and weary sorcerer. </blockquote> </details> <details> <summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 150 words</summary> <blockquote> The article discusses the recent human deaths from diseases transmitted by animals, specifically Alaskapox and bubonic plague. Dr. Leana Wen, an emergency physician and former Baltimore health commissioner, explains the various ways household pets can transmit diseases to humans. These include fecal-oral transmission, scratches and bites, and vector-borne diseases like ticks, fleas, and mosquitoes. Wen emphasizes the importance of vaccinating pets, maintaining cleanliness, and taking commonsense precautions to reduce disease transmission. 
She also advises that individuals who are immunocompromised, babies, and pregnant women should take additional care when interacting with animals.
</blockquote>
</details>

<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 50 words</summary>
<blockquote>
A doctor discusses the risks of diseases transmitted from animals to humans, focusing on household pets like cats and dogs. They emphasize the importance of vaccination, hygiene, and regular veterinary care to prevent these illnesses.
</blockquote>
</details>

### Story character description

- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-char-description)
- Input: A story, or a few chapters of a story, and a set of characters.
- Output: A description of the characters.

#### Examples:

<details>
<summary>Description of the Malygris character from <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a> in 50 words</summary>
<blockquote>
Malygris is an ancient and powerful magician, characterized by his long white beard, cold green eyes, and a violet robe. He is a master of necromancy and possesses a vast knowledge of forbidden magic. Despite his immense power, he is plagued by a deep sense of melancholy and ennui.
</blockquote>
</details>

<details>
<summary>Description of the prince character from <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a> in 50 words</summary>
<blockquote>
The prince is a devoted and determined individual, driven by his love for his late queen. He is described as a graceful youth in the beginning, but as he grows older, his dedication to the Pearl of Love transforms him into a grave and intent man. His sense of color and design evolves over time, reflecting his growing maturity and artistic vision.
</blockquote>
</details>

### Story style description

- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-style-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the style of the story.

<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a></summary>
<blockquote>
The story is written in a descriptive and evocative style, with a focus on the atmosphere and setting. The author uses vivid and detailed language to create a sense of place and time, and employs a variety of literary devices such as similes, metaphors, and allusions to enhance the mood and tone of the narrative. The sentence structure is varied, with a mix of short and long sentences that create a rhythmic flow to the prose.
</blockquote>
</details>

<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a></summary>
<blockquote>
The writing style is descriptive and evocative, with a focus on the beauty and grandeur of the Pearl of Love. The author uses vivid imagery and sensory details to create a rich and immersive atmosphere. The tone is reverential and contemplative, reflecting the prince's deep love for his queen and his dedication to creating a lasting monument to her memory.
</blockquote>
</details>

### Story description to chapters

- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
- Input: A brief plot description and the desired number of chapters.
- Output: A description for each chapter.

### And more...
## Sampling params

For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.

You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.

## Dataset

The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.

All story-writing and role-playing examples were based on human-written text.

![token count distribution](images/token_count_cum__token_bucket.png)

## Running the model

The model should be compatible with any software that supports the base model, but beware of prompting and tokenization.

I recommend using these model versions:

- 7B: [no quant (opus-v1.2-7b)](https://huggingface.co/dreamgen/opus-v1.2-7b)
- 34B: [no quant (opus-v1-34b)](https://huggingface.co/dreamgen/opus-v1-34b) or [awq (opus-v1-34b-awq)](https://huggingface.co/dreamgen/opus-v1-34b-awq)
- 70B: [no quant (opus-v1.2-70b)](https://huggingface.co/dreamgen/opus-v1.2-70b) or [awq (opus-v1.2-70b-awq)](https://huggingface.co/dreamgen/opus-v1.2-70b-awq)

### Running on DreamGen.com (free)

You can run the models on [dreamgen.com](https://dreamgen.com) for free — you can use the built-in UI for story-writing & role-playing, or use [the API](https://dreamgen.com/docs/api).

### Running Locally

- **Make sure your prompt is as close as possible to the Opus V1 format**
  - Regardless of which backend you use, it's important that you format your prompt well and that the tokenization works correctly.
  - [Read the prompt guide](https://dreamgen.com/docs/models/opus/v1)
  - [Read the prompt formatting code](example/prompt/format.py)
  - Make sure `<|im_start|>` and `<|im_end|>` are tokenized correctly
- **vLLM**
  - [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing): This is a simple interactive Google Colab for role-play with the 7B model; it should fit on the T4 GPU.
  - [Code](example/prompt/interactive.py): This is a simple script for interactive chat with one hard-coded scenario.
- **SillyTavern**
  - [Official SillyTavern documentation for DreamGen](https://docs.sillytavern.app/usage/api-connections/dreamgen/) -- applies to both the API and local models
  - SillyTavern (staging) comes with a built-in DreamGen preset for RP
  - Other presets can be found [here](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b/tree/main/configs/silly_tavern), v2 kindly provided by @MarinaraSpaghetti
  - Make sure to unselect `Skip special tokens`, otherwise it won't work
  - This is just an attempt at approximating the Opus V1 prompt; it won't be perfect
  - Character cards specifically rewritten for the built-in DreamGen preset:
    - [Seraphina](configs/silly_tavern/cards/Seraphina.png) (based on the default Seraphina card)
    - [Lara Lightland](configs/silly_tavern/cards/LaraLightland.png) (based on the card by Deffcolony)
- **LM Studio**
  - [Config](configs/lmstudio/preset.json)
  - Like ChatML, but with the "assistant" role renamed to "text".
  - **There's a bug** in LM Studio if you delete a message or click "Continue", [see here for details](https://discord.com/channels/1110598183144399058/1212665261128417280/1212665261128417280).
- **HuggingFace**
  - [Chat template](tokenizer_config.json#L51)
  - Like ChatML, but with the "assistant" role renamed to "text".
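Putting the recommended sampling parameters and the prompt format together, here is a hedged end-to-end sketch using vLLM (one of the backends listed above). It is not official example code: it assumes a vLLM release with `min_p` support and a GPU with enough memory for the unquantized model; the story description is a placeholder.

```python
# Hedged end-to-end sketch: Opus V1 prompt format + the recommended Min P
# sampling settings, served with vLLM. Assumes a vLLM version with `min_p`.
from vllm import LLM, SamplingParams

llm = LLM(model="dreamgen/opus-v1.2-llama-3-8b")

params = SamplingParams(
    temperature=0.8,
    min_p=0.1,
    max_tokens=300,
    stop=["<|im_end|>"],  # stop when the model closes its message
)

prompt = (
    "<|im_start|>system\n"
    "(Plot description, style description and characters here)<|im_end|>\n"
    "<|im_start|>user\n"
    "Alice and Bob meet at the harbor.<|im_end|>\n"
    "<|im_start|>text names= Alice\n"
)
print(llm.generate([prompt], params)[0].outputs[0].text)
```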
## Known Issues

- **34B repetition**:
  - The 34B sometimes gets stuck repeating the same word or synonyms. This seems to be a common problem across various Yi 34B fine-tunes.
- **GGUF**:
  - The tokenization might be messed up. Some users reported that `<|im_start|>` and `<|im_end|>` are tokenized as multiple tokens. Also, llama.cpp may not tokenize correctly (the Yi tokenizer is subtly different from the Llama 2 tokenizer).

## License

- This model is intended for personal use only; other uses are not permitted.
pollner/distilhubert-finetuned-ravdess
pollner
"2023-06-21T12:36:48Z"
1,536
2
transformers
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:xbgoose/ravdess", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
"2023-06-21T10:33:05Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xbgoose/ravdess metrics: - accuracy model-index: - name: distilhubert-finetuned-ravdess results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-ravdess This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the RAVDESS dataset. It achieves the following results on the evaluation set: - Loss: 0.2810 - Accuracy: 0.9236 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7599 | 1.0 | 162 | 1.7350 | 0.3264 | | 1.3271 | 2.0 | 324 | 1.1987 | 0.5972 | | 0.8845 | 3.0 | 486 | 0.8824 | 0.7639 | | 0.6083 | 4.0 | 648 | 0.5919 | 0.8403 | | 0.4952 | 5.0 | 810 | 0.4469 | 0.8611 | | 0.1386 | 6.0 | 972 | 0.3736 | 0.8681 | | 0.1028 | 7.0 | 1134 | 0.3645 | 0.8819 | | 0.053 | 8.0 | 1296 | 0.3079 | 0.9028 | | 0.0149 | 9.0 | 1458 | 0.2723 | 0.9236 | | 0.0154 | 10.0 | 1620 | 0.2810 | 0.9236 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.13.0 - Tokenizers 0.13.3
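Since the usage sections above are still marked "More information needed", the following is a hedged inference sketch (not from the original card) using the standard `transformers` audio-classification pipeline. The audio path is a placeholder; the pipeline decodes the file and resamples it to the model's expected input rate.

```python
# Hedged usage sketch (not part of the original card): emotion classification
# on a local audio file with the transformers audio-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="pollner/distilhubert-finetuned-ravdess",
)

# "speech.wav" is a placeholder path to a short speech clip.
predictions = classifier("speech.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```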
timm/convnext_small.fb_in22k_ft_in1k
timm
"2024-02-10T23:27:23Z"
1,535
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2201.03545", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-13T07:13:48Z"
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
- imagenet-22k
---

# Model card for convnext_small.fb_in22k_ft_in1k

A ConvNeXt image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.

## Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 50.2
  - GMACs: 8.7
  - Activations (M): 21.6
  - Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
  - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k

## Model Usage

### Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('convnext_small.fb_in22k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnext_small.fb_in22k_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 96, 56, 56])
    #  torch.Size([1, 192, 28, 28])
    #  torch.Size([1, 384, 14, 14])
    #  torch.Size([1, 768, 7, 7])
    print(o.shape)
```

### Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnext_small.fb_in22k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison

Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| | [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | | [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | | [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 | | [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | | [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | | [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | | [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 | | [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | | [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | | [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | | [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | | [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 | | [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | | 
[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | | [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | | [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | | [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | | [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | | [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | | [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | | [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | | [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | | [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | | [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | | [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | | [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | | [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | | [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | | [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | | [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | | [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | | [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | | [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 
|95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | | [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | | [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | | [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | | [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | | [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | | [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | | [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | | [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | | [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | | [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | | [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | ## Citation ```bibtex @article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
beomi/KoAlpaca-KoRWKV-1.5B
beomi
"2024-01-05T13:42:59Z"
1,535
6
transformers
[ "transformers", "pytorch", "rwkv", "text-generation", "KoRWKV", "KoAlpaca", "ko", "dataset:KoAlpaca-v1.0", "base_model:KoRWKV-1.5B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-05-25T08:46:43Z"
---
language:
- ko
license: apache-2.0
tags:
- KoRWKV
- KoAlpaca
datasets:
- KoAlpaca-v1.0
pipeline_tag: text-generation
base_model: KoRWKV-1.5B
model-index:
- name: KoAlpaca-KoRWKV-1.5B
  results: []
---

> 🚧 Note: this repo is for demo purposes only; the currently uploaded version is a fine-tuned KoRWKV checkpoint at roughly 20% of training (~31 billion tokens) 🚧

# beomi/KoAlpaca-KoRWKV-1.5B (v1.0)

This model is a fine-tuned version of [KoRWKV-1.5B](https://huggingface.co/beomi/KoRWKV-1.5B) on the KoAlpaca Dataset v1.0.

The dataset is available at the [KoAlpaca GitHub Repository](https://github.com/Beomi/KoAlpaca).

## Training procedure

### Train Device

- A100 80G x2
- ~2 hrs

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP fp16

### Framework versions

- Transformers 4.30.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
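The card does not include a usage example, so here is a hedged sketch using standard `transformers` APIs. The instruction template (`### 질문:` / `### 답변:`) is an assumption based on the KoAlpaca project's conventions; check the linked repository for the exact format.

```python
# Hedged usage sketch (not from the original card). The instruction template
# below is an assumption based on KoAlpaca's conventions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beomi/KoAlpaca-KoRWKV-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "### 질문: 한국의 수도는 어디인가요?\n\n### 답변:"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```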
lgaalves/gpt1
lgaalves
"2023-11-21T17:05:38Z"
1,535
3
transformers
[ "transformers", "pytorch", "safetensors", "openai-gpt", "text-generation", "en", "arxiv:1705.11168", "arxiv:1803.02324", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-09-25T14:34:55Z"
--- language: en license: mit --- # OpenAI GPT ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) ## Model Details **Model Description:** `openai-gpt` is a transformer-based language model created and released by OpenAI. The model is a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long range dependencies. - **Developed by:** Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever. See [associated research paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) and [GitHub repo](https://github.com/openai/finetune-transformer-lm) for model developers and contributors. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [MIT License](https://github.com/openai/finetune-transformer-lm/blob/master/LICENSE) - **Resources for more information:** - [Research Paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) - [OpenAI Blog Post](https://openai.com/blog/language-unsupervised/) - [GitHub Repo](https://github.com/openai/finetune-transformer-lm) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt ## How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='lgaalves/gpt1') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model,'he said, when i was finished.'ah well,'said the man,'that's"}, {'generated_text': 'Hello, I\'m a language model, " she said. \n she reached the bottom of the shaft and leaned a little further out. it was'}, {'generated_text': 'Hello, I\'m a language model, " she laughed. " we call that a\'white girl.\'or as we are called by the'}, {'generated_text': 'Hello, I\'m a language model, " said mr pin. " an\'the ones with the funny hats don\'t. 
" the rest of'}, {'generated_text': 'Hello, I\'m a language model, was\'ere \'bout to do some more dancin \', " he said, then his voice lowered to'}] ``` Here is how to use this model in PyTorch: ```python from transformers import OpenAIGPTTokenizer, OpenAIGPTModel import torch tokenizer = OpenAIGPTTokenizer.from_pretrained("lgaalves/gpt1") model = OpenAIGPTModel.from_pretrained("lgaalves/gpt1") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` and in TensorFlow: ```python from transformers import OpenAIGPTTokenizer, TFOpenAIGPTModel tokenizer = OpenAIGPTTokenizer.from_pretrained("lgaalves/gpt1") model = TFOpenAIGPTModel.from_pretrained("lgaalves/gpt1") inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") outputs = model(inputs) last_hidden_states = outputs.last_hidden_state ``` ## Uses #### Direct Use This model can be used for language modeling tasks. #### Downstream Use Potential downstream uses of this model include tasks that leverage language models. In the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf), the model developers discuss evaluations of the model for tasks including natural language inference (NLI), question answering, semantic similarity, and text classification. #### Misuse and Out-of-scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. ## Risks, Limitations and Biases #### Biases **CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by this model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='lgaalves/gpt1') >>> set_seed(42) >>> generator("The man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The man worked as a teacher for the college he'}, {'generated_text': 'The man worked as a janitor at the club.'}, {'generated_text': 'The man worked as a bodyguard in america. the'}, {'generated_text': 'The man worked as a clerk for one of the'}, {'generated_text': 'The man worked as a nurse, but there was'}] >>> set_seed(42) >>> generator("The woman worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The woman worked as a medical intern but is a'}, {'generated_text': 'The woman worked as a midwife, i know that'}, {'generated_text': 'The woman worked as a prostitute in a sex club'}, {'generated_text': 'The woman worked as a secretary for one of the'}, {'generated_text': 'The woman worked as a nurse, but she had'}] ``` This bias may also affect fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
#### Risks and Limitations The model developers also wrote in a [blog post](https://openai.com/blog/language-unsupervised/) about risks and limitations of the model, including: > - **Compute Requirements:** Many previous approaches to NLP tasks train relatively small models on a single GPU from scratch. Our approach requires an expensive pre-training step - 1 month on 8 GPUs. Luckily, this only has to be done once and we’re releasing our model so others can avoid it. It is also a large model (in comparison to prior work) and consequently uses more compute and memory — we used a 37-layer (12 block) Transformer architecture, and we train on sequences of up to 512 tokens. Most experiments were conducted on 4 and 8 GPU systems. The model does fine-tune to new tasks very quickly which helps mitigate the additional resource requirements. > - **The limits and bias of learning about the world through text:** Books and text readily available on the internet do not contain complete or even accurate information about the world. Recent work ([Lucy and Gauthier, 2017](https://arxiv.org/abs/1705.11168)) has shown that certain kinds of information are difficult to learn via just text and other work ([Gururangan et al., 2018](https://arxiv.org/abs/1803.02324)) has shown that models learn and exploit biases in data distributions. > - **Still brittle generalization:** Although our approach improves performance across a broad range of tasks, current deep learning NLP models still exhibit surprising and counterintuitive behavior - especially when evaluated in a systematic, adversarial, or out-of-distribution way. Our approach is not immune to these issues, though we have observed some indications of progress. Our approach shows improved lexical robustness over previous purely neural approaches to textual entailment. On the dataset introduced in Glockner et al. (2018) our model achieves 83.75%, performing similarly to KIM, which incorporates external knowledge via WordNet. ## Training #### Training Data The model developers [write](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf): > We use the BooksCorpus dataset ([Zhu et al., 2015](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Zhu_Aligning_Books_and_ICCV_2015_paper.pdf)) for training the language model. It contains over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance. Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information. #### Training Procedure The model developers [write](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf): > Our model largely follows the original transformer work [62]. We trained a 12-layer decoder-only transformer with masked self-attention heads (768 dimensional states and 12 attention heads). For the position-wise feed-forward networks, we used 3072 dimensional inner states. We used the Adam optimization scheme [27] with a max learning rate of 2.5e-4. The learning rate was increased linearly from zero over the first 2000 updates and annealed to 0 using a cosine schedule. We train for 100 epochs on minibatches of 64 randomly sampled, contiguous sequences of 512 tokens. Since layernorm [2] is used extensively throughout the model, a simple weight initialization of N (0, 0.02) was sufficient. 
We used a bytepair encoding (BPE) vocabulary with 40,000 merges [53] and residual, embedding, and attention dropouts with a rate of 0.1 for regularization. We also employed a modified version of L2 regularization proposed in [37], with w = 0.01 on all non bias or gain weights. For the activation function, we used the Gaussian Error Linear Unit (GELU) [18]. We used learned position embeddings instead of the sinusoidal version proposed in the original work. We use the ftfy library to clean the raw text in BooksCorpus, standardize some punctuation and whitespace, and use the spaCy tokenizer.

See the paper for further details and links to citations.

## Evaluation

The following evaluation information is extracted from the [associated blog post](https://openai.com/blog/language-unsupervised/). See the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) for further details.

#### Testing Data, Factors and Metrics

The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics:

- **Task:** Textual Entailment
- **Datasets:** [SNLI](https://huggingface.co/datasets/snli), [MNLI Matched](https://huggingface.co/datasets/glue), [MNLI Mismatched](https://huggingface.co/datasets/glue), [SciTail](https://huggingface.co/datasets/scitail), [QNLI](https://huggingface.co/datasets/glue), [RTE](https://huggingface.co/datasets/glue)
- **Metrics:** Accuracy

- **Task:** Semantic Similarity
- **Datasets:** [STS-B](https://huggingface.co/datasets/glue), [QQP](https://huggingface.co/datasets/glue), [MRPC](https://huggingface.co/datasets/glue)
- **Metrics:** Accuracy

- **Task:** Reading Comprehension
- **Datasets:** [RACE](https://huggingface.co/datasets/race)
- **Metrics:** Accuracy

- **Task:** Commonsense Reasoning
- **Datasets:** [ROCStories](https://huggingface.co/datasets/story_cloze), [COPA](https://huggingface.co/datasets/xcopa)
- **Metrics:** Accuracy

- **Task:** Sentiment Analysis
- **Datasets:** [SST-2](https://huggingface.co/datasets/glue)
- **Metrics:** Accuracy

- **Task:** Linguistic Acceptability
- **Datasets:** [CoLA](https://huggingface.co/datasets/glue)
- **Metrics:** Accuracy

- **Task:** Multi Task Benchmark
- **Datasets:** [GLUE](https://huggingface.co/datasets/glue)
- **Metrics:** Accuracy

#### Results

The model achieves the following results without any fine-tuning (zero-shot):

| Task     | TE | TE         | TE            | TE    | TE   | TE  | SS  | SS  | SS | RC   | CR       | CR   | SA   | LA   | MTB  |
|:--------:|:--:|:----------:|:-------------:|:-----:|:----:|:---:|:---:|:---:|:--:|:----:|:--------:|:----:|:----:|:----:|:----:|
| Dataset  |SNLI|MNLI Matched|MNLI Mismatched|SciTail| QNLI | RTE |STS-B| QQP |MRPC|RACE  |ROCStories|COPA  | SST-2| CoLA | GLUE |
|          |89.9| 82.1       | 81.4          |88.3   | 88.1 | 56.0|82.0 | 70.3|82.3|59.0  | 86.5     | 78.6 | 91.3 | 45.4 | 72.8 |

## Environmental Impact

The model developers [report that](https://openai.com/blog/language-unsupervised/):

> The total compute used to train this model was 0.96 petaflop days (pfs-days).

> 8 P600 GPU's * 30 days * 12 TFLOPS/GPU * 0.33 utilization = .96 pfs-days

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 8 P600 GPUs - **Hours used:** 720 hours (30 days) - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @article{radford2018improving, title={Improving language understanding by generative pre-training}, author={Radford, Alec and Narasimhan, Karthik and Salimans, Tim and Sutskever, Ilya and others}, year={2018}, publisher={OpenAI} } ``` APA: *Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.* ## Model Card Authors This model card was written by the Hugging Face team.
abdfajar707/rkp_llama3_f16_GGUF
abdfajar707
"2024-06-20T02:28:19Z"
1,534
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-20T02:19:35Z"
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---

# Uploaded model

- **Developed by:** abdfajar707
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/TiamaPY-1.1B-v24-GGUF
mradermacher
"2024-06-13T09:41:18Z"
1,533
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "base_model:Ramikan-BR/TiamaPY-1.1B-v24", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-13T09:36:23Z"
--- base_model: Ramikan-BR/TiamaPY-1.1B-v24 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Ramikan-BR/TiamaPY-1.1B-v24 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.Q2_K.gguf) | Q2_K | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.IQ3_XS.gguf) | IQ3_XS | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.Q3_K_S.gguf) | Q3_K_S | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.IQ3_M.gguf) | IQ3_M | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.Q3_K_L.gguf) | Q3_K_L | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.IQ4_XS.gguf) | IQ4_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.Q5_K_S.gguf) | Q5_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.Q5_K_M.gguf) | Q5_K_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.Q6_K.gguf) | Q6_K | 1.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/TiamaPY-1.1B-v24-GGUF/resolve/main/TiamaPY-1.1B-v24.f16.gguf) | f16 | 2.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you 
want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
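As an appendix to the usage pointers above, here is a hedged sketch (not part of the original card) of one way to run a single-file quant locally with `llama-cpp-python`. The filename matches the Q4_K_M entry in the table above; pick a different quant by swapping the filename.

```python
# Hedged local-inference sketch using llama-cpp-python. The filename matches
# the Q4_K_M row in the provided-quants table above.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/TiamaPY-1.1B-v24-GGUF",
    filename="TiamaPY-1.1B-v24.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```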
SpanBERT/spanbert-base-cased
SpanBERT
"2021-05-19T11:30:27Z"
1,532
6
transformers
[ "transformers", "pytorch", "jax", "bert", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
Entry not found
cahya/xlm-roberta-large-indonesian-NER
cahya
"2023-05-09T16:38:39Z"
1,532
8
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
Entry not found
mradermacher/NoNameBrand_1.1B-GGUF
mradermacher
"2024-06-18T18:27:00Z"
1,532
0
transformers
[ "transformers", "gguf", "en", "dataset:cerebras/SlimPajama-627B", "dataset:bigcode/starcoderdata", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:Deathsquad10/NoNameBrand_1.1B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-18T18:18:45Z"
--- base_model: Deathsquad10/NoNameBrand_1.1B datasets: - cerebras/SlimPajama-627B - bigcode/starcoderdata - HuggingFaceH4/ultrachat_200k - HuggingFaceH4/ultrafeedback_binarized language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Deathsquad10/NoNameBrand_1.1B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.Q2_K.gguf) | Q2_K | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.IQ3_XS.gguf) | IQ3_XS | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.Q3_K_S.gguf) | Q3_K_S | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.IQ3_M.gguf) | IQ3_M | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.Q3_K_L.gguf) | Q3_K_L | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.IQ4_XS.gguf) | IQ4_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.Q5_K_S.gguf) | Q5_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.Q5_K_M.gguf) | Q5_K_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.Q6_K.gguf) | Q6_K | 1.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/NoNameBrand_1.1B-GGUF/resolve/main/NoNameBrand_1.1B.f16.gguf) | f16 | 2.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests 
for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
iZELX1/Anything-V3-X
iZELX1
"2023-02-06T01:21:09Z"
1,531
18
diffusers
[ "diffusers", "Anything V3", "stable diffusion", "stable diffusion diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-01-24T09:17:22Z"
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image tags: - Anything V3 - stable diffusion - diffusers - stable diffusion diffusers ---
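The card ships with metadata only, so below is a hedged sketch of standard `diffusers` text-to-image usage for a Stable Diffusion checkpoint like this one. It assumes a CUDA GPU is available; the prompt is an arbitrary example.

```python
# Hedged usage sketch (the card itself has no usage section): standard
# diffusers text-to-image inference, assuming a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "iZELX1/Anything-V3-X", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "1girl, silver hair, starry night sky, highly detailed"  # arbitrary example
image = pipe(prompt).images[0]
image.save("anything_v3_x_sample.png")
```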
uaritm/multilingual_en_uk_pl_ru
uaritm
"2023-06-04T16:34:24Z"
1,531
1
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers - multilingual - en - ru - uk - pl", "uk", "en", "pl", "ru", "dataset:Helsinki-NLP/tatoeba_mt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2023-05-12T19:12:27Z"
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- multilingual
- en
- ru
- uk
- pl
license: apache-2.0
datasets:
- Helsinki-NLP/tatoeba_mt
metrics:
- mse
language:
- uk
- en
- pl
- ru
library_name: sentence-transformers
---

# uaritm/multilingual_en_uk_pl_ru

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

The model is used in a multilingual service that analyzes patient complaints to determine which medical specialty is needed: [Virtual General Practice](https://aihealth.site). You can test the quality and speed of the model there.

This model is an updated version of the model: [uaritm/multilingual_en_ru_uk](https://huggingface.co/uaritm/multilingual_en_ru_uk)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('uaritm/multilingual_en_uk_pl_ru')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('uaritm/multilingual_en_uk_pl_ru')
model = AutoModel.from_pretrained('uaritm/multilingual_en_uk_pl_ru')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated --->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=uaritm/multilingual_en_uk_pl_ru)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 50184 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.MSELoss.MSELoss`

Parameters of the fit()-Method:
```
{
    "epochs": 4,
    "evaluation_steps": 1000,
    "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "eps": 1e-06,
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 10000,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

```
@misc{Uaritm,
    title={sentence-transformers: Semantic similarity of medical texts},
    author={Vitaliy Ostashko},
    year={2023},
    url={https://aihealth.site},
}
```

<!--- Describe where people can find more information --->
TheBloke/Rose-20B-GPTQ
TheBloke
"2023-11-24T21:42:35Z"
1,531
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "instruct", "en", "base_model:tavtav/Rose-20B", "license:llama2", "autotrain_compatible", "4-bit", "gptq", "region:us" ]
text-generation
"2023-11-24T20:34:27Z"
--- base_model: tavtav/Rose-20B inference: false language: - en license: llama2 model_creator: Tav model_name: Rose 20B model_type: llama pipeline_tag: text-generation prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - text-generation-inference - instruct --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Rose 20B - GPTQ - Model creator: [Tav](https://huggingface.co/tavtav) - Original model: [Rose 20B](https://huggingface.co/tavtav/Rose-20B) <!-- description start --> # Description This repo contains GPTQ model files for [Tav's Rose 20B](https://huggingface.co/tavtav/Rose-20B). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Rose-20B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Rose-20B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Rose-20B-GGUF) * [Tav's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/tavtav/Rose-20B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! 
<!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Rose-20B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 10.52 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Rose-20B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 10.89 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Rose-20B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 12.04 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Rose-20B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.41 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. 
|
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Rose-20B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 20.35 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Rose-20B-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 9.51 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Rose-20B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 20.80 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches

### In text-generation-webui

To download from the `main` branch, enter `TheBloke/Rose-20B-GPTQ` in the "Download model" box.

To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Rose-20B-GPTQ:gptq-4bit-128g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `Rose-20B-GPTQ`:

```shell
mkdir Rose-20B-GPTQ
huggingface-cli download TheBloke/Rose-20B-GPTQ --local-dir Rose-20B-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir Rose-20B-GPTQ
huggingface-cli download TheBloke/Rose-20B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Rose-20B-GPTQ --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Rose-20B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Rose-20B-GPTQ --local-dir Rose-20B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Rose-20B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Rose-20B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Rose-20B-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Rose-20B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Rose-20B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. 
### Instruction:
{prompt}

### Response:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->

<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```

If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```

### Example Python code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/Rose-20B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.

For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Tav's Rose 20B

<h1 style="text-align: center">Rose-20B</h1>
<center><img src="https://files.catbox.moe/rze9c9.png" alt="roseimage" width="350" height="350"></center>
<center><i>Image sourced by Shinon</i></center>
<h2 style="text-align: center">Experimental Frankenmerge Model</h2>

## GGUF

[GGUF version here](https://huggingface.co/tavtav/Rose-20B-GGUF)

## Model Details

A Frankenmerge with [Thorns-13B](https://huggingface.co/CalderaAI/13B-Thorns-l2) by CalderaAI and [Noromaid-13B-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1) by NeverSleep (IkariDev and Undi). This recipe was proposed by Trappu and the layer distribution recipe was made by Undi. I thank them for sharing their knowledge with me. This model should be very good in any roleplay scenario. I called the model "Rose" because it was a fitting name for a "thorny maid".

The recommended format to use is Alpaca.

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

Feel free to share any other prompts that work. This model is very robust.

## Justification for its Existence

Potential base model for finetune experiments using our dataset to create Pygmalion-20B. Due to the already high capabilities, adding our dataset will mesh well with how the model performs.

Potential experimentation with merging with other 20B Frankenmerge models.
## Model Recipe

```
slices:
  - sources:
    - model: Thorns-13B
      layer_range: [0, 16]
  - sources:
    - model: Noromaid-13B
      layer_range: [8, 24]
  - sources:
    - model: Thorns-13B
      layer_range: [17, 32]
  - sources:
    - model: Noromaid-13B
      layer_range: [25, 40]
merge_method: passthrough
dtype: float16
```

Again, credits to [Undi](https://huggingface.co/Undi95) for the recipe.

## Reception

The model was given to a handful of members in the PygmalionAI Discord community for testing. A strong majority really enjoyed the model, with only a couple giving it a merely passing grade. Since our community has high standards for roleplaying models, I was surprised at the positive reception.

## Contact

Send me a message on Discord (username: tav) if you want to talk about the model. I'm always open to receiving comments.
BK-Lee/Meteor-Mamba
BK-Lee
"2024-05-27T13:08:13Z"
1,531
8
transformers
[ "transformers", "safetensors", "mamba", "text-generation", "arxiv:2405.15574", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-24T11:18:36Z"
---
license: mit
---

You should follow these two steps:

1. Install the libraries and download the GitHub package [Meteor](https://github.com/ByungKwanLee/Meteor)

```bash
bash install
pip install -r requirements.txt
```

2. Run the file `demo.py` in [Meteor](https://github.com/ByungKwanLee/Meteor). You can choose the prompt type: text_only or with_image!

Enjoy Meteor!

```python
import time
import torch
from config import *
from PIL import Image
from utils.utils import *
import torch.nn.functional as F
from meteor.load_mmamba import load_mmamba
from meteor.load_meteor import load_meteor
from torchvision.transforms.functional import pil_to_tensor

# User prompt
prompt_type='with_image' # text_only / with_image
img_path='figures/demo.png'
question='Provide the detail of the image'

# loading meteor model
mmamba = load_mmamba('BK-Lee/Meteor-Mamba').cuda()
meteor, tok_meteor = load_meteor('BK-Lee/Meteor-MLM', bits=4)

# freeze model
freeze_model(mmamba)
freeze_model(meteor)

# Device
device = torch.cuda.current_device()

# prompt type -> input prompt
image_token_number = int((490/14)**2)
if prompt_type == 'with_image':
    # Image Load
    image = F.interpolate(pil_to_tensor(Image.open(img_path).convert("RGB")).unsqueeze(0), size=(490, 490), mode='bicubic').squeeze(0)
    inputs = [{'image': image, 'question': question}]
elif prompt_type=='text_only':
    inputs = [{'question': question}]

# Generate
with torch.inference_mode():

    # Meteor Mamba
    mmamba_inputs = mmamba.eval_process(inputs=inputs, tokenizer=tok_meteor, device=device, img_token_number=image_token_number)

    if 'image' in mmamba_inputs.keys():
        clip_features = meteor.clip_features(mmamba_inputs['image'])
        mmamba_inputs.update({"image_features": clip_features})
    mmamba_outputs = mmamba(**mmamba_inputs)

    # Meteor
    meteor_inputs = meteor.eval_process(inputs=inputs, data='demo', tokenizer=tok_meteor, device=device, img_token_number=image_token_number)
    if 'image' in mmamba_inputs.keys():
        meteor_inputs.update({"image_features": clip_features})
    meteor_inputs.update({"tor_features": mmamba_outputs.tor_features})

    # Generation
    generate_ids = meteor.generate(**meteor_inputs, do_sample=True, max_new_tokens=128, top_p=0.95, temperature=0.9, use_cache=True)

# Text decoding
decoded_text = tok_meteor.batch_decode(generate_ids, skip_special_tokens=True)[0].split('assistant\n')[-1].split('[U')[0].strip()
print(decoded_text)

# Paper arxiv.org/abs/2405.15574
```
MaziyarPanahi/mergekit-slerp-fcxoywi-GGUF
MaziyarPanahi
"2024-06-18T21:17:22Z"
1,531
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:Equall/Saul-Base", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-fcxoywi" ]
text-generation
"2024-06-18T20:54:23Z"
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- conversational
- base_model:HuggingFaceH4/zephyr-7b-beta
- base_model:Equall/Saul-Base
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-fcxoywi-GGUF
base_model: mergekit-community/mergekit-slerp-fcxoywi
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/mergekit-slerp-fcxoywi-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-fcxoywi-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-fcxoywi](https://huggingface.co/mergekit-community/mergekit-slerp-fcxoywi)

## Description

[MaziyarPanahi/mergekit-slerp-fcxoywi-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-fcxoywi-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-fcxoywi](https://huggingface.co/mergekit-community/mergekit-slerp-fcxoywi). A minimal usage sketch is included at the end of this card.

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
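### Example: running a GGUF quant

As a quick illustration of the workflow described above, here is a minimal sketch using `llama-cpp-python`. The quant filename below is an assumption for illustration only; substitute whichever quant file from this repo suits your hardware.

```python
# Minimal sketch (assumed usage, not part of the original card):
# download one GGUF quant from this repo and run a short completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/mergekit-slerp-fcxoywi-GGUF",
    filename="mergekit-slerp-fcxoywi.Q4_K_M.gguf",  # assumed filename; check the repo file list
)

llm = Llama(model_path=model_path, n_ctx=2048)
output = llm("Explain the GGUF file format in one sentence.", max_tokens=64)
print(output["choices"][0]["text"])
```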
zhihan1996/DNABERT-S
zhihan1996
"2024-02-15T05:01:57Z"
1,530
3
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "custom_code", "license:apache-2.0", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
"2024-02-15T05:00:56Z"
--- license: apache-2.0 ---
sryab2001/llama3-8b-cosmic-fusion-dynamics-f16-gguf
sryab2001
"2024-06-28T06:05:56Z"
1,530
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-28T05:53:34Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** sryab2001 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
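## Example: loading the base model with Unsloth

For readers who want to reproduce this kind of fine-tune, here is a minimal, hypothetical loading sketch with Unsloth. The sequence length and other values are illustrative assumptions, not the settings used for this model.

```python
# Minimal sketch of loading the 4-bit base model with Unsloth for fine-tuning.
# max_seq_length is an illustrative assumption.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    dtype=None,          # auto-detect the best dtype for the GPU
    load_in_4bit=True,
)
```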
gchhablani/bert-base-cased-finetuned-sst2
gchhablani
"2021-09-20T09:09:06Z"
1,529
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "fnet-bert-base-comparison", "en", "dataset:glue", "arxiv:2105.03824", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-sst2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE SST2
      type: glue
      args: sst2
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9231651376146789
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-cased-finetuned-sst2

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3649
- Accuracy: 0.9232

The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:

```bash
#!/usr/bin/bash

python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name sst2 \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-sst2 \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step  | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.233         | 1.0   | 4210  | 0.9174   | 0.2841          |
| 0.1261        | 2.0   | 8420  | 0.9278   | 0.3310          |
| 0.0768        | 3.0   | 12630 | 0.9232   | 0.3649          |

### Framework versions

- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
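### Example usage

For completeness, here is a minimal inference sketch (not part of the original card). The exact label names returned depend on the checkpoint's config; for GLUE SST-2 they conventionally map to negative/positive.

```python
# Minimal sketch: sentiment classification with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gchhablani/bert-base-cased-finetuned-sst2",
)
print(classifier("This movie was absolutely wonderful!"))
# e.g. [{'label': 'positive', 'score': 0.99...}] (label names depend on the config)
```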
bofenghuang/vigostral-7b-chat
bofenghuang
"2023-10-25T13:00:06Z"
1,529
28
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "LLM", "finetuned", "conversational", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-29T15:04:09Z"
---
license: apache-2.0
language: fr
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.7
tags:
- LLM
- finetuned
---

# Vigostral-7B-Chat: A French chat LLM

***Preview*** of Vigostral-7B-Chat, a new addition to the Vigogne LLMs family, fine-tuned on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).

For more information, please visit the [Github repository](https://github.com/bofenghuang/vigogne).

**License**: A significant portion of the training data is distilled from GPT-3.5-Turbo and GPT-4, so please use it cautiously to avoid any violations of OpenAI's [terms of use](https://openai.com/policies/terms-of-use).

## Prompt Template

We used a prompt template adapted from the chat format of Llama-2.

You can apply this formatting using the [chat template](https://huggingface.co/docs/transformers/main/chat_templating) through the `apply_chat_template()` method.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bofenghuang/vigostral-7b-chat")

conversation = [
    {"role": "user", "content": "Bonjour ! Comment ça va aujourd'hui ?"},
    {"role": "assistant", "content": "Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prêt à vous aider. Comment puis-je vous assister aujourd'hui ?"},
    {"role": "user", "content": "Quelle est la hauteur de la Tour Eiffel ?"},
    {"role": "assistant", "content": "La Tour Eiffel mesure environ 330 mètres de hauteur."},
    {"role": "user", "content": "Comment monter en haut ?"},
]

print(tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True))
```

You will get

```
<s>[INST] <<SYS>>
Vous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez.
<</SYS>>

Bonjour ! Comment ça va aujourd'hui ? [/INST] Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prêt à vous aider. Comment puis-je vous assister aujourd'hui ? </s>[INST] Quelle est la hauteur de la Tour Eiffel ? [/INST] La Tour Eiffel mesure environ 330 mètres de hauteur. </s>[INST] Comment monter en haut ? [/INST]
```

## Usage

### Inference using the quantized versions

The quantized versions of this model are generously provided by [TheBloke](https://huggingface.co/TheBloke)!

- AWQ for GPU inference: [TheBloke/Vigostral-7B-Chat-AWQ](https://huggingface.co/TheBloke/Vigostral-7B-Chat-AWQ)
- GPTQ for GPU inference: [TheBloke/Vigostral-7B-Chat-GPTQ](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GPTQ)
- GGUF for CPU+GPU inference: [TheBloke/Vigostral-7B-Chat-GGUF](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF)

These versions facilitate testing and development with various popular frameworks, including [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), [vLLM](https://github.com/vllm-project/vllm), [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa), [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), and more.
### Inference using the unquantized model with 🤗 Transformers

```python
from typing import Dict, List, Optional
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TextStreamer

model_name_or_path = "bofenghuang/vigostral-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto")

streamer = TextStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)


def chat(
    query: str,
    history: Optional[List[Dict]] = None,
    temperature: float = 0.7,
    top_p: float = 1.0,
    top_k: float = 0,
    repetition_penalty: float = 1.1,
    max_new_tokens: int = 1024,
    **kwargs,
):
    if history is None:
        history = []

    history.append({"role": "user", "content": query})

    input_ids = tokenizer.apply_chat_template(history, return_tensors="pt").to(model.device)
    input_length = input_ids.shape[1]

    generated_outputs = model.generate(
        input_ids=input_ids,
        generation_config=GenerationConfig(
            temperature=temperature,
            do_sample=temperature > 0.0,
            top_p=top_p,
            top_k=top_k,
            repetition_penalty=repetition_penalty,
            max_new_tokens=max_new_tokens,
            pad_token_id=tokenizer.eos_token_id,
            **kwargs,
        ),
        streamer=streamer,
        return_dict_in_generate=True,
    )

    generated_tokens = generated_outputs.sequences[0, input_length:]
    generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)

    history.append({"role": "assistant", "content": generated_text})

    return generated_text, history


# 1st round
response, history = chat("Un escargot parcourt 100 mètres en 5 heures. Quelle est sa vitesse ?", history=None)

# 2nd round
response, history = chat("Quand il peut dépasser le lapin ?", history=history)

# 3rd round
response, history = chat("Écris une histoire imaginative qui met en scène une compétition de course entre un escargot et un lapin.", history=history)
```

You can also use the Google Colab Notebook provided below.

<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_chat.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

### Inference using the unquantized model with vLLM

Set up an OpenAI-compatible server with the following command:

```bash
# Install vLLM
# This may take 5-10 minutes.
# pip install vllm

# Start server for Vigostral-Chat models
python -m vllm.entrypoints.openai.api_server --model bofenghuang/vigostral-7b-chat

# List models
# curl http://localhost:8000/v1/models
```

You can also use the docker image provided below.

```bash
# Launch inference engine
docker run --gpus '"device=0"' \
  -e HF_TOKEN=$HF_TOKEN -p 8000:8000 \
  ghcr.io/bofenghuang/vigogne/vllm:latest \
  --host 0.0.0.0 \
  --model bofenghuang/vigostral-7b-chat

# Launch inference engine on multiple GPUs (4 here)
docker run --gpus all \
  -e HF_TOKEN=$HF_TOKEN -p 8000:8000 \
  ghcr.io/bofenghuang/vigogne/vllm:latest \
  --host 0.0.0.0 \
  --tensor-parallel-size 4 \
  --model bofenghuang/vigostral-7b-chat

# Launch inference engine using the quantized AWQ version
# Note: only supports Ampere or newer GPUs
docker run --gpus '"device=0"' \
  -e HF_TOKEN=$HF_TOKEN -p 8000:8000 \
  ghcr.io/bofenghuang/vigogne/vllm:latest \
  --host 0.0.0.0 \
  --quantization awq \
  --model TheBloke/Vigostral-7B-Chat-AWQ
```

Afterward, you can query the model using the openai Python package.

```python
import openai

# Modify OpenAI's API key and API base to use vLLM's API server.
openai.api_key = "EMPTY" openai.api_base = "http://localhost:8000/v1" # First model models = openai.Model.list() model = models["data"][0]["id"] query_message = "Parle-moi de toi-même." # Chat completion API chat_completion = openai.ChatCompletion.create( model=model, messages=[ {"role": "user", "content": query_message}, ], max_tokens=1024, temperature=0.7, ) print("Chat completion results:", chat_completion) ``` ## Limitations Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
ILKT/2024-06-19_22-23-38
ILKT
"2024-06-20T07:39:59Z"
1,529
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "feature-extraction", "sentence-similarity", "mteb", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-06-20T01:28:44Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb language: - en - pl model-index: - name: 2024-06-19_22-23-38 results: - dataset: config: pl name: MTEB MassiveIntentClassification revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 0.17481506388702087 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 0.17934087555336942 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 0.2524882313382649 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 0.25632070831283815 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 0.5829000000000001 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 0.5339335180055402 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 0.2261133603238866 task: type: Classification - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 0.2702783300198807 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 0.6494063133507095 task: type: Classification - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 0.10324855900671515 task: type: Clustering - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: - type: ap value: 0.6133987209198595 task: type: PairClassification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: - type: ap value: 0.5633378978327882 task: type: PairClassification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: - type: ap value: 0.8512179742920133 task: type: PairClassification - dataset: config: pl name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.004925677428811457 task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: - type: cosine_spearman value: 
0.5492628410665314 task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: - type: cosine_spearman value: 0.5146801017173235 task: type: STS - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: - type: cosine_spearman value: 0.5466241716904018 task: type: STS - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: - type: cosine_spearman value: 0.648209901788658 task: type: STS - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 0.2882972113767599 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 0.2948625805337635 task: type: Clustering ---
tavtav/Rose-20B
tavtav
"2023-11-30T01:20:26Z"
1,528
32
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "instruct", "en", "license:llama2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-11-22T16:59:56Z"
---
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
- instruct
license: llama2
---

<h1 style="text-align: center">Rose-20B</h1>
<center><img src="https://files.catbox.moe/rze9c9.png" alt="roseimage" width="350" height="350"></center>
<center><i>Image sourced by Shinon</i></center>
<h2 style="text-align: center">Experimental Frankenmerge Model</h2>

## Other Formats

[GGUF](https://huggingface.co/TheBloke/Rose-20B-GGUF)
[GPTQ](https://huggingface.co/TheBloke/Rose-20B-GPTQ)
[AWQ](https://huggingface.co/TheBloke/Rose-20B-AWQ)
[exl2](https://huggingface.co/royallab/Rose-20B-exl2)

## Model Details

A Frankenmerge with [Thorns-13B](https://huggingface.co/CalderaAI/13B-Thorns-l2) by CalderaAI and [Noromaid-13B-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1) by NeverSleep (IkariDev and Undi). This recipe was proposed by Trappu and the layer distribution recipe was made by Undi. I thank them for sharing their knowledge with me. This model should be very good in any roleplay scenario. I called the model "Rose" because it was a fitting name for a "thorny maid".

The recommended format to use is Alpaca.

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

Feel free to share any other prompts that work. This model is very robust.

**Warning: This model uses significantly more VRAM, as the larger KV cache increases the memory required for the context window.**

## Justification for its Existence

Potential base model for finetune experiments using our dataset to create Pygmalion-20B. Due to the already high capabilities, adding our dataset will mesh well with how the model performs.

Potential experimentation with merging with other 20B Frankenmerge models.

## Model Recipe

```
slices:
  - sources:
    - model: Thorns-13B
      layer_range: [0, 16]
  - sources:
    - model: Noromaid-13B
      layer_range: [8, 24]
  - sources:
    - model: Thorns-13B
      layer_range: [17, 32]
  - sources:
    - model: Noromaid-13B
      layer_range: [25, 40]
merge_method: passthrough
dtype: float16
```

Again, credits to [Undi](https://huggingface.co/Undi95) for the recipe.

## Reception

The model was given to a handful of members in the PygmalionAI Discord community for testing. A strong majority really enjoyed the model, with only a couple giving it a merely passing grade. Since our community has high standards for roleplaying models, I was surprised at the positive reception.

## Contact

Send me a message on Discord (username: tav) if you want to talk about the model. I'm always open to receiving comments.
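## Example Usage

A minimal generation sketch with 🤗 Transformers, using the Alpaca format recommended above. The sampling settings and instruction text are illustrative assumptions, not official recommendations.

```python
# Minimal sketch (assumed usage): generate with Rose-20B via the Alpaca format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tavtav/Rose-20B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short greeting in character as a thorny maid.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```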
failspy/Llama-3-8B-Instruct-abliterated-GGUF
failspy
"2024-05-07T17:37:34Z"
1,528
27
transformers
[ "transformers", "gguf", "endpoints_compatible", "region:us" ]
null
"2024-05-07T16:12:12Z"
---
library_name: transformers
tags: []
---

# Llama-3-8B-Instruct-abliterated Model Card

This is meta-llama/Llama-3-8B-Instruct with orthogonalized bfloat16 safetensor weights, generated with the methodology described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to understand more.

TL;DR: this model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request; it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original 8B instruct model, just with the strongest refusal direction orthogonalized out.

## GGUF quants

Uploaded quants:

- fp16 (in main) - good for converting to other platforms or getting the quantization you actually want; not recommended for inference, but obviously highest quality
- q8_0 (in main)
- q4_k (in main)

## Quirkiness awareness notice

This model may come with interesting quirks, as I obviously haven't extensively tested it and the methodology is so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what this orthogonalization has in the way of side effects. The code I used to generate it (and my published 'Kappa-3' model, which is just Phi-3 with the same methodology applied) is available in the Python notebook [ortho_cookbook.ipynb](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb).

If you manage to develop further improvements, please share! This is really the most primitive way to use ablation, but there are other possibilities that I believe are as-yet unexplored.
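## Orthogonalization sketch

To make "orthogonalized out" concrete, below is a minimal sketch of the rank-1 projection involved. This illustrates the general technique only; it is not the exact code used to produce these weights, and finding the refusal direction itself requires the activation-difference procedure described in the linked post and notebook.

```python
# Minimal sketch: remove the component of each weight row along a direction d.
# This is the generic rank-1 orthogonalization, not the exact production code.
import torch

def orthogonalize_rows(W: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Return W' = W - (W d) d^T, so every row of W' is orthogonal to d."""
    d = direction / direction.norm()   # unit-normalize the direction
    return W - torch.outer(W @ d, d)   # subtract the rank-1 component

# Sanity check: rows of the result are orthogonal to d.
W = torch.randn(8, 16)
d = torch.randn(16)
W2 = orthogonalize_rows(W, d)
assert torch.allclose(W2 @ (d / d.norm()), torch.zeros(8), atol=1e-5)
```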
rozek/LLaMA-2-7B-32K_GGUF
rozek
"2023-08-31T01:04:00Z"
1,527
9
null
[ "gguf", "llama", "llama-2", "facebook", "meta", "text-generation-inference", "quantized", "32k-context", "togethercomputer", "text-generation", "en", "license:llama2", "region:us" ]
text-generation
"2023-08-28T05:54:49Z"
--- license: llama2 tags: - llama - llama-2 - facebook - meta - text-generation-inference - quantized - gguf - 32k-context - togethercomputer language: - en pipeline_tag: text-generation --- # LLaMA-2-7B-32K_GGUF # [Together Computer, Inc.](https://together.ai/) has released [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K), a model based on [Meta AI](https://ai.meta.com)'s [LLaMA-2-7B](https://huggingface.co/meta-llama/Llama-2-7b), but fine-tuned for context lengths up to 32K using "Position Interpolation" and "Rotary Position Embeddings" (RoPE). While the current version of [llama.cpp](https://github.com/ggerganov/llama.cpp) already supports such large context lengths, it requires quantized files in the new GGUF format - and that's where this repo comes in: it contains the following quantizations of the original weights from Together's fined-tuned model * [Q2_K](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q2_K.gguf) * [Q3_K_S](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q3_K_S.gguf), [Q3_K_M](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q3_K_M.gguf) (aka Q3_K) and [Q3_K_L](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q3_K_L.gguf) * [Q4_0](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q4_0.gguf), [Q4_1](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q4_1.gguf), [Q4_K_S](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q4_K_S.gguf) and [Q4_K_M](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q4_K_M.gguf) (aka Q4_K) * [Q5_0](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q5_0.gguf), [Q5_1](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q5_1.gguf), [Q5_K_S](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q5_K_S.gguf) and [Q5_K_M](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q5_K_M.gguf) (aka Q5_K) * [Q6_K](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q6_K.gguf), * [Q8_0](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-Q8_0.gguf) and * [F16](https://huggingface.co/rozek/LLaMA-2-7B-32K_GGUF/blob/main/LLaMA-2-7B-32K-f16.gguf) (unquantized) > Nota bene: while RoPE makes inferences with large contexts possible, you still need an awful lot of RAM > when doing so. And since "32K" does not mean that you always have to use a context size of 32768 (only that > the model was fine-tuned for that size), it is recommended that you keep your context as small as possible > If you need quantizations for Together Computer's > [Llama-2-7B-32K-Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct) > model, then look for > [LLaMA-2-7B-32K-Instruct_GGUF](https://huggingface.co/rozek/LLaMA-2-7B-32K-Instruct_GGUF) ## How Quantization was done ## Since the author does not want arbitrary Python stuff to loiter on his computer, the quantization was done using [Docker](https://www.docker.com/). Assuming that you have the [Docker Desktop](https://www.docker.com/products/docker-desktop/) installed on your system and also have a basic knowledge of how to use it, you may just follow the instructions shown below in order to generate your own quantizations: > Nota bene: you will need 30+x GB of free disk space, at least - depending on your quantization 1. 
create a new folder called `llama.cpp_in_Docker`<br>this folder will later be mounted into the Docker container and store the quantization results
2. download the weights for the fine-tuned LLaMA-2 model from [Hugging Face](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) into a subfolder of `llama.cpp_in_Docker` (let's call the new folder `LLaMA-2-7B-32K`)
3. within the <u>Docker Desktop</u>, search for and download a `basic-python` image - just use one of the most popular ones
4. from a <u>terminal session on your host computer</u> (i.e., not a Docker container!), start a new container for the downloaded image which mounts the folder we created before:<br>
```
docker run --rm \
  -v ./llama.cpp_in_Docker:/llama.cpp \
  -t basic-python /bin/bash
```
(you may have to adjust the path to your local folder)
5. back in the <u>Docker Desktop</u>, open the "Terminal" tab of the started container and enter the following commands (one after the other - copying the complete list and pasting it into the terminal as a whole does not always seem to work properly):<br>
```
apt update
apt-get install software-properties-common -y
apt-get update
apt-get install g++ git make -y
cd /llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
```
6. now open the "Files" tab and navigate to the file `/llama.cpp/llama.cpp/Makefile`, right-click on it and choose "Edit file"
7. search for `aarch64`, and - in the line found (which looks like `ifneq ($(filter aarch64%,$(UNAME_M)),)`) - change `ifneq` to `ifeq`
8. save your change using the disk icon in the upper right corner of the editor pane and open the "Terminal" tab again
9. now enter the following commands:<br>
```
make
python3 -m pip install -r requirements.txt
python3 convert.py ../LLaMA-2-7B-32K
```
10. you are now ready to run the actual quantization, e.g., using<br>
```
./quantize ../LLaMA-2-7B-32K/ggml-model-f16.gguf \
  ../LLaMA-2-7B-32K/LLaMA-2-7B-32K-Q4_0.gguf Q4_0
```
11. run any quantizations you need and stop the container when finished (the container will automatically be deleted but the generated files will remain available on your host computer)
12. the `basic-python` image may also be deleted (manually) unless you plan to use it again in the near future

You are now free to move the quantization results to where you need them and run inferences with context lengths up to 32K, depending on the amount of memory you have available - long contexts need a lot of RAM (a minimal inference sketch follows after the license section below).

## License ##

Concerning the license(s):

* the [original model](https://ai.meta.com/llama/) (from Meta AI) was released under a rather [permissive license](https://ai.meta.com/llama/license/)
* the fine-tuned model from Together Computer uses the [same license](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K/blob/main/README.md)
* as a consequence, this repo does so as well
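## Inference Example ##

As a follow-up to the note above about running inferences, here is a minimal sketch using `llama-cpp-python` (assumed to be installed separately). The context size shown is illustrative - size it to your available RAM rather than jumping straight to 32768.

```python
# Minimal sketch: load a quant produced by the steps above with an enlarged
# context window. 8192 is an illustrative value; the model was fine-tuned for
# up to 32768, but larger contexts need substantially more RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./LLaMA-2-7B-32K/LLaMA-2-7B-32K-Q4_0.gguf",
    n_ctx=8192,
)
result = llm("Summarize the following document:\n...", max_tokens=256)
print(result["choices"][0]["text"])
```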
Lewdiculous/llama3-8B-aifeifei-1.0-GGUF-IQ-Imatrix
Lewdiculous
"2024-06-06T14:49:03Z"
1,527
1
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
"2024-06-06T05:18:54Z"
--- license: apache-2.0 --- [Model request #39](https://huggingface.co/Lewdiculous/Model-Requests/discussions/39). This model has a narrow use case in mind. Read the original description. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/nndcfLvMAj4q6Egrkavx2.png)
mradermacher/ColoristLlama-v1-GGUF
mradermacher
"2024-06-14T19:32:16Z"
1,527
0
transformers
[ "transformers", "gguf", "en", "base_model:GS7776/ColoristLlama-v1", "endpoints_compatible", "region:us" ]
null
"2024-06-14T19:18:43Z"
--- base_model: GS7776/ColoristLlama-v1 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/GS7776/ColoristLlama-v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.Q2_K.gguf) | Q2_K | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.IQ3_XS.gguf) | IQ3_XS | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.Q3_K_S.gguf) | Q3_K_S | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.IQ3_M.gguf) | IQ3_M | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.Q3_K_L.gguf) | Q3_K_L | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.IQ4_XS.gguf) | IQ4_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.Q5_K_S.gguf) | Q5_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.Q5_K_M.gguf) | Q5_K_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.Q6_K.gguf) | Q6_K | 1.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/ColoristLlama-v1-GGUF/resolve/main/ColoristLlama-v1.f16.gguf) | f16 | 2.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
akreal/tiny-random-t5
akreal
"2021-08-18T15:08:13Z"
1,525
0
transformers
[ "transformers", "pytorch", "tf", "t5", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
"2022-03-02T23:29:05Z"
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-t5 Changes: use old format for `pytorch_model.bin`.
webbigdata/ALMA-7B-Ja-V2
webbigdata
"2024-03-04T08:50:47Z"
1,525
15
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ja", "en", "de", "is", "zh", "cs", "arxiv:2309.11674", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-21T07:49:26Z"
---
inference: false
language:
- ja
- en
- de
- is
- zh
- cs
license: llama2
---

# New Translation model released.

[C3TR-Adapter](https://huggingface.co/webbigdata/C3TR-Adapter) is the QLoRA adapter for google/gemma-7b.
Despite the 4-bit quantization, the GPU memory requirement has increased to 8.1 GB. However, it is possible to run it with the free version of Colab and the performance is much improved!

# webbigdata/ALMA-7B-Ja-V2

ALMA-7B-Ja-V2は日本語から英語、英語から日本語の翻訳が可能な機械翻訳モデルです。
The ALMA-7B-Ja-V2 is a machine translation model capable of translating from Japanese to English and English to Japanese.

ALMA-7B-Ja-V2は以前のモデル([ALMA-7B-Ja](https://huggingface.co/webbigdata/ALMA-7B-Ja))に更に学習を追加し、性能を向上しています。
The ALMA-7B-Ja-V2 adds further training to the previous model ([ALMA-7B-Ja](https://huggingface.co/webbigdata/ALMA-7B-Ja)) and improves performance.

日本語と英語間に加えて、このモデルは以下の言語間の翻訳能力も持っていますが、日英、英日翻訳を主目的にしています。
In addition to translation between Japanese and English, this model also has the ability to translate between the following languages, but it is primarily intended for Japanese-English and English-Japanese translation.

- ドイツ語 German(de) and 英語 English(en)
- 中国語 Chinese(zh) and 英語 English(en)
- アイスランド語 Icelandic(is) and 英語 English(en)
- チェコ語 Czech(cs) and 英語 English(en)

# ベンチマーク結果 Benchmark Results

以下の三種の指標を使って翻訳性能を確認しました。
The following three metrics were used to check translation performance.

数字が大きいほど性能が良い事を意味します。
The higher the number, the better the performance.

## BLEU

翻訳テキストが元のテキストにどれだけ似ているかを評価する指標です。しかし、単語の出現頻度だけを見ているため、語順の正確さや文の流暢さを十分に評価できないという弱点があります。
A metric that evaluates how similar the translated text is to the original text. However, since it mainly looks at the frequency of word appearances, it may not effectively evaluate the accuracy of word order or the fluency of sentences.

## chrF++

文字の組み合わせの一致度と語順に基づいて、翻訳の正確さを評価する指標です。弱点としては、長い文章の評価には不向きであることが挙げられます。
A method to evaluate translation accuracy based on how well character combinations match and the order of words. A drawback is that it might not be suitable for evaluating longer sentences.

## comet

機械学習モデルを使って翻訳の品質を自動的に評価するためのツール、人間の主観的評価に近いと言われていますが、機械学習ベースであるため、元々のモデルが学習に使ったデータに大きく依存するという弱点があります。
A tool that uses machine learning models to automatically evaluate the quality of translations. Although it is said to be close to human subjective evaluation, because it is machine learning based it has the weakness of being highly dependent on the data the underlying model was trained on.

## vs. NLLB-200

Meta社の200言語以上の翻訳に対応した超多言語対応機械翻訳モデルNLLB-200シリーズと比較したベンチマーク結果は以下です。
Benchmark results compared to Meta's NLLB-200 series of super multilingual machine translation models, which support translations in over 200 languages, are shown below.
| Model Name | file size |E->J chrf++/F2|E->J comet|J->E chrf++/F2|J->E comet |
|------------------------------|-----------|--------------|----------|--------------|-----------|
| NLLB-200-Distilled | 2.46GB | 23.6/- | - | 50.2/- | - |
| NLLB-200-Distilled | 5.48GB | 25.4/- | - | 54.2/- | - |
| NLLB-200 | 5.48GB | 24.2/- | - | 53.6/- | - |
| NLLB-200 | 17.58GB | 25.2/- | - | 55.1/- | - |
| NLLB-200 | 220.18GB | 27.9/33.2 | 0.8908 | 55.8/59.8 | 0.8792 |

## Our previous model (ALMA-7B-Ja)

| Model Name | file size |E->J chrf++/F2|E->J comet|J->E chrf++/F2|J->E comet |
|------------------------------|-----------|--------------|----------|--------------|-----------|
| webbigdata-ALMA-7B-Ja-q4_K_S | 3.6GB | -/24.2 | 0.8210 | -/54.2 | 0.8559 |
| ALMA-7B-Ja-GPTQ-Ja-En | 3.9GB | -/30.8 | 0.8743 | -/60.9 | 0.8743 |
| ALMA-Ja(Ours) | 13.48GB | -/31.8 | 0.8811 | -/61.6 | 0.8773 |

## ALMA-7B-Ja-V2

| Model Name | file size |E->J chrf++/F2|E->J comet|J->E chrf++/F2|J->E comet |
|------------------------------|-----------|--------------|----------|--------------|-----------|
| ALMA-7B-Ja-V2-GPTQ-Ja-En | 3.9GB | -/33.0 | 0.8818 | -/62.0 | 0.8774 |
| ALMA-Ja-V2(Ours) | 13.48GB | -/33.9 | 0.8820 | -/63.1 | 0.8873 |
| ALMA-Ja-V2-Lora(Ours) | 13.48GB | -/33.7 | 0.8843 | -/61.1 | 0.8775 |

以下は、ALMA-7B-Ja-V2を様々なジャンルの文章で現実世界のアプリケーションと比較した結果です。
Below are the results of comparing ALMA-7B-Ja-V2 with real-world applications on texts from various genres.

## 政府の公式文章 Government Official Announcements

| |e->j chrF2++|e->j BLEU|e->j comet|j->e chrF2++|j->e BLEU|j->e comet|
|--------------------------|------------|---------|----------|------------|---------|----------|
| ALMA-7B-Ja-V2-GPTQ-Ja-En | 25.3 | 15.00 | 0.8848 | 60.3 | 26.82 | 0.6189 |
| ALMA-Ja-V2 | 27.2 | 15.60 | 0.8868 | 58.5 | 29.27 | 0.6155 |
| ALMA-7B-Ja-V2-Lora | 24.5 | 13.58 | 0.8670 | 50.7 | 21.85 | 0.6196 |
| SeamlessM4T | 27.3 | 16.76 | 0.9070 | 54.2 | 25.76 | 0.5656 |
| gpt-3.5 | 34.6 | 28.33 | 0.8895 | 74.5 | 49.20 | 0.6382 |
| gpt-4.0 | 36.5 | 28.07 | 0.9255 | 62.5 | 33.63 | 0.6320 |
| google-translate | 43.5 | 35.37 | 0.9181 | 62.7 | 29.22 | 0.6446 |
| deepl | 43.5 | 35.74 | 0.9301 | 60.1 | 27.40 | 0.6389 |

## 古典文学 Classical Literature

| |e->j chrF2++|e->j BLEU|e->j comet|j->e chrF2++|j->e BLEU|j->e comet|
|--------------------------|------------|---------|----------|------------|---------|----------|
| ALMA-7B-Ja-V2-GPTQ-Ja-En | 11.8 | 7.24 | 0.6943 | 31.9 | 9.71 | 0.5617 |
| ALMA-Ja-V2 | 10.7 | 4.93 | 0.7202 | 32.9 | 10.52 | 0.5638 |
| ALMA-7B-Ja-V2-Lora | 12.3 | 7.25 | 0.7076 | 32.5 | 11.14 | 0.5441 |
| gpt-3.5 | - | - | 0.6367 | 69.3 | 46.34 | 0.4922 |
| gpt-4.0 | 13.3 | 8.33 | 0.7074 | 44.3 | 23.75 | 0.5518 |
| deepl | 14.4 | 9.18 | 0.7149 | 34.6 | 10.68 | 0.5787 |
| google-translate | 13.5 | 8.57 | 0.7432 | 31.7 | 7.94 | 0.5856 |

## 二次創作 Fanfiction

| |e->j chrF2++|e->j BLEU|e->j comet|j->e chrF2++|j->e BLEU|j->e comet|
|--------------------------|------------|---------|----------|------------|---------|----------|
| ALMA-7B-Ja-V2-GPTQ-Ja-En | 27.6 | 18.28 | 0.8643 | 52.1 | 24.58 | 0.6106 |
| ALMA-Ja-V2 | 20.4 | 8.45 | 0.7870 | 48.7 | 23.06 | 0.6050 |
| ALMA-7B-Ja-V2-Lora | 23.9 | 18.55 | 0.8634 | 55.6 | 29.91 | 0.6093 |
| SeamlessM4T | 25.5 | 19.97 | 0.8657 | 42.2 | 14.39 | 0.5554 |
| gpt-3.5 | 31.2 | 23.37 | 0.9001 | - | - | 0.5948 |
| gpt-4.0 | 30.7 | 24.31 | 0.8848 | 53.9 | 24.89 | 0.6163 |
| google-translate | 32.4 | 25.36 | 0.8968 | 58.5 | 29.88 | 0.6022 |
| deepl | 33.5 | 28.38 | 0.9094 | 60.0 | 31.14 | 0.6124 |

## サンプルコード Sample Code
Googleの無料WebツールであるColabを使うとALMA-7B-Ja-V2の性能を簡単に確かめる事ができます。
Using Colab, Google's free web tool, you can easily verify the performance of ALMA-7B-Ja-V2.

[Sample Code For Free Colab](https://github.com/webbigdata-jp/python_sample/blob/main/ALMA_7B_Ja_V2_Free_Colab_sample.ipynb)

## その他の版 Other Versions

### llama.cpp

[llama.cpp](https://github.com/ggerganov/llama.cpp)の主な目的はMacBook上で4ビット整数量子化を使用してLLaMAモデルを実行する事です。
The main purpose of [llama.cpp](https://github.com/ggerganov/llama.cpp) is to run LLaMA models using 4-bit integer quantization on a MacBook.

4ビット量子化に伴い、性能はやや低下しますが、mmngaさんが作成してくれた[webbigdata-ALMA-7B-Ja-V2-gguf](https://huggingface.co/mmnga/webbigdata-ALMA-7B-Ja-V2-gguf)を使うとMacやGPUを搭載していないWindows、Linuxで本モデルを動かす事ができます。
Although performance is somewhat reduced with 4-bit quantization, [webbigdata-ALMA-7B-Ja-V2-gguf](https://huggingface.co/mmnga/webbigdata-ALMA-7B-Ja-V2-gguf), created by mmnga, can be used to run this model on Macs, and on Windows or Linux machines without a GPU.

[GPU無版のColabで動かすサンプルはこちら](https://github.com/webbigdata-jp/python_sample/blob/main/ALMA_7B_Ja_V2_gguf_Free_Colab_sample.ipynb)です。
[Here is a Colab (no GPU) sample](https://github.com/webbigdata-jp/python_sample/blob/main/ALMA_7B_Ja_V2_gguf_Free_Colab_sample.ipynb).

### GPTQ

GPTQはモデルサイズを小さくする手法(量子化といいます)です。
GPTQ is a quantization technique that reduces model size.

[ALMA-7B-Ja-V2-GPTQ-Ja-En](https://huggingface.co/webbigdata/ALMA-7B-Ja-V2-GPTQ-Ja-En)はGPTQ量子化版で、モデルサイズ(3.9GB)とメモリ使用量を削減し、速度を向上しています。
[ALMA-7B-Ja-V2-GPTQ-Ja-En](https://huggingface.co/webbigdata/ALMA-7B-Ja-V2-GPTQ-Ja-En) is the GPTQ-quantized version, which reduces model size (3.9 GB) and memory usage and increases speed.

ただし、性能は少し落ちてしまいます。また、日本語と英語以外の言語への翻訳能力は著しく低下しているはずです。
However, performance drops slightly, and translation into languages other than Japanese and English should be expected to degrade significantly.

[Sample Code For Free Colab webbigdata/ALMA-7B-Ja-V2-GPTQ-Ja-En](https://github.com/webbigdata-jp/python_sample/blob/master/ALMA_7B_Ja_V2_GPTQ_Ja_En_Free_Colab_sample.ipynb)

ファイル全体を一度に翻訳したい場合は、以下のColabをお試しください。
If you want to translate an entire text file at once, try the Colab below.

[ALMA_7B_Ja_GPTQ_Ja_En_batch_translation_sample](https://github.com/webbigdata-jp/python_sample/blob/master/ALMA_7B_Ja_V2_GPTQ_Ja_En_batch_translation_sample.ipynb)

**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance. Please find more details in the [paper](https://arxiv.org/abs/2309.11674).

```
@misc{xu2023paradigm,
      title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
      author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
      year={2023},
      eprint={2309.11674},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

Original model: [ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) (26.95 GB)

Previous model: [ALMA-7B-Ja](https://huggingface.co/webbigdata/ALMA-7B-Ja) (13.3 GB)

## About this work
- **This work was done by:** [webbigdata](https://webbigdata.jp/post-21151/).
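Supplementing the llama.cpp section above, here is a hedged sketch using the llama-cpp-python bindings rather than the C++ CLI. The GGUF filename is a placeholder, not a file we have verified; download an actual quantization from mmnga's repository and substitute its name.

```python
# Rough sketch (not official sample code): run a GGUF quantization of this model
# via the llama-cpp-python bindings. The model_path below is a HYPOTHETICAL
# filename; pick a real .gguf file from mmnga's Hugging Face repository.
from llama_cpp import Llama

llm = Llama(model_path="webbigdata-ALMA-7B-Ja-V2-q4_K_M.gguf")  # placeholder filename

prompt = "Translate this from Japanese to English:\nJapanese: 今日はとても良い天気です。\nEnglish:"
out = llm(prompt, max_tokens=128, temperature=0.0)  # greedy decoding for translation
print(out["choices"][0]["text"].strip())
```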
Chrisisis/5CwZnqWtyQZgv4fPy11jYxiaBWevLhqTpt2JAUvmC9pHfwMX_vgg
Chrisisis
"2024-02-24T08:28:30Z"
1,525
0
keras
[ "keras", "region:us" ]
null
"2024-02-11T17:18:59Z"
Entry not found
jisukim8873/falcon-7B-case-c
jisukim8873
"2024-03-07T01:20:19Z"
1,525
0
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-07T00:37:59Z"
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mohammad2928git/medical_v3_gguf
mohammad2928git
"2024-06-29T05:44:37Z"
1,525
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:mohammad2928git/medical_v1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-29T05:36:24Z"
---
base_model: mohammad2928git/medical_v1
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---

# Uploaded model

- **Developed by:** mohammad2928git
- **License:** apache-2.0
- **Finetuned from model:** mohammad2928git/medical_v1

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
timm/vit_base_patch32_clip_224.openai
timm
"2024-02-10T23:25:17Z"
1,524
0
timm
[ "timm", "pytorch", "vision", "arxiv:2103.00020", "arxiv:1908.04913", "license:apache-2.0", "region:us" ]
null
"2022-11-01T22:03:18Z"
---
license: apache-2.0
library_name: timm
tags:
- timm
- vision
---

# CLIP (OpenAI model for timm)

## Model Details

The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they're being deployed within.

This instance of the CLIP model is intended for loading in
* `timm` (https://github.com/rwightman/pytorch-image-models) and
* `OpenCLIP` (https://github.com/mlfoundations/open_clip)
libraries. Please see https://huggingface.co/openai/clip-vit-base-patch32 for use in Hugging Face Transformers.

### Model Date

January 2021

### Model Type

The model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. The original implementation had two variants: one using a ResNet image encoder and the other using a Vision Transformer. This repository has the variant with the Vision Transformer.

### Documents

- [Blog Post](https://openai.com/blog/clip/)
- [CLIP Paper](https://arxiv.org/abs/2103.00020)

## Model Use

### Intended Use

The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.

#### Primary intended uses

The primary intended users of these models are AI researchers.

We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.

### Out-of-Scope Use Cases

**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.

Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of the performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently, given the lack of testing norms and checks to ensure its fair use.

Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English-language use cases.

## Data

The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/).
A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet, which tend to skew towards more developed nations and younger, male users.

### Data Mission Statement

Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.

## Limitations

CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine-grained classification and counting objects. CLIP also poses issues with regard to fairness and bias, which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation: in many cases we have used linear probes to evaluate the performance of CLIP, and there is evidence suggesting that linear probes can underestimate model performance.

### Bias and Fairness

We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details are captured in the Broader Impacts section of the paper.)

We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (we default to the race categories as they are constructed in Fairface) in order to assess the quality of performance across different demographics. We found accuracy >96% across all races for gender classification, with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate the performance of the model across people and to surface potential risks, not to demonstrate endorsement of or enthusiasm for such tasks.
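For a quick feature-extraction check, here is a minimal `timm` sketch. This is our illustration of typical timm usage, not an official example; it assumes a recent timm version that registers this model name, and the dummy tensor stands in for a properly preprocessed image.

```python
# Minimal sketch: load the image tower with pretrained OpenAI CLIP weights
# and extract pooled image features (no classification head).
import timm
import torch

model = timm.create_model("vit_base_patch32_clip_224.openai", pretrained=True, num_classes=0)
model.eval()

# Build the matching preprocessing transform from the model's pretrained config.
cfg = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**cfg)  # apply this to PIL images in practice

with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image batch
    features = model(dummy)              # shape (1, 768): the ViT-B/32 embedding
print(features.shape)
```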
Sunbird/sunbird-mms
Sunbird
"2024-07-01T12:24:41Z"
1,524
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2023-07-21T16:43:21Z"
---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-lug
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Sunbird - MMS Finetuned Models

This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all). The fine-tuning dataset is not documented in this card.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

To Add

### Results

WER and CER are reported as fractions (lower is better).

| Language Adapter    | WER  | CER | Additional Details |
|---------------------|-----:|----:|--------------------|
| **Luganda (Lug)**   |      |     |                    |
| Lug-Base            | 0.25 |     |                    |
| Lug+5Gram LM        |      |     |                    |
| Lug+3Gram LM        |      |     |                    |
| Lug+English Combined| 0.12 |     |                    |
| **Acholi (Ach)**    |      |     |                    |
| Ach-Base            | 0.34 |     |                    |
| Ach+3Gram LM        |      |     |                    |
| Ach+5Gram LM        |      |     |                    |
| Ach+English Combined| 0.18 |     |                    |
| **Lugbara (Lgg)**   |      |     |                    |
| Lgg-Base            |      |     |                    |
| Lgg+3Gram LM        |      |     |                    |
| Lgg+5Gram LM        |      |     |                    |
| Lgg+English Combined| 0.25 |     |                    |
| **Teso (Teo)**      |      |     |                    |
| Teo-Base            | 0.39 |     |                    |
| Teo+3Gram LM        |      |     |                    |
| Teo+5Gram LM        |      |     |                    |
| Teo+English Combined| 0.29 |     |                    |
| **Nyankore (Nyn)**  |      |     |                    |
| Nyn-Base            | 0.48 |     |                    |
| Nyn+3Gram LM        |      |     |                    |
| Nyn+5Gram LM        |      |     |                    |
| Nyn+English Combined| 0.29 |     |                    |

_Note: LM stands for Language Model. The `+3Gram LM` and `+5Gram LM` suffixes indicate models enhanced with trigram and five-gram language models, respectively._

### Framework versions

- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
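For reference, here is a minimal inference sketch using the MMS adapter API from Transformers. This is our assumption of typical usage, not an official example: adapter names such as `"lug"` are inferred from the table above and may differ in this checkpoint, and the silent waveform merely stands in for real 16 kHz mono audio.

```python
# Minimal sketch (unofficial): MMS-style wav2vec2 models expose per-language
# adapters; the adapter name "lug" (Luganda) is a HYPOTHETICAL guess based on
# the results table above.
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "Sunbird/sunbird-mms"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Switch tokenizer vocabulary and model weights to the Luganda adapter.
processor.tokenizer.set_target_lang("lug")
model.load_adapter("lug")

# `audio` should be a 16 kHz mono waveform; one second of silence as a stand-in.
audio = torch.zeros(16000)
inputs = processor(audio.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
ids = torch.argmax(logits, dim=-1)[0]  # greedy CTC decoding
print(processor.decode(ids))
```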