| Column | Type | Values / Lengths |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0–18.3M |
| metadata | stringlengths | 2–1.07B |
| id | stringlengths | 5–122 |
| last_modified | null | - |
| tags | listlengths | 1–1.84k |
| sha | null | - |
| created_at | stringlengths | 25–25 |
# luca10g/Suzume-llama-3-8B-multilingual-Q6K-GGUF **At the time of converting this model, there was no Q6K version available.** This model was converted to GGUF format from [`lightblue/suzume-llama-3-8B-multilingual`](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) for more details on the model.
{"license": "other", "tags": ["generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "license_name": "llama-3", "license_link": "https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/raw/main/LICENSE", "model-index": [{"name": "lightblue/suzume-llama-3-8B-multilingual", "results": []}]}
luca10g/Suzume-llama-3-8B-multilingual-Q6K-GGUF
null
[ "gguf", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-04-24T13:34:06+00:00
text-generation
transformers
> [!TIP] > This is the official GPTQ, quantized using the training data. # LYNN - AI for Roleplay <img src="./reallynn.png" alt="it's lynn!" width="340"/> > [!TIP] > This model is overfitted to the role-playing dataset; normal conversations may not work well. # Soliloquy-L3 Soliloquy-L3 is a fast, highly capable roleplaying model designed for immersive, dynamic experiences. Trained on over 250 million tokens of roleplaying data, Soliloquy-L3 has a vast knowledge base, rich literary expression, and support for up to 24k context length. It outperforms existing ~13B models, delivering enhanced roleplaying capabilities. ## Model Info | Context Length | Parameter | Prompt Template | isErp | | --- | --- | --- | --- | | 24k (24576) | 8B | Llama 3 Chat | Partly | ## Prompt Template You can use the following Jinja2 template, which is identical to the `chat_template` in [tokenizer_config](./tokenizer_config.json). ``` {% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %} ``` ## License This model is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/). If you would like to use this model for commercial purposes, please use our proprietary API (currently available on OpenRouter). For non-commercial use, please adhere to the terms of the CC BY-NC-SA 4.0 license. You are free to share and adapt the model for non-commercial purposes, provided you give appropriate credit, indicate if changes were made, and do not imply endorsement by the licensor.
For more information about the CC BY-NC-SA 4.0 license, please visit: https://creativecommons.org/licenses/by-nc-sa/4.0/ If you have any questions or would like to inquire about licensing, please contact us. ## Llama 3 Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) ## Join our Discord [**Join LYNN Discord**](https://discord.gg/xuZVqUyG4Y)
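The prompt template above can be exercised without loading the tokenizer. Here is a minimal sketch that renders it with the `jinja2` package (the example messages are illustrative, not from the training data):

```python
from jinja2 import Template

# The Llama 3 Chat template from the "Prompt Template" section above.
CHAT_TEMPLATE = (
    "{% set loop_messages = messages %}{% for message in loop_messages %}"
    "{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'"
    " + message['content'] | trim + '<|eot_id|>' %}"
    "{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}"
    "{{ content }}{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}"
)

messages = [
    {"role": "system", "content": "You are a creative roleplaying partner."},
    {"role": "user", "content": "The tavern door creaks open..."},
]
prompt = Template(CHAT_TEMPLATE).render(
    messages=messages,
    bos_token="<|begin_of_text|>",
    add_generation_prompt=True,
)
print(prompt)
```

Rendering with `add_generation_prompt=True` leaves the prompt ending on an open assistant header, which is where the model starts generating.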
{"language": ["en"], "license": "cc-by-nc-sa-4.0"}
openlynn/Llama-3-Soliloquy-8B-GPTQ
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-24T13:34:12+00:00
null
null
{"license": "openrail"}
Zavid/Gorshok
null
[ "license:openrail", "region:us" ]
null
2024-04-24T13:34:19+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/yzhuang/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q3_K_M.gguf) 
| Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on 
the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
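As a rough sanity check on the quant table above, the Size/GB column maps onto bits per weight: Llama 3 8B has roughly 8.03B parameters, so the 5.0 GB Q4_K_M file works out to about 5 bits per weight. This is a back-of-the-envelope sketch; exact figures vary with GGUF metadata and per-tensor quant choices:

```python
def bits_per_weight(file_size_gb: float, n_params_billions: float) -> float:
    # Convert file size to bits and divide by the parameter count.
    return file_size_gb * 1e9 * 8 / (n_params_billions * 1e9)

# Q4_K_M from the table above, against ~8.03B parameters for Llama 3 8B.
print(round(bits_per_weight(5.0, 8.03), 2))  # ~4.98
```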
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "yzhuang/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2", "quantized_by": "mradermacher"}
mradermacher/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2-GGUF
null
[ "transformers", "gguf", "trl", "sft", "generated_from_trainer", "en", "dataset:generator", "base_model:yzhuang/Meta-Llama-3-8B-Instruct_fictional_Chinese_v2", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:34:26+00:00
null
null
{}
vangard703/DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-2-iteration-6e1-confidence
null
[ "region:us" ]
null
2024-04-24T13:35:03+00:00
null
null
{"license": "openrail"}
anonimoh656r7r65/boz
null
[ "license:openrail", "region:us" ]
null
2024-04-24T13:35:54+00:00
null
null
{"license": "openrail"}
Zavid/Volkova
null
[ "license:openrail", "region:us" ]
null
2024-04-24T13:36:28+00:00
text-generation
transformers
I dunno what I did. I kind of hecked together "Undi95/Llama-3-Unholy-8B-e4" and "dreamgen/WizardLM-2-7B". ![Fox1](https://huggingface.co/zuzuka17/LaZardy3_8b/resolve/main/wiz.jpg) I don't even know if it works.
{}
zuzuka17/LaZardy3_7.3B
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T13:36:35+00:00
text-classification
transformers
# Spanish Fake News Classifier ## Overview This BERT-based text classifier was developed as a thesis project for the Computer Engineering degree at Universidad de Buenos Aires (UBA). The model is designed to detect fake news in Spanish and was fine-tuned on the *dccuchile/bert-base-spanish-wwm-uncased* model using a specific set of hyperparameters. It was trained on a dataset containing 125,000 Spanish news articles collected from various regions, both true and false. ## Team Members - **[Azul Fuentes](https://github.com/azu26)** - **[Dante Reinaudo](https://github.com/DanteReinaudo)** - **[Lucía Pardo](https://github.com/luciaPardo)** - **[Roberto Iskandarani](https://github.com/Robert-Iskandarani)** ## Model Details * **Base Model**: dccuchile/bert-base-spanish-wwm-uncased * **Hyperparameters**: * **dropout_rate = 0.1** * **num_classes = 2** * **max_length = 128** * **batch_size = 16** * **num_epochs = 5** * **learning_rate = 3e-5** * **Dataset**: 125,000 Spanish news articles (True and False) ## Metrics The model's performance was evaluated using the following metrics: * **Accuracy = _83.17%_** * **F1-Score = _81.94%_** * **Precision = _85.62%_** * **Recall = _81.10%_** ## Usage ### Installation You can install the required dependencies using pip: ```bash pip install transformers torch ``` ### Loading the Model ```python import torch from transformers import BertForSequenceClassification, BertTokenizer model = BertForSequenceClassification.from_pretrained("VerificadoProfesional/SaBERT-Spanish-Fake-News") tokenizer = BertTokenizer.from_pretrained("VerificadoProfesional/SaBERT-Spanish-Fake-News") ``` ### Predict Function ```python def predict(model,tokenizer,text,threshold = 0.5): inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512) with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits probabilities = torch.softmax(logits, dim=1).squeeze().tolist() predicted_class = torch.argmax(logits, dim=1).item() if 
probabilities[predicted_class] <= threshold and predicted_class == 1: predicted_class = 0 return bool(predicted_class), probabilities ``` ### Making Predictions ```python text = "Your Spanish news text here" predicted_label,probabilities = predict(model,tokenizer,text) print(f"Text: {text}") print(f"Predicted Class: {predicted_label}") print(f"Probabilities: {probabilities}") ``` ## License Apache License 2.0 ## Acknowledgments Special thanks to DCC UChile for the base Spanish BERT model and to all contributors to the dataset used for training.
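The fallback rule inside `predict` can be isolated as a pure function for clarity. This is a sketch; `apply_threshold` is a name introduced here for illustration, not part of the model's API:

```python
def apply_threshold(probabilities, predicted_class, threshold=0.5):
    # A "fake" verdict (class 1) is only kept when its probability clears the
    # threshold; otherwise the prediction falls back to class 0.
    if predicted_class == 1 and probabilities[predicted_class] <= threshold:
        predicted_class = 0
    return bool(predicted_class)

print(apply_threshold([0.30, 0.70], 1))                 # True: 0.70 > 0.5
print(apply_threshold([0.48, 0.52], 1, threshold=0.6))  # False: 0.52 <= 0.6
```

Raising `threshold` makes the classifier more conservative about flagging news as fake, trading recall for precision on the positive class.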
{"language": ["es"], "license": "apache-2.0", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "widget": [{"text": "La tierra es Plana", "output": [{"label": "False", "score": 0.882}, {"label": "True", "score": 0.118}]}]}
VerificadoProfesional/SaBERT-Spanish-Fake-News
null
[ "transformers", "safetensors", "bert", "text-classification", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:38:23+00:00
null
null
{"license": "openrail"}
Zavid/Knyaz
null
[ "license:openrail", "region:us" ]
null
2024-04-24T13:38:41+00:00
null
null
{"license": "openrail"}
Zavid/Krug
null
[ "license:openrail", "region:us" ]
null
2024-04-24T13:41:48+00:00
text-classification
adapter-transformers
{"language": ["ms"], "license": "apache-2.0", "library_name": "adapter-transformers", "tags": ["art"], "datasets": ["HuggingFaceFW/fineweb"], "metrics": ["accuracy"], "pipeline_tag": "text-classification"}
Gnoliz/ACE-1
null
[ "adapter-transformers", "art", "text-classification", "ms", "dataset:HuggingFaceFW/fineweb", "license:apache-2.0", "region:us" ]
null
2024-04-24T13:41:50+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut_synDB_da This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0747 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.3963 | 0.82 | 50 | 0.1464 | | 0.1987 | 1.22 | 75 | 0.1222 | | 0.1286 | 1.63 | 100 | 0.0964 | | 0.1132 | 2.04 | 125 | 0.1117 | | 0.0803 | 2.45 | 150 | 0.0801 | | 0.068 | 2.86 | 175 | 0.0804 | | 0.0567 | 3.27 | 200 | 0.0521 | | 0.0495 | 3.67 | 225 | 0.0727 | | 0.0436 | 4.08 | 250 | 0.0681 | | 0.0425 | 4.49 | 275 | 0.0754 | | 0.0361 | 4.9 | 300 | 0.0747 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
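The `total_train_batch_size` listed above follows directly from the per-device batch size and gradient accumulation (a quick check of the reported hyperparameters):

```python
train_batch_size = 4
gradient_accumulation_steps = 4

# Gradients are accumulated over 4 micro-batches of 4 before each optimizer step.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16, matching the reported value
```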
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "donut_synDB_da", "results": []}]}
Donut01/donut_synDB_da
null
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:42:27+00:00
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-med-LoRA_nosie_128_256_45k This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1914 - Wer: 8.6601 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.5111 | 1.0 | 2863 | 0.2845 | 11.9909 | | 0.2265 | 2.0 | 5726 | 0.2335 | 10.3921 | | 0.1772 | 3.0 | 8589 | 0.2106 | 9.4024 | | 0.1495 | 4.0 | 11452 | 0.1959 | 9.0027 | | 0.1331 | 5.0 | 14315 | 0.1914 | 8.6601 | ### Framework versions - Transformers 4.33.1 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
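Put in relative terms, the Wer column above corresponds to roughly a 28% relative improvement from epoch 1 to epoch 5 (computed from the reported values):

```python
wer_epoch_1 = 11.9909
wer_epoch_5 = 8.6601

# Relative word-error-rate reduction across training.
relative_reduction = (wer_epoch_1 - wer_epoch_5) / wer_epoch_1
print(f"{relative_reduction:.1%}")  # 27.8%
```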
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-medium", "model-index": [{"name": "whisper-med-LoRA_nosie_128_256_45k", "results": []}]}
adityarra07/whisper-med-LoRA_nosie_128_256_45k
null
[ "generated_from_trainer", "base_model:openai/whisper-medium", "license:apache-2.0", "region:us" ]
null
2024-04-24T13:42:35+00:00
null
transformers
{}
TitanML/dummy_model
null
[ "transformers", "safetensors", "llama", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T13:42:58+00:00
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: ThatOneSkyler/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
ThatOneSkyler/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-24T13:43:14+00:00
null
transformers
# Jina V2 Embed Model Reupload of the Jina embedding model that removes the dependence on ONNX and Optimum by recreating it with a custom class in Takeoff.
{}
TitanML/jina-v2-code-embed
null
[ "transformers", "safetensors", "bert", "custom_code", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:43:34+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LLAMA3-8BI-APPS This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1490 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9027 | 0.1 | 100 | 0.9320 | | 0.8632 | 0.2 | 200 | 0.9143 | | 0.8572 | 0.3 | 300 | 1.0150 | | 0.937 | 0.4 | 400 | 1.0545 | | 1.0336 | 0.5 | 500 | 1.1029 | | 1.0056 | 0.6 | 600 | 1.1267 | | 1.0125 | 0.7 | 700 | 1.1307 | | 1.028 | 0.8 | 800 | 1.1398 | | 1.0692 | 0.9 | 900 | 1.1482 | | 1.0361 | 1.0 | 1000 | 1.1490 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "other", "library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "LLAMA3-8BI-APPS", "results": []}]}
AdnanRiaz107/CodeLLAMA3-8BI-APPS
null
[ "peft", "safetensors", "llama", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-04-24T13:43:38+00:00
null
null
{}
samzirbo/mT5.scratch.europarl.simple
null
[ "region:us" ]
null
2024-04-24T13:43:41+00:00
text-generation
transformers
{}
Weni/WeniGPT-Agents-Mistral-1.0.14-SFT-AWQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-24T13:44:12+00:00
feature-extraction
sentence-transformers
<!-- TODO: add evaluation results here --> <br><br> <p align="center"> <img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px"> </p> <p align="center"> <b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b> </p> ## Quick Start The easiest way to start using `jina-embeddings-v2-base-en` is to use Jina AI's [Embedding API](https://jina.ai/embeddings/). ## Intended Usage & Model Info `jina-embeddings-v2-base-en` is an English, monolingual **embedding model** supporting **8192 sequence length**. It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence length. The backbone `jina-bert-v2-base-en` is pretrained on the C4 dataset. The model is further trained on Jina AI's collection of more than 400 million sentence pairs and hard negatives. These pairs were obtained from various domains and were carefully selected through a thorough cleaning process. The embedding model was trained using 512 sequence length, but extrapolates to 8k sequence length (or even longer) thanks to ALiBi. This makes our model useful for a range of use cases, especially when processing long documents is needed, including long document retrieval, semantic textual similarity, text reranking, recommendation, RAG and LLM-based generative search, etc. With a standard size of 137 million parameters, the model enables fast inference while delivering better performance than our small model. It is recommended to use a single GPU for inference. 
Additionally, we provide the following embedding models: - [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters. - [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters **(you are here)**. - [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): Chinese-English Bilingual embeddings. - [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): German-English Bilingual embeddings. - [`jina-embeddings-v2-base-es`](https://huggingface.co/jinaai/jina-embeddings-v2-base-es): Spanish-English Bilingual embeddings. ## Data & Parameters Jina Embeddings V2 [technical report](https://arxiv.org/abs/2310.19923) ## Usage **<details><summary>Please apply mean pooling when integrating the model.</summary>** <p> ### Why mean pooling? Mean pooling takes all token embeddings from the model output and averages them at the sentence/paragraph level. It has proven to be the most effective way to produce high-quality sentence embeddings. We offer an `encode` function to deal with this. 
However, if you would like to do it without using the default `encode` function: ```python import torch import torch.nn.functional as F from transformers import AutoTokenizer, AutoModel def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) sentences = ['How is the weather today?', 'What is the current weather like today?'] tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-small-en') model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-small-en', trust_remote_code=True) encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') with torch.no_grad(): model_output = model(**encoded_input) embeddings = mean_pooling(model_output, encoded_input['attention_mask']) embeddings = F.normalize(embeddings, p=2, dim=1) ``` </p> </details> You can use Jina Embedding models directly from the `transformers` package. First, you need to make sure that you are logged into Hugging Face. 
You can either use the huggingface-cli tool (after installing the `transformers` package) and pass your [Hugging Face access token](https://huggingface.co/docs/hub/security-tokens): ```bash huggingface-cli login ``` Alternatively, you can provide the access token as an environment variable in the shell: ```bash export HF_TOKEN="<your token here>" ``` or in Python: ```python import os os.environ['HF_TOKEN'] = "<your token here>" ``` Then, you can load and use the model via the `AutoModel` class: ```python !pip install transformers from transformers import AutoModel from numpy.linalg import norm cos_sim = lambda a,b: (a @ b.T) / (norm(a)*norm(b)) model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True) # trust_remote_code is needed to use the encode method embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?']) print(cos_sim(embeddings[0], embeddings[1])) ``` If you only want to handle shorter sequences, such as 2k, pass the `max_length` parameter to the `encode` function: ```python embeddings = model.encode( ['Very long ... document'], max_length=2048 ) ``` As of its latest release (v2.3.0), sentence-transformers also supports Jina embeddings (please make sure that you are logged into Hugging Face as well): ```python !pip install -U sentence-transformers from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( "jinaai/jina-embeddings-v2-base-en", # switch to en/zh for English or Chinese trust_remote_code=True ) # control your input sequence length up to 8192 model.max_seq_length = 1024 embeddings = model.encode([ 'How is the weather today?', 'What is the current weather like today?' ]) print(cos_sim(embeddings[0], embeddings[1])) ``` ## Alternatives to Using the Transformers (or SentenceTransformers) Package 1. _Managed SaaS_: Get started with a free key on Jina AI's [Embedding API](https://jina.ai/embeddings/). 2. 
_Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on [AWS Sagemaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-stch2ludm6vgy). ## Use Jina Embeddings for RAG According to the latest blog post from [LLamaIndex](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83), > In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out. <img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px"> ## Plans 1. Bilingual embedding models supporting more European & Asian languages, including Spanish, French, Italian and Japanese. 2. Multimodal embedding models enabling multimodal RAG applications. 3. High-performance rerankers. ## Troubleshooting **Loading of Model Code failed** If you forgot to pass the `trust_remote_code=True` flag when calling `AutoModel.from_pretrained` or initializing the model via the `SentenceTransformer` class, you will receive an error that the model weights could not be initialized. This is caused by transformers falling back to creating a default BERT model, instead of a jina-embedding model: ```bash Some weights of the model checkpoint at jinaai/jina-embeddings-v2-base-en were not used when initializing BertModel: ['encoder.layer.2.mlp.layernorm.weight', 'encoder.layer.3.mlp.layernorm.weight', 'encoder.layer.10.mlp.wo.bias', 'encoder.layer.5.mlp.wo.bias', 'encoder.layer.2.mlp.layernorm.bias', 'encoder.layer.1.mlp.gated_layers.weight', 'encoder.layer.5.mlp.gated_layers.weight', 'encoder.layer.8.mlp.layernorm.bias', ... ``` **User is not logged into Huggingface** The model is only available under [gated access](https://huggingface.co/docs/hub/models-gated). This means you need to be logged into Hugging Face to load it. 
If you receive the following error, you need to provide an access token, either by using the huggingface-cli or providing the token via an environment variable as described above: ```bash OSError: jinaai/jina-embeddings-v2-base-en is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`. ``` ## Contact Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas. ## Citation If you find Jina Embeddings useful in your research, please cite the following paper: ``` @misc{günther2023jina, title={Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents}, author={Michael Günther and Jackmin Ong and Isabelle Mohr and Alaeddine Abdessalem and Tanguy Abel and Mohammad Kalim Akram and Susana Guzman and Georgios Mastrapas and Saba Sturua and Bo Wang and Maximilian Werk and Nan Wang and Han Xiao}, year={2023}, eprint={2310.19923}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"], "datasets": ["allenai/c4"], "inference": false, "model-index": [{"name": "jina-embedding-b-en-v2", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 74.73134328358209}, {"type": "ap", "value": 37.765427081831035}, {"type": "f1", "value": 68.79367444339518}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 88.544275}, {"type": "ap", "value": 84.61328675662887}, {"type": "f1", "value": 88.51879035862375}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 45.263999999999996}, {"type": "f1", "value": 43.778759656699435}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 21.693}, {"type": "map_at_10", "value": 35.487}, {"type": "map_at_100", "value": 36.862}, {"type": "map_at_1000", "value": 36.872}, {"type": "map_at_3", "value": 30.049999999999997}, {"type": "map_at_5", "value": 32.966}, {"type": "mrr_at_1", "value": 21.977}, {"type": "mrr_at_10", "value": 35.565999999999995}, {"type": "mrr_at_100", "value": 36.948}, {"type": "mrr_at_1000", "value": 36.958}, {"type": "mrr_at_3", "value": 30.121}, {"type": "mrr_at_5", "value": 33.051}, {"type": 
"ndcg_at_1", "value": 21.693}, {"type": "ndcg_at_10", "value": 44.181}, {"type": "ndcg_at_100", "value": 49.982}, {"type": "ndcg_at_1000", "value": 50.233000000000004}, {"type": "ndcg_at_3", "value": 32.830999999999996}, {"type": "ndcg_at_5", "value": 38.080000000000005}, {"type": "precision_at_1", "value": 21.693}, {"type": "precision_at_10", "value": 7.248}, {"type": "precision_at_100", "value": 0.9769999999999999}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 13.632}, {"type": "precision_at_5", "value": 10.725}, {"type": "recall_at_1", "value": 21.693}, {"type": "recall_at_10", "value": 72.475}, {"type": "recall_at_100", "value": 97.653}, {"type": "recall_at_1000", "value": 99.57300000000001}, {"type": "recall_at_3", "value": 40.896}, {"type": "recall_at_5", "value": 53.627}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 45.39242428696777}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 36.675626784714}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 62.247725694904034}, {"type": "mrr", "value": 74.91359978894604}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 82.68003802970496}, {"type": "cos_sim_spearman", 
"value": 81.23438110096286}, {"type": "euclidean_pearson", "value": 81.87462986142582}, {"type": "euclidean_spearman", "value": 81.23438110096286}, {"type": "manhattan_pearson", "value": 81.61162566600755}, {"type": "manhattan_spearman", "value": 81.11329400456184}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 84.01298701298701}, {"type": "f1", "value": 83.31690714969382}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 37.050108150972086}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 30.15731442819715}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 31.391999999999996}, {"type": "map_at_10", "value": 42.597}, {"type": "map_at_100", "value": 44.07}, {"type": "map_at_1000", "value": 44.198}, {"type": "map_at_3", "value": 38.957}, {"type": "map_at_5", "value": 40.961}, {"type": "mrr_at_1", "value": 37.196}, {"type": "mrr_at_10", "value": 48.152}, {"type": "mrr_at_100", "value": 48.928}, {"type": "mrr_at_1000", "value": 48.964999999999996}, {"type": "mrr_at_3", "value": 45.446}, {"type": "mrr_at_5", "value": 47.205999999999996}, {"type": "ndcg_at_1", "value": 37.196}, {"type": "ndcg_at_10", "value": 49.089}, {"type": "ndcg_at_100", "value": 54.471000000000004}, {"type": 
"ndcg_at_1000", "value": 56.385}, {"type": "ndcg_at_3", "value": 43.699}, {"type": "ndcg_at_5", "value": 46.22}, {"type": "precision_at_1", "value": 37.196}, {"type": "precision_at_10", "value": 9.313}, {"type": "precision_at_100", "value": 1.478}, {"type": "precision_at_1000", "value": 0.198}, {"type": "precision_at_3", "value": 20.839}, {"type": "precision_at_5", "value": 14.936}, {"type": "recall_at_1", "value": 31.391999999999996}, {"type": "recall_at_10", "value": 61.876}, {"type": "recall_at_100", "value": 84.214}, {"type": "recall_at_1000", "value": 95.985}, {"type": "recall_at_3", "value": 46.6}, {"type": "recall_at_5", "value": 53.588}, {"type": "map_at_1", "value": 29.083}, {"type": "map_at_10", "value": 38.812999999999995}, {"type": "map_at_100", "value": 40.053}, {"type": "map_at_1000", "value": 40.188}, {"type": "map_at_3", "value": 36.111}, {"type": "map_at_5", "value": 37.519000000000005}, {"type": "mrr_at_1", "value": 36.497}, {"type": "mrr_at_10", "value": 44.85}, {"type": "mrr_at_100", "value": 45.546}, {"type": "mrr_at_1000", "value": 45.593}, {"type": "mrr_at_3", "value": 42.686}, {"type": "mrr_at_5", "value": 43.909}, {"type": "ndcg_at_1", "value": 36.497}, {"type": "ndcg_at_10", "value": 44.443}, {"type": "ndcg_at_100", "value": 48.979}, {"type": "ndcg_at_1000", "value": 51.154999999999994}, {"type": "ndcg_at_3", "value": 40.660000000000004}, {"type": "ndcg_at_5", "value": 42.193000000000005}, {"type": "precision_at_1", "value": 36.497}, {"type": "precision_at_10", "value": 8.433}, {"type": "precision_at_100", "value": 1.369}, {"type": "precision_at_1000", "value": 0.185}, {"type": "precision_at_3", "value": 19.894000000000002}, {"type": "precision_at_5", "value": 13.873}, {"type": "recall_at_1", "value": 29.083}, {"type": "recall_at_10", "value": 54.313}, {"type": "recall_at_100", "value": 73.792}, {"type": "recall_at_1000", "value": 87.629}, {"type": "recall_at_3", "value": 42.257}, {"type": "recall_at_5", "value": 47.066}, {"type": 
"map_at_1", "value": 38.556000000000004}, {"type": "map_at_10", "value": 50.698}, {"type": "map_at_100", "value": 51.705}, {"type": "map_at_1000", "value": 51.768}, {"type": "map_at_3", "value": 47.848}, {"type": "map_at_5", "value": 49.358000000000004}, {"type": "mrr_at_1", "value": 43.95}, {"type": "mrr_at_10", "value": 54.191}, {"type": "mrr_at_100", "value": 54.852999999999994}, {"type": "mrr_at_1000", "value": 54.885}, {"type": "mrr_at_3", "value": 51.954}, {"type": "mrr_at_5", "value": 53.13}, {"type": "ndcg_at_1", "value": 43.95}, {"type": "ndcg_at_10", "value": 56.516}, {"type": "ndcg_at_100", "value": 60.477000000000004}, {"type": "ndcg_at_1000", "value": 61.746}, {"type": "ndcg_at_3", "value": 51.601}, {"type": "ndcg_at_5", "value": 53.795}, {"type": "precision_at_1", "value": 43.95}, {"type": "precision_at_10", "value": 9.009}, {"type": "precision_at_100", "value": 1.189}, {"type": "precision_at_1000", "value": 0.135}, {"type": "precision_at_3", "value": 22.989}, {"type": "precision_at_5", "value": 15.473}, {"type": "recall_at_1", "value": 38.556000000000004}, {"type": "recall_at_10", "value": 70.159}, {"type": "recall_at_100", "value": 87.132}, {"type": "recall_at_1000", "value": 96.16}, {"type": "recall_at_3", "value": 56.906}, {"type": "recall_at_5", "value": 62.332}, {"type": "map_at_1", "value": 24.238}, {"type": "map_at_10", "value": 32.5}, {"type": "map_at_100", "value": 33.637}, {"type": "map_at_1000", "value": 33.719}, {"type": "map_at_3", "value": 30.026999999999997}, {"type": "map_at_5", "value": 31.555}, {"type": "mrr_at_1", "value": 26.328000000000003}, {"type": "mrr_at_10", "value": 34.44}, {"type": "mrr_at_100", "value": 35.455999999999996}, {"type": "mrr_at_1000", "value": 35.521}, {"type": "mrr_at_3", "value": 32.034}, {"type": "mrr_at_5", "value": 33.565}, {"type": "ndcg_at_1", "value": 26.328000000000003}, {"type": "ndcg_at_10", "value": 37.202}, {"type": "ndcg_at_100", "value": 42.728}, {"type": "ndcg_at_1000", "value": 44.792}, 
{"type": "ndcg_at_3", "value": 32.368}, {"type": "ndcg_at_5", "value": 35.008}, {"type": "precision_at_1", "value": 26.328000000000003}, {"type": "precision_at_10", "value": 5.7059999999999995}, {"type": "precision_at_100", "value": 0.8880000000000001}, {"type": "precision_at_1000", "value": 0.11100000000000002}, {"type": "precision_at_3", "value": 13.672}, {"type": "precision_at_5", "value": 9.74}, {"type": "recall_at_1", "value": 24.238}, {"type": "recall_at_10", "value": 49.829}, {"type": "recall_at_100", "value": 75.21}, {"type": "recall_at_1000", "value": 90.521}, {"type": "recall_at_3", "value": 36.867}, {"type": "recall_at_5", "value": 43.241}, {"type": "map_at_1", "value": 15.378}, {"type": "map_at_10", "value": 22.817999999999998}, {"type": "map_at_100", "value": 23.977999999999998}, {"type": "map_at_1000", "value": 24.108}, {"type": "map_at_3", "value": 20.719}, {"type": "map_at_5", "value": 21.889}, {"type": "mrr_at_1", "value": 19.03}, {"type": "mrr_at_10", "value": 27.022000000000002}, {"type": "mrr_at_100", "value": 28.011999999999997}, {"type": "mrr_at_1000", "value": 28.096}, {"type": "mrr_at_3", "value": 24.855}, {"type": "mrr_at_5", "value": 26.029999999999998}, {"type": "ndcg_at_1", "value": 19.03}, {"type": "ndcg_at_10", "value": 27.526}, {"type": "ndcg_at_100", "value": 33.040000000000006}, {"type": "ndcg_at_1000", "value": 36.187000000000005}, {"type": "ndcg_at_3", "value": 23.497}, {"type": "ndcg_at_5", "value": 25.334}, {"type": "precision_at_1", "value": 19.03}, {"type": "precision_at_10", "value": 4.963}, {"type": "precision_at_100", "value": 0.893}, {"type": "precision_at_1000", "value": 0.13}, {"type": "precision_at_3", "value": 11.360000000000001}, {"type": "precision_at_5", "value": 8.134}, {"type": "recall_at_1", "value": 15.378}, {"type": "recall_at_10", "value": 38.061}, {"type": "recall_at_100", "value": 61.754}, {"type": "recall_at_1000", "value": 84.259}, {"type": "recall_at_3", "value": 26.788}, {"type": "recall_at_5", "value": 
31.326999999999998}, {"type": "map_at_1", "value": 27.511999999999997}, {"type": "map_at_10", "value": 37.429}, {"type": "map_at_100", "value": 38.818000000000005}, {"type": "map_at_1000", "value": 38.924}, {"type": "map_at_3", "value": 34.625}, {"type": "map_at_5", "value": 36.064}, {"type": "mrr_at_1", "value": 33.300999999999995}, {"type": "mrr_at_10", "value": 43.036}, {"type": "mrr_at_100", "value": 43.894}, {"type": "mrr_at_1000", "value": 43.936}, {"type": "mrr_at_3", "value": 40.825}, {"type": "mrr_at_5", "value": 42.028}, {"type": "ndcg_at_1", "value": 33.300999999999995}, {"type": "ndcg_at_10", "value": 43.229}, {"type": "ndcg_at_100", "value": 48.992000000000004}, {"type": "ndcg_at_1000", "value": 51.02100000000001}, {"type": "ndcg_at_3", "value": 38.794000000000004}, {"type": "ndcg_at_5", "value": 40.65}, {"type": "precision_at_1", "value": 33.300999999999995}, {"type": "precision_at_10", "value": 7.777000000000001}, {"type": "precision_at_100", "value": 1.269}, {"type": "precision_at_1000", "value": 0.163}, {"type": "precision_at_3", "value": 18.351}, {"type": "precision_at_5", "value": 12.762}, {"type": "recall_at_1", "value": 27.511999999999997}, {"type": "recall_at_10", "value": 54.788000000000004}, {"type": "recall_at_100", "value": 79.105}, {"type": "recall_at_1000", "value": 92.49199999999999}, {"type": "recall_at_3", "value": 41.924}, {"type": "recall_at_5", "value": 47.026}, {"type": "map_at_1", "value": 24.117}, {"type": "map_at_10", "value": 33.32}, {"type": "map_at_100", "value": 34.677}, {"type": "map_at_1000", "value": 34.78}, {"type": "map_at_3", "value": 30.233999999999998}, {"type": "map_at_5", "value": 31.668000000000003}, {"type": "mrr_at_1", "value": 29.566}, {"type": "mrr_at_10", "value": 38.244}, {"type": "mrr_at_100", "value": 39.245000000000005}, {"type": "mrr_at_1000", "value": 39.296}, {"type": "mrr_at_3", "value": 35.864000000000004}, {"type": "mrr_at_5", "value": 36.919999999999995}, {"type": "ndcg_at_1", "value": 29.566}, 
{"type": "ndcg_at_10", "value": 39.127}, {"type": "ndcg_at_100", "value": 44.989000000000004}, {"type": "ndcg_at_1000", "value": 47.189}, {"type": "ndcg_at_3", "value": 34.039}, {"type": "ndcg_at_5", "value": 35.744}, {"type": "precision_at_1", "value": 29.566}, {"type": "precision_at_10", "value": 7.385999999999999}, {"type": "precision_at_100", "value": 1.204}, {"type": "precision_at_1000", "value": 0.158}, {"type": "precision_at_3", "value": 16.286}, {"type": "precision_at_5", "value": 11.484}, {"type": "recall_at_1", "value": 24.117}, {"type": "recall_at_10", "value": 51.559999999999995}, {"type": "recall_at_100", "value": 77.104}, {"type": "recall_at_1000", "value": 91.79899999999999}, {"type": "recall_at_3", "value": 36.82}, {"type": "recall_at_5", "value": 41.453}, {"type": "map_at_1", "value": 25.17625}, {"type": "map_at_10", "value": 34.063916666666664}, {"type": "map_at_100", "value": 35.255500000000005}, {"type": "map_at_1000", "value": 35.37275}, {"type": "map_at_3", "value": 31.351666666666667}, {"type": "map_at_5", "value": 32.80608333333333}, {"type": "mrr_at_1", "value": 29.59783333333333}, {"type": "mrr_at_10", "value": 38.0925}, {"type": "mrr_at_100", "value": 38.957249999999995}, {"type": "mrr_at_1000", "value": 39.01608333333333}, {"type": "mrr_at_3", "value": 35.77625}, {"type": "mrr_at_5", "value": 37.04991666666667}, {"type": "ndcg_at_1", "value": 29.59783333333333}, {"type": "ndcg_at_10", "value": 39.343666666666664}, {"type": "ndcg_at_100", "value": 44.488249999999994}, {"type": "ndcg_at_1000", "value": 46.83358333333334}, {"type": "ndcg_at_3", "value": 34.69708333333333}, {"type": "ndcg_at_5", "value": 36.75075}, {"type": "precision_at_1", "value": 29.59783333333333}, {"type": "precision_at_10", "value": 6.884083333333332}, {"type": "precision_at_100", "value": 1.114}, {"type": "precision_at_1000", "value": 0.15108333333333332}, {"type": "precision_at_3", "value": 15.965250000000003}, {"type": "precision_at_5", "value": 
11.246500000000001}, {"type": "recall_at_1", "value": 25.17625}, {"type": "recall_at_10", "value": 51.015999999999984}, {"type": "recall_at_100", "value": 73.60174999999998}, {"type": "recall_at_1000", "value": 89.849}, {"type": "recall_at_3", "value": 37.88399999999999}, {"type": "recall_at_5", "value": 43.24541666666666}, {"type": "map_at_1", "value": 24.537}, {"type": "map_at_10", "value": 31.081999999999997}, {"type": "map_at_100", "value": 32.042}, {"type": "map_at_1000", "value": 32.141}, {"type": "map_at_3", "value": 29.137}, {"type": "map_at_5", "value": 30.079}, {"type": "mrr_at_1", "value": 27.454}, {"type": "mrr_at_10", "value": 33.694}, {"type": "mrr_at_100", "value": 34.579}, {"type": "mrr_at_1000", "value": 34.649}, {"type": "mrr_at_3", "value": 32.004}, {"type": "mrr_at_5", "value": 32.794000000000004}, {"type": "ndcg_at_1", "value": 27.454}, {"type": "ndcg_at_10", "value": 34.915}, {"type": "ndcg_at_100", "value": 39.641}, {"type": "ndcg_at_1000", "value": 42.105}, {"type": "ndcg_at_3", "value": 31.276}, {"type": "ndcg_at_5", "value": 32.65}, {"type": "precision_at_1", "value": 27.454}, {"type": "precision_at_10", "value": 5.337}, {"type": "precision_at_100", "value": 0.8250000000000001}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 13.241}, {"type": "precision_at_5", "value": 8.895999999999999}, {"type": "recall_at_1", "value": 24.537}, {"type": "recall_at_10", "value": 44.324999999999996}, {"type": "recall_at_100", "value": 65.949}, {"type": "recall_at_1000", "value": 84.017}, {"type": "recall_at_3", "value": 33.857}, {"type": "recall_at_5", "value": 37.316}, {"type": "map_at_1", "value": 17.122}, {"type": "map_at_10", "value": 24.32}, {"type": "map_at_100", "value": 25.338}, {"type": "map_at_1000", "value": 25.462}, {"type": "map_at_3", "value": 22.064}, {"type": "map_at_5", "value": 23.322000000000003}, {"type": "mrr_at_1", "value": 20.647}, {"type": "mrr_at_10", "value": 27.858}, {"type": 
"mrr_at_100", "value": 28.743999999999996}, {"type": "mrr_at_1000", "value": 28.819}, {"type": "mrr_at_3", "value": 25.769}, {"type": "mrr_at_5", "value": 26.964}, {"type": "ndcg_at_1", "value": 20.647}, {"type": "ndcg_at_10", "value": 28.849999999999998}, {"type": "ndcg_at_100", "value": 33.849000000000004}, {"type": "ndcg_at_1000", "value": 36.802}, {"type": "ndcg_at_3", "value": 24.799}, {"type": "ndcg_at_5", "value": 26.682}, {"type": "precision_at_1", "value": 20.647}, {"type": "precision_at_10", "value": 5.2170000000000005}, {"type": "precision_at_100", "value": 0.906}, {"type": "precision_at_1000", "value": 0.134}, {"type": "precision_at_3", "value": 11.769}, {"type": "precision_at_5", "value": 8.486}, {"type": "recall_at_1", "value": 17.122}, {"type": "recall_at_10", "value": 38.999}, {"type": "recall_at_100", "value": 61.467000000000006}, {"type": "recall_at_1000", "value": 82.716}, {"type": "recall_at_3", "value": 27.601}, {"type": "recall_at_5", "value": 32.471}, {"type": "map_at_1", "value": 24.396}, {"type": "map_at_10", "value": 33.415}, {"type": "map_at_100", "value": 34.521}, {"type": "map_at_1000", "value": 34.631}, {"type": "map_at_3", "value": 30.703999999999997}, {"type": "map_at_5", "value": 32.166}, {"type": "mrr_at_1", "value": 28.825}, {"type": "mrr_at_10", "value": 37.397000000000006}, {"type": "mrr_at_100", "value": 38.286}, {"type": "mrr_at_1000", "value": 38.346000000000004}, {"type": "mrr_at_3", "value": 35.028}, {"type": "mrr_at_5", "value": 36.32}, {"type": "ndcg_at_1", "value": 28.825}, {"type": "ndcg_at_10", "value": 38.656}, {"type": "ndcg_at_100", "value": 43.856}, {"type": "ndcg_at_1000", "value": 46.31}, {"type": "ndcg_at_3", "value": 33.793}, {"type": "ndcg_at_5", "value": 35.909}, {"type": "precision_at_1", "value": 28.825}, {"type": "precision_at_10", "value": 6.567}, {"type": "precision_at_100", "value": 1.0330000000000001}, {"type": "precision_at_1000", "value": 0.135}, {"type": "precision_at_3", "value": 15.516}, {"type": 
"precision_at_5", "value": 10.914}, {"type": "recall_at_1", "value": 24.396}, {"type": "recall_at_10", "value": 50.747}, {"type": "recall_at_100", "value": 73.477}, {"type": "recall_at_1000", "value": 90.801}, {"type": "recall_at_3", "value": 37.1}, {"type": "recall_at_5", "value": 42.589}, {"type": "map_at_1", "value": 25.072}, {"type": "map_at_10", "value": 34.307}, {"type": "map_at_100", "value": 35.725}, {"type": "map_at_1000", "value": 35.943999999999996}, {"type": "map_at_3", "value": 30.906}, {"type": "map_at_5", "value": 32.818000000000005}, {"type": "mrr_at_1", "value": 29.644}, {"type": "mrr_at_10", "value": 38.673}, {"type": "mrr_at_100", "value": 39.459}, {"type": "mrr_at_1000", "value": 39.527}, {"type": "mrr_at_3", "value": 35.771}, {"type": "mrr_at_5", "value": 37.332}, {"type": "ndcg_at_1", "value": 29.644}, {"type": "ndcg_at_10", "value": 40.548}, {"type": "ndcg_at_100", "value": 45.678999999999995}, {"type": "ndcg_at_1000", "value": 48.488}, {"type": "ndcg_at_3", "value": 34.887}, {"type": "ndcg_at_5", "value": 37.543}, {"type": "precision_at_1", "value": 29.644}, {"type": "precision_at_10", "value": 7.688000000000001}, {"type": "precision_at_100", "value": 1.482}, {"type": "precision_at_1000", "value": 0.23600000000000002}, {"type": "precision_at_3", "value": 16.206}, {"type": "precision_at_5", "value": 12.016}, {"type": "recall_at_1", "value": 25.072}, {"type": "recall_at_10", "value": 53.478}, {"type": "recall_at_100", "value": 76.07300000000001}, {"type": "recall_at_1000", "value": 93.884}, {"type": "recall_at_3", "value": 37.583}, {"type": "recall_at_5", "value": 44.464}, {"type": "map_at_1", "value": 20.712}, {"type": "map_at_10", "value": 27.467999999999996}, {"type": "map_at_100", "value": 28.502}, {"type": "map_at_1000", "value": 28.610000000000003}, {"type": "map_at_3", "value": 24.887999999999998}, {"type": "map_at_5", "value": 26.273999999999997}, {"type": "mrr_at_1", "value": 22.736}, {"type": "mrr_at_10", "value": 29.553}, {"type": 
"mrr_at_100", "value": 30.485}, {"type": "mrr_at_1000", "value": 30.56}, {"type": "mrr_at_3", "value": 27.078999999999997}, {"type": "mrr_at_5", "value": 28.401}, {"type": "ndcg_at_1", "value": 22.736}, {"type": "ndcg_at_10", "value": 32.023}, {"type": "ndcg_at_100", "value": 37.158}, {"type": "ndcg_at_1000", "value": 39.823}, {"type": "ndcg_at_3", "value": 26.951999999999998}, {"type": "ndcg_at_5", "value": 29.281000000000002}, {"type": "precision_at_1", "value": 22.736}, {"type": "precision_at_10", "value": 5.213}, {"type": "precision_at_100", "value": 0.832}, {"type": "precision_at_1000", "value": 0.116}, {"type": "precision_at_3", "value": 11.459999999999999}, {"type": "precision_at_5", "value": 8.244}, {"type": "recall_at_1", "value": 20.712}, {"type": "recall_at_10", "value": 44.057}, {"type": "recall_at_100", "value": 67.944}, {"type": "recall_at_1000", "value": 87.925}, {"type": "recall_at_3", "value": 30.305}, {"type": "recall_at_5", "value": 36.071999999999996}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 10.181999999999999}, {"type": "map_at_10", "value": 16.66}, {"type": "map_at_100", "value": 18.273}, {"type": "map_at_1000", "value": 18.45}, {"type": "map_at_3", "value": 14.141}, {"type": "map_at_5", "value": 15.455}, {"type": "mrr_at_1", "value": 22.15}, {"type": "mrr_at_10", "value": 32.062000000000005}, {"type": "mrr_at_100", "value": 33.116}, {"type": "mrr_at_1000", "value": 33.168}, {"type": "mrr_at_3", "value": 28.827}, {"type": "mrr_at_5", "value": 30.892999999999997}, {"type": "ndcg_at_1", "value": 22.15}, {"type": "ndcg_at_10", "value": 23.532}, {"type": "ndcg_at_100", "value": 30.358}, {"type": "ndcg_at_1000", "value": 33.783}, {"type": "ndcg_at_3", "value": 19.222}, {"type": "ndcg_at_5", "value": 20.919999999999998}, {"type": "precision_at_1", "value": 22.15}, {"type": "precision_at_10", 
"value": 7.185999999999999}, {"type": "precision_at_100", "value": 1.433}, {"type": "precision_at_1000", "value": 0.207}, {"type": "precision_at_3", "value": 13.941}, {"type": "precision_at_5", "value": 10.906}, {"type": "recall_at_1", "value": 10.181999999999999}, {"type": "recall_at_10", "value": 28.104000000000003}, {"type": "recall_at_100", "value": 51.998999999999995}, {"type": "recall_at_1000", "value": 71.311}, {"type": "recall_at_3", "value": 17.698}, {"type": "recall_at_5", "value": 22.262999999999998}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 6.669}, {"type": "map_at_10", "value": 15.552}, {"type": "map_at_100", "value": 21.865000000000002}, {"type": "map_at_1000", "value": 23.268}, {"type": "map_at_3", "value": 11.309}, {"type": "map_at_5", "value": 13.084000000000001}, {"type": "mrr_at_1", "value": 55.50000000000001}, {"type": "mrr_at_10", "value": 66.46600000000001}, {"type": "mrr_at_100", "value": 66.944}, {"type": "mrr_at_1000", "value": 66.956}, {"type": "mrr_at_3", "value": 64.542}, {"type": "mrr_at_5", "value": 65.717}, {"type": "ndcg_at_1", "value": 44.75}, {"type": "ndcg_at_10", "value": 35.049}, {"type": "ndcg_at_100", "value": 39.073}, {"type": "ndcg_at_1000", "value": 46.208}, {"type": "ndcg_at_3", "value": 39.525}, {"type": "ndcg_at_5", "value": 37.156}, {"type": "precision_at_1", "value": 55.50000000000001}, {"type": "precision_at_10", "value": 27.800000000000004}, {"type": "precision_at_100", "value": 9.013}, {"type": "precision_at_1000", "value": 1.8800000000000001}, {"type": "precision_at_3", "value": 42.667}, {"type": "precision_at_5", "value": 36.0}, {"type": "recall_at_1", "value": 6.669}, {"type": "recall_at_10", "value": 21.811}, {"type": "recall_at_100", "value": 45.112}, {"type": "recall_at_1000", "value": 67.806}, {"type": "recall_at_3", "value": 13.373}, {"type": 
"recall_at_5", "value": 16.615}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 48.769999999999996}, {"type": "f1", "value": 42.91448356376592}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 54.013}, {"type": "map_at_10", "value": 66.239}, {"type": "map_at_100", "value": 66.62599999999999}, {"type": "map_at_1000", "value": 66.644}, {"type": "map_at_3", "value": 63.965}, {"type": "map_at_5", "value": 65.45400000000001}, {"type": "mrr_at_1", "value": 58.221000000000004}, {"type": "mrr_at_10", "value": 70.43700000000001}, {"type": "mrr_at_100", "value": 70.744}, {"type": "mrr_at_1000", "value": 70.75099999999999}, {"type": "mrr_at_3", "value": 68.284}, {"type": "mrr_at_5", "value": 69.721}, {"type": "ndcg_at_1", "value": 58.221000000000004}, {"type": "ndcg_at_10", "value": 72.327}, {"type": "ndcg_at_100", "value": 73.953}, {"type": "ndcg_at_1000", "value": 74.312}, {"type": "ndcg_at_3", "value": 68.062}, {"type": "ndcg_at_5", "value": 70.56400000000001}, {"type": "precision_at_1", "value": 58.221000000000004}, {"type": "precision_at_10", "value": 9.521}, {"type": "precision_at_100", "value": 1.045}, {"type": "precision_at_1000", "value": 0.109}, {"type": "precision_at_3", "value": 27.348}, {"type": "precision_at_5", "value": 17.794999999999998}, {"type": "recall_at_1", "value": 54.013}, {"type": "recall_at_10", "value": 86.957}, {"type": "recall_at_100", "value": 93.911}, {"type": "recall_at_1000", "value": 96.38}, {"type": "recall_at_3", "value": 75.555}, {"type": "recall_at_5", "value": 81.671}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": 
"None"}, "metrics": [{"type": "map_at_1", "value": 21.254}, {"type": "map_at_10", "value": 33.723}, {"type": "map_at_100", "value": 35.574}, {"type": "map_at_1000", "value": 35.730000000000004}, {"type": "map_at_3", "value": 29.473}, {"type": "map_at_5", "value": 31.543}, {"type": "mrr_at_1", "value": 41.358}, {"type": "mrr_at_10", "value": 49.498}, {"type": "mrr_at_100", "value": 50.275999999999996}, {"type": "mrr_at_1000", "value": 50.308}, {"type": "mrr_at_3", "value": 47.016000000000005}, {"type": "mrr_at_5", "value": 48.336}, {"type": "ndcg_at_1", "value": 41.358}, {"type": "ndcg_at_10", "value": 41.579}, {"type": "ndcg_at_100", "value": 48.455}, {"type": "ndcg_at_1000", "value": 51.165000000000006}, {"type": "ndcg_at_3", "value": 37.681}, {"type": "ndcg_at_5", "value": 38.49}, {"type": "precision_at_1", "value": 41.358}, {"type": "precision_at_10", "value": 11.543000000000001}, {"type": "precision_at_100", "value": 1.87}, {"type": "precision_at_1000", "value": 0.23600000000000002}, {"type": "precision_at_3", "value": 24.743000000000002}, {"type": "precision_at_5", "value": 17.994}, {"type": "recall_at_1", "value": 21.254}, {"type": "recall_at_10", "value": 48.698}, {"type": "recall_at_100", "value": 74.588}, {"type": "recall_at_1000", "value": 91.00200000000001}, {"type": "recall_at_3", "value": 33.939}, {"type": "recall_at_5", "value": 39.367000000000004}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 35.922}, {"type": "map_at_10", "value": 52.32599999999999}, {"type": "map_at_100", "value": 53.18000000000001}, {"type": "map_at_1000", "value": 53.245}, {"type": "map_at_3", "value": 49.294}, {"type": "map_at_5", "value": 51.202999999999996}, {"type": "mrr_at_1", "value": 71.843}, {"type": "mrr_at_10", "value": 78.24600000000001}, {"type": "mrr_at_100", "value": 78.515}, {"type": "mrr_at_1000", "value": 78.527}, 
{"type": "mrr_at_3", "value": 77.17500000000001}, {"type": "mrr_at_5", "value": 77.852}, {"type": "ndcg_at_1", "value": 71.843}, {"type": "ndcg_at_10", "value": 61.379}, {"type": "ndcg_at_100", "value": 64.535}, {"type": "ndcg_at_1000", "value": 65.888}, {"type": "ndcg_at_3", "value": 56.958}, {"type": "ndcg_at_5", "value": 59.434}, {"type": "precision_at_1", "value": 71.843}, {"type": "precision_at_10", "value": 12.686}, {"type": "precision_at_100", "value": 1.517}, {"type": "precision_at_1000", "value": 0.16999999999999998}, {"type": "precision_at_3", "value": 35.778}, {"type": "precision_at_5", "value": 23.422}, {"type": "recall_at_1", "value": 35.922}, {"type": "recall_at_10", "value": 63.43}, {"type": "recall_at_100", "value": 75.868}, {"type": "recall_at_1000", "value": 84.88900000000001}, {"type": "recall_at_3", "value": 53.666000000000004}, {"type": "recall_at_5", "value": 58.555}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 79.4408}, {"type": "ap", "value": 73.52820871620366}, {"type": "f1", "value": 79.36240238685001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 21.826999999999998}, {"type": "map_at_10", "value": 34.04}, {"type": "map_at_100", "value": 35.226}, {"type": "map_at_1000", "value": 35.275}, {"type": "map_at_3", "value": 30.165999999999997}, {"type": "map_at_5", "value": 32.318000000000005}, {"type": "mrr_at_1", "value": 22.464000000000002}, {"type": "mrr_at_10", "value": 34.631}, {"type": "mrr_at_100", "value": 35.752}, {"type": "mrr_at_1000", "value": 35.795}, {"type": "mrr_at_3", "value": 30.798}, {"type": "mrr_at_5", "value": 32.946999999999996}, {"type": "ndcg_at_1", "value": 22.464000000000002}, 
{"type": "ndcg_at_10", "value": 40.919}, {"type": "ndcg_at_100", "value": 46.632}, {"type": "ndcg_at_1000", "value": 47.833}, {"type": "ndcg_at_3", "value": 32.992}, {"type": "ndcg_at_5", "value": 36.834}, {"type": "precision_at_1", "value": 22.464000000000002}, {"type": "precision_at_10", "value": 6.494}, {"type": "precision_at_100", "value": 0.9369999999999999}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 14.021}, {"type": "precision_at_5", "value": 10.347000000000001}, {"type": "recall_at_1", "value": 21.826999999999998}, {"type": "recall_at_10", "value": 62.132}, {"type": "recall_at_100", "value": 88.55199999999999}, {"type": "recall_at_1000", "value": 97.707}, {"type": "recall_at_3", "value": 40.541}, {"type": "recall_at_5", "value": 49.739}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 95.68399452804377}, {"type": "f1", "value": 95.25490609832268}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 83.15321477428182}, {"type": "f1", "value": 60.35476439087966}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 71.92669804976462}, {"type": "f1", "value": 69.22815107207565}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": 
"7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 74.4855413584398}, {"type": "f1", "value": 72.92107516103387}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 32.412679360205544}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 28.09211869875204}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 30.540919056982545}, {"type": "mrr", "value": 31.529904607063536}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 5.745}, {"type": "map_at_10", "value": 12.013}, {"type": "map_at_100", "value": 15.040000000000001}, {"type": "map_at_1000", "value": 16.427}, {"type": "map_at_3", "value": 8.841000000000001}, {"type": "map_at_5", "value": 10.289}, {"type": "mrr_at_1", "value": 45.201}, {"type": "mrr_at_10", "value": 53.483999999999995}, {"type": "mrr_at_100", "value": 54.20700000000001}, {"type": "mrr_at_1000", "value": 54.252}, {"type": "mrr_at_3", "value": 51.29}, {"type": "mrr_at_5", "value": 52.73}, {"type": "ndcg_at_1", "value": 43.808}, {"type": "ndcg_at_10", "value": 32.445}, {"type": "ndcg_at_100", "value": 30.031000000000002}, {"type": "ndcg_at_1000", "value": 39.007}, {"type": "ndcg_at_3", "value": 37.204}, {"type": "ndcg_at_5", "value": 35.07}, {"type": "precision_at_1", "value": 
45.201}, {"type": "precision_at_10", "value": 23.684}, {"type": "precision_at_100", "value": 7.600999999999999}, {"type": "precision_at_1000", "value": 2.043}, {"type": "precision_at_3", "value": 33.953}, {"type": "precision_at_5", "value": 29.412}, {"type": "recall_at_1", "value": 5.745}, {"type": "recall_at_10", "value": 16.168}, {"type": "recall_at_100", "value": 30.875999999999998}, {"type": "recall_at_1000", "value": 62.686}, {"type": "recall_at_3", "value": 9.75}, {"type": "recall_at_5", "value": 12.413}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 37.828}, {"type": "map_at_10", "value": 53.239000000000004}, {"type": "map_at_100", "value": 54.035999999999994}, {"type": "map_at_1000", "value": 54.067}, {"type": "map_at_3", "value": 49.289}, {"type": "map_at_5", "value": 51.784}, {"type": "mrr_at_1", "value": 42.497}, {"type": "mrr_at_10", "value": 55.916999999999994}, {"type": "mrr_at_100", "value": 56.495}, {"type": "mrr_at_1000", "value": 56.516999999999996}, {"type": "mrr_at_3", "value": 52.800000000000004}, {"type": "mrr_at_5", "value": 54.722}, {"type": "ndcg_at_1", "value": 42.468}, {"type": "ndcg_at_10", "value": 60.437}, {"type": "ndcg_at_100", "value": 63.731}, {"type": "ndcg_at_1000", "value": 64.41799999999999}, {"type": "ndcg_at_3", "value": 53.230999999999995}, {"type": "ndcg_at_5", "value": 57.26}, {"type": "precision_at_1", "value": 42.468}, {"type": "precision_at_10", "value": 9.47}, {"type": "precision_at_100", "value": 1.1360000000000001}, {"type": "precision_at_1000", "value": 0.12}, {"type": "precision_at_3", "value": 23.724999999999998}, {"type": "precision_at_5", "value": 16.593}, {"type": "recall_at_1", "value": 37.828}, {"type": "recall_at_10", "value": 79.538}, {"type": "recall_at_100", "value": 93.646}, {"type": "recall_at_1000", "value": 98.72999999999999}, {"type": "recall_at_3", "value": 61.134}, 
{"type": "recall_at_5", "value": 70.377}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 70.548}, {"type": "map_at_10", "value": 84.466}, {"type": "map_at_100", "value": 85.10600000000001}, {"type": "map_at_1000", "value": 85.123}, {"type": "map_at_3", "value": 81.57600000000001}, {"type": "map_at_5", "value": 83.399}, {"type": "mrr_at_1", "value": 81.24}, {"type": "mrr_at_10", "value": 87.457}, {"type": "mrr_at_100", "value": 87.574}, {"type": "mrr_at_1000", "value": 87.575}, {"type": "mrr_at_3", "value": 86.507}, {"type": "mrr_at_5", "value": 87.205}, {"type": "ndcg_at_1", "value": 81.25}, {"type": "ndcg_at_10", "value": 88.203}, {"type": "ndcg_at_100", "value": 89.457}, {"type": "ndcg_at_1000", "value": 89.563}, {"type": "ndcg_at_3", "value": 85.465}, {"type": "ndcg_at_5", "value": 87.007}, {"type": "precision_at_1", "value": 81.25}, {"type": "precision_at_10", "value": 13.373}, {"type": "precision_at_100", "value": 1.5270000000000001}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 37.417}, {"type": "precision_at_5", "value": 24.556}, {"type": "recall_at_1", "value": 70.548}, {"type": "recall_at_10", "value": 95.208}, {"type": "recall_at_100", "value": 99.514}, {"type": "recall_at_1000", "value": 99.988}, {"type": "recall_at_3", "value": 87.214}, {"type": "recall_at_5", "value": 91.696}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 53.04822095496839}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": 
[{"type": "v_measure", "value": 60.30778476474675}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.692}, {"type": "map_at_10", "value": 11.766}, {"type": "map_at_100", "value": 13.904}, {"type": "map_at_1000", "value": 14.216999999999999}, {"type": "map_at_3", "value": 8.245}, {"type": "map_at_5", "value": 9.92}, {"type": "mrr_at_1", "value": 23.0}, {"type": "mrr_at_10", "value": 33.78}, {"type": "mrr_at_100", "value": 34.922}, {"type": "mrr_at_1000", "value": 34.973}, {"type": "mrr_at_3", "value": 30.2}, {"type": "mrr_at_5", "value": 32.565}, {"type": "ndcg_at_1", "value": 23.0}, {"type": "ndcg_at_10", "value": 19.863}, {"type": "ndcg_at_100", "value": 28.141}, {"type": "ndcg_at_1000", "value": 33.549}, {"type": "ndcg_at_3", "value": 18.434}, {"type": "ndcg_at_5", "value": 16.384}, {"type": "precision_at_1", "value": 23.0}, {"type": "precision_at_10", "value": 10.39}, {"type": "precision_at_100", "value": 2.235}, {"type": "precision_at_1000", "value": 0.35300000000000004}, {"type": "precision_at_3", "value": 17.133000000000003}, {"type": "precision_at_5", "value": 14.44}, {"type": "recall_at_1", "value": 4.692}, {"type": "recall_at_10", "value": 21.025}, {"type": "recall_at_100", "value": 45.324999999999996}, {"type": "recall_at_1000", "value": 71.675}, {"type": "recall_at_3", "value": 10.440000000000001}, {"type": "recall_at_5", "value": 14.64}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.96178184892842}, {"type": "cos_sim_spearman", "value": 79.6487740813199}, {"type": "euclidean_pearson", "value": 82.06661161625023}, {"type": "euclidean_spearman", "value": 79.64876769031183}, {"type": "manhattan_pearson", "value": 
82.07061164575131}, {"type": "manhattan_spearman", "value": 79.65197039464537}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.15305604100027}, {"type": "cos_sim_spearman", "value": 74.27447427941591}, {"type": "euclidean_pearson", "value": 80.52737337565307}, {"type": "euclidean_spearman", "value": 74.27416077132192}, {"type": "manhattan_pearson", "value": 80.53728571140387}, {"type": "manhattan_spearman", "value": 74.28853605753457}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.44386080639279}, {"type": "cos_sim_spearman", "value": 84.17947648159536}, {"type": "euclidean_pearson", "value": 83.34145388129387}, {"type": "euclidean_spearman", "value": 84.17947648159536}, {"type": "manhattan_pearson", "value": 83.30699061927966}, {"type": "manhattan_spearman", "value": 84.18125737380451}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 81.57392220985612}, {"type": "cos_sim_spearman", "value": 78.80745014464101}, {"type": "euclidean_pearson", "value": 80.01660371487199}, {"type": "euclidean_spearman", "value": 78.80741240102256}, {"type": "manhattan_pearson", "value": 79.96810779507953}, {"type": "manhattan_spearman", "value": 78.75600400119448}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.85421063026625}, {"type": 
"cos_sim_spearman", "value": 87.55320285299192}, {"type": "euclidean_pearson", "value": 86.69750143323517}, {"type": "euclidean_spearman", "value": 87.55320284326378}, {"type": "manhattan_pearson", "value": 86.63379169960379}, {"type": "manhattan_spearman", "value": 87.4815029877984}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.31314130411842}, {"type": "cos_sim_spearman", "value": 85.3489588181433}, {"type": "euclidean_pearson", "value": 84.13240933463535}, {"type": "euclidean_spearman", "value": 85.34902871403281}, {"type": "manhattan_pearson", "value": 84.01183086503559}, {"type": "manhattan_spearman", "value": 85.19316703166102}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 89.09979781689536}, {"type": "cos_sim_spearman", "value": 88.87813323759015}, {"type": "euclidean_pearson", "value": 88.65413031123792}, {"type": "euclidean_spearman", "value": 88.87813323759015}, {"type": "manhattan_pearson", "value": 88.61818758256024}, {"type": "manhattan_spearman", "value": 88.81044100494604}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 62.30693258111531}, {"type": "cos_sim_spearman", "value": 62.195516523251946}, {"type": "euclidean_pearson", "value": 62.951283701049476}, {"type": "euclidean_spearman", "value": 62.195516523251946}, {"type": "manhattan_pearson", "value": 63.068322281439535}, {"type": "manhattan_spearman", "value": 62.10621171028406}]}, {"task": {"type": "STS"}, 
"dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.27092833763909}, {"type": "cos_sim_spearman", "value": 84.84429717949759}, {"type": "euclidean_pearson", "value": 84.8516966060792}, {"type": "euclidean_spearman", "value": 84.84429717949759}, {"type": "manhattan_pearson", "value": 84.82203139242881}, {"type": "manhattan_spearman", "value": 84.8358503952945}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 83.10290863981409}, {"type": "mrr", "value": 95.31168450286097}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 52.161}, {"type": "map_at_10", "value": 62.138000000000005}, {"type": "map_at_100", "value": 62.769}, {"type": "map_at_1000", "value": 62.812}, {"type": "map_at_3", "value": 59.111000000000004}, {"type": "map_at_5", "value": 60.995999999999995}, {"type": "mrr_at_1", "value": 55.333}, {"type": "mrr_at_10", "value": 63.504000000000005}, {"type": "mrr_at_100", "value": 64.036}, {"type": "mrr_at_1000", "value": 64.08}, {"type": "mrr_at_3", "value": 61.278}, {"type": "mrr_at_5", "value": 62.778}, {"type": "ndcg_at_1", "value": 55.333}, {"type": "ndcg_at_10", "value": 66.678}, {"type": "ndcg_at_100", "value": 69.415}, {"type": "ndcg_at_1000", "value": 70.453}, {"type": "ndcg_at_3", "value": 61.755}, {"type": "ndcg_at_5", "value": 64.546}, {"type": "precision_at_1", "value": 55.333}, {"type": "precision_at_10", "value": 9.033}, {"type": "precision_at_100", "value": 1.043}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", 
"value": 24.221999999999998}, {"type": "precision_at_5", "value": 16.333000000000002}, {"type": "recall_at_1", "value": 52.161}, {"type": "recall_at_10", "value": 79.156}, {"type": "recall_at_100", "value": 91.333}, {"type": "recall_at_1000", "value": 99.333}, {"type": "recall_at_3", "value": 66.43299999999999}, {"type": "recall_at_5", "value": 73.272}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.81287128712871}, {"type": "cos_sim_ap", "value": 95.30034785910676}, {"type": "cos_sim_f1", "value": 90.28629856850716}, {"type": "cos_sim_precision", "value": 92.36401673640168}, {"type": "cos_sim_recall", "value": 88.3}, {"type": "dot_accuracy", "value": 99.81287128712871}, {"type": "dot_ap", "value": 95.30034785910676}, {"type": "dot_f1", "value": 90.28629856850716}, {"type": "dot_precision", "value": 92.36401673640168}, {"type": "dot_recall", "value": 88.3}, {"type": "euclidean_accuracy", "value": 99.81287128712871}, {"type": "euclidean_ap", "value": 95.30034785910676}, {"type": "euclidean_f1", "value": 90.28629856850716}, {"type": "euclidean_precision", "value": 92.36401673640168}, {"type": "euclidean_recall", "value": 88.3}, {"type": "manhattan_accuracy", "value": 99.80990099009901}, {"type": "manhattan_ap", "value": 95.26880751950654}, {"type": "manhattan_f1", "value": 90.22177419354838}, {"type": "manhattan_precision", "value": 90.95528455284553}, {"type": "manhattan_recall", "value": 89.5}, {"type": "max_accuracy", "value": 99.81287128712871}, {"type": "max_ap", "value": 95.30034785910676}, {"type": "max_f1", "value": 90.28629856850716}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", 
"revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 58.518662504351184}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 34.96168178378587}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 52.04862593471896}, {"type": "mrr", "value": 52.97238402936932}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.092545236479946}, {"type": "cos_sim_spearman", "value": 31.599851000175498}, {"type": "dot_pearson", "value": 30.092542723901676}, {"type": "dot_spearman", "value": 31.599851000175498}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.189}, {"type": "map_at_10", "value": 1.662}, {"type": "map_at_100", "value": 9.384}, {"type": "map_at_1000", "value": 22.669}, {"type": "map_at_3", "value": 0.5559999999999999}, {"type": "map_at_5", "value": 0.9039999999999999}, {"type": "mrr_at_1", "value": 68.0}, {"type": "mrr_at_10", "value": 81.01899999999999}, {"type": "mrr_at_100", "value": 81.01899999999999}, {"type": "mrr_at_1000", "value": 81.01899999999999}, {"type": "mrr_at_3", "value": 79.333}, {"type": "mrr_at_5", "value": 80.733}, {"type": "ndcg_at_1", "value": 63.0}, {"type": "ndcg_at_10", "value": 65.913}, {"type": "ndcg_at_100", "value": 
51.895}, {"type": "ndcg_at_1000", "value": 46.967}, {"type": "ndcg_at_3", "value": 65.49199999999999}, {"type": "ndcg_at_5", "value": 66.69699999999999}, {"type": "precision_at_1", "value": 68.0}, {"type": "precision_at_10", "value": 71.6}, {"type": "precision_at_100", "value": 53.66}, {"type": "precision_at_1000", "value": 21.124000000000002}, {"type": "precision_at_3", "value": 72.667}, {"type": "precision_at_5", "value": 74.0}, {"type": "recall_at_1", "value": 0.189}, {"type": "recall_at_10", "value": 1.913}, {"type": "recall_at_100", "value": 12.601999999999999}, {"type": "recall_at_1000", "value": 44.296}, {"type": "recall_at_3", "value": 0.605}, {"type": "recall_at_5", "value": 1.018}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 2.701}, {"type": "map_at_10", "value": 10.445}, {"type": "map_at_100", "value": 17.324}, {"type": "map_at_1000", "value": 19.161}, {"type": "map_at_3", "value": 5.497}, {"type": "map_at_5", "value": 7.278}, {"type": "mrr_at_1", "value": 30.612000000000002}, {"type": "mrr_at_10", "value": 45.534}, {"type": "mrr_at_100", "value": 45.792}, {"type": "mrr_at_1000", "value": 45.806999999999995}, {"type": "mrr_at_3", "value": 37.755}, {"type": "mrr_at_5", "value": 43.469}, {"type": "ndcg_at_1", "value": 26.531}, {"type": "ndcg_at_10", "value": 26.235000000000003}, {"type": "ndcg_at_100", "value": 39.17}, {"type": "ndcg_at_1000", "value": 51.038}, {"type": "ndcg_at_3", "value": 23.625}, {"type": "ndcg_at_5", "value": 24.338}, {"type": "precision_at_1", "value": 30.612000000000002}, {"type": "precision_at_10", "value": 24.285999999999998}, {"type": "precision_at_100", "value": 8.224}, {"type": "precision_at_1000", "value": 1.6179999999999999}, {"type": "precision_at_3", "value": 24.490000000000002}, {"type": "precision_at_5", "value": 24.898}, {"type": "recall_at_1", "value": 2.701}, 
{"type": "recall_at_10", "value": 17.997}, {"type": "recall_at_100", "value": 51.766999999999996}, {"type": "recall_at_1000", "value": 87.863}, {"type": "recall_at_3", "value": 6.295000000000001}, {"type": "recall_at_5", "value": 9.993}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 73.3474}, {"type": "ap", "value": 15.393431414459924}, {"type": "f1", "value": 56.466681887882416}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 62.062818336163}, {"type": "f1", "value": 62.11230840463252}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 42.464892820845115}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.15962329379508}, {"type": "cos_sim_ap", "value": 74.73674057919256}, {"type": "cos_sim_f1", "value": 68.81245642574947}, {"type": "cos_sim_precision", "value": 61.48255813953488}, {"type": "cos_sim_recall", "value": 78.12664907651715}, {"type": "dot_accuracy", "value": 86.15962329379508}, {"type": "dot_ap", "value": 74.7367634988281}, {"type": "dot_f1", "value": 68.81245642574947}, {"type": "dot_precision", "value": 61.48255813953488}, {"type": 
"dot_recall", "value": 78.12664907651715}, {"type": "euclidean_accuracy", "value": 86.15962329379508}, {"type": "euclidean_ap", "value": 74.7367761466634}, {"type": "euclidean_f1", "value": 68.81245642574947}, {"type": "euclidean_precision", "value": 61.48255813953488}, {"type": "euclidean_recall", "value": 78.12664907651715}, {"type": "manhattan_accuracy", "value": 86.21326816474935}, {"type": "manhattan_ap", "value": 74.64416473733951}, {"type": "manhattan_f1", "value": 68.80924855491331}, {"type": "manhattan_precision", "value": 61.23456790123457}, {"type": "manhattan_recall", "value": 78.52242744063325}, {"type": "max_accuracy", "value": 86.21326816474935}, {"type": "max_ap", "value": 74.7367761466634}, {"type": "max_f1", "value": 68.81245642574947}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.97620988085536}, {"type": "cos_sim_ap", "value": 86.08680845745758}, {"type": "cos_sim_f1", "value": 78.02793637114438}, {"type": "cos_sim_precision", "value": 73.11082699683736}, {"type": "cos_sim_recall", "value": 83.65414228518632}, {"type": "dot_accuracy", "value": 88.97620988085536}, {"type": "dot_ap", "value": 86.08681149437946}, {"type": "dot_f1", "value": 78.02793637114438}, {"type": "dot_precision", "value": 73.11082699683736}, {"type": "dot_recall", "value": 83.65414228518632}, {"type": "euclidean_accuracy", "value": 88.97620988085536}, {"type": "euclidean_ap", "value": 86.08681215460771}, {"type": "euclidean_f1", "value": 78.02793637114438}, {"type": "euclidean_precision", "value": 73.11082699683736}, {"type": "euclidean_recall", "value": 83.65414228518632}, {"type": "manhattan_accuracy", "value": 88.88888888888889}, {"type": "manhattan_ap", "value": 86.02916327562438}, {"type": "manhattan_f1", "value": 
78.02063045516843}, {"type": "manhattan_precision", "value": 73.38851947346994}, {"type": "manhattan_recall", "value": 83.2768709578072}, {"type": "max_accuracy", "value": 88.97620988085536}, {"type": "max_ap", "value": 86.08681215460771}, {"type": "max_f1", "value": 78.02793637114438}]}]}]}
TitanML/jina-v2-base-en-embed
null
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "custom_code", "en", "dataset:allenai/c4", "arxiv:2108.12409", "arxiv:2310.19923", "license:apache-2.0", "model-index", "region:us" ]
null
2024-04-24T13:44:18+00:00
null
null
{}
theguye/youngguy
null
[ "region:us" ]
null
2024-04-24T13:44:39+00:00
feature-extraction
transformers
{}
TitanML/tiny-mistral-embedder
null
[ "transformers", "safetensors", "mistral", "feature-extraction", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T13:45:04+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/resolve/main/thumbnail.png" alt="Zephyr Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for Zephyr 7B β Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-β is the second model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means the model is likely to generate problematic text when prompted to do so. You can find more details in the [technical report](https://arxiv.org/abs/2310.16944). ## Model description - **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English - **License:** MIT - **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ### Model Sources <!-- Provide the basic links for the model. 
--> - **Repository:** https://github.com/huggingface/alignment-handbook - **Demo:** https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat - **Chatbot Arena:** Evaluate Zephyr 7B against 10+ LLMs in the LMSYS arena: http://arena.lmsys.org ## Performance At the time of release, Zephyr-7B-β is the highest ranked 7B chat model on the [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks: | Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) | |-------------|-----|----|---------------|--------------| | StableLM-Tuned-α | 7B| dSFT |2.75| -| | MPT-Chat | 7B |dSFT |5.42| -| | Xwin-LM v0.1 | 7B| dPPO| 6.19| 87.83| | Mistral-Instruct v0.1 | 7B| - | 6.84 |-| | Zephyr-7b-α |7B| dDPO| 6.88| -| | **Zephyr-7b-β** 🪁 | **7B** | **dDPO** | **7.34** | **90.60** | | Falcon-Instruct | 40B |dSFT |5.17 |45.71| | Guanaco | 65B | SFT |6.41| 71.80| | Llama2-Chat | 70B |RLHF |6.86| 92.66| | Vicuna v1.3 | 33B |dSFT |7.12 |88.99| | WizardLM v1.0 | 70B |dSFT |7.71 |-| | Xwin-LM v0.1 | 70B |dPPO |- |95.57| | GPT-3.5-turbo | - |RLHF |7.94 |89.37| | Claude 2 | - |RLHF |8.06| 91.36| | GPT-4 | -| RLHF |8.99| 95.28| In particular, on several categories of MT-Bench, Zephyr-7B-β has strong performance compared to larger open models like Llama2-Chat-70B: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6200d0a443eb0913fa2df7cc/raxvt5ma16d7T23my34WC.png) However, on more complex tasks like coding and mathematics, Zephyr-7B-β lags behind proprietary models and more research is needed to close the gap. ## Intended uses & limitations The model was initially fine-tuned on a filtered and preprocessed version of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. 
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) to test its capabilities. You can find the datasets used for training Zephyr-7B-β [here](https://huggingface.co/collections/HuggingFaceH4/zephyr-7b-6538c6d6d5ddd1cbb1744a66) Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # Install transformers from source - only needed for versions <= v4.34 # pip install git+https://github.com/huggingface/transformers.git # pip install accelerate import torch from transformers import pipeline pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta", torch_dtype=torch.bfloat16, device_map="auto") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ { "role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate", }, {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) # <|system|> # You are a friendly chatbot who always responds in the style of a pirate.</s> # <|user|> # How many helicopters can a human eat in one sitting?</s> # <|assistant|> # Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food! 
``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus used to train the base model (`mistralai/Mistral-7B-v0.1`) were; however, it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this. ## Training and evaluation data During DPO training, this model achieves the following results on the evaluation set: - Loss: 0.7496 - Rewards/chosen: -4.5221 - Rewards/rejected: -8.3184 - Rewards/accuracies: 0.7812 - Rewards/margins: 3.7963 - Logps/rejected: -340.1541 - Logps/chosen: -299.4561 - Logits/rejected: -2.3081 - Logits/chosen: -2.3531 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - total_train_batch_size: 32 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results The table below shows the full set of DPO training metrics: | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6284 | 0.05 | 100 | 0.6098 | 0.0425 | -0.1872 | 0.7344 | 0.2297 | -258.8416 | 
-253.8099 | -2.7976 | -2.8234 |
| 0.4908 | 0.1 | 200 | 0.5426 | -0.0279 | -0.6842 | 0.75 | 0.6563 | -263.8124 | -254.5145 | -2.7719 | -2.7960 |
| 0.5264 | 0.15 | 300 | 0.5324 | 0.0414 | -0.9793 | 0.7656 | 1.0207 | -266.7627 | -253.8209 | -2.7892 | -2.8122 |
| 0.5536 | 0.21 | 400 | 0.4957 | -0.0185 | -1.5276 | 0.7969 | 1.5091 | -272.2460 | -254.4203 | -2.8542 | -2.8764 |
| 0.5362 | 0.26 | 500 | 0.5031 | -0.2630 | -1.5917 | 0.7812 | 1.3287 | -272.8869 | -256.8653 | -2.8702 | -2.8958 |
| 0.5966 | 0.31 | 600 | 0.5963 | -0.2993 | -1.6491 | 0.7812 | 1.3499 | -273.4614 | -257.2279 | -2.8778 | -2.8986 |
| 0.5014 | 0.36 | 700 | 0.5382 | -0.2859 | -1.4750 | 0.75 | 1.1891 | -271.7204 | -257.0942 | -2.7659 | -2.7869 |
| 0.5334 | 0.41 | 800 | 0.5677 | -0.4289 | -1.8968 | 0.7969 | 1.4679 | -275.9378 | -258.5242 | -2.7053 | -2.7265 |
| 0.5251 | 0.46 | 900 | 0.5772 | -0.2116 | -1.3107 | 0.7344 | 1.0991 | -270.0768 | -256.3507 | -2.8463 | -2.8662 |
| 0.5205 | 0.52 | 1000 | 0.5262 | -0.3792 | -1.8585 | 0.7188 | 1.4793 | -275.5552 | -258.0276 | -2.7893 | -2.7979 |
| 0.5094 | 0.57 | 1100 | 0.5433 | -0.6279 | -1.9368 | 0.7969 | 1.3089 | -276.3377 | -260.5136 | -2.7453 | -2.7536 |
| 0.5837 | 0.62 | 1200 | 0.5349 | -0.3780 | -1.9584 | 0.7656 | 1.5804 | -276.5542 | -258.0154 | -2.7643 | -2.7756 |
| 0.5214 | 0.67 | 1300 | 0.5732 | -1.0055 | -2.2306 | 0.7656 | 1.2251 | -279.2761 | -264.2903 | -2.6986 | -2.7113 |
| 0.6914 | 0.72 | 1400 | 0.5137 | -0.6912 | -2.1775 | 0.7969 | 1.4863 | -278.7448 | -261.1467 | -2.7166 | -2.7275 |
| 0.4655 | 0.77 | 1500 | 0.5090 | -0.7987 | -2.2930 | 0.7031 | 1.4943 | -279.8999 | -262.2220 | -2.6651 | -2.6838 |
| 0.5731 | 0.83 | 1600 | 0.5312 | -0.8253 | -2.3520 | 0.7812 | 1.5268 | -280.4902 | -262.4876 | -2.6543 | -2.6728 |
| 0.5233 | 0.88 | 1700 | 0.5206 | -0.4573 | -2.0951 | 0.7812 | 1.6377 | -277.9205 | -258.8084 | -2.6870 | -2.7097 |
| 0.5593 | 0.93 | 1800 | 0.5231 | -0.5508 | -2.2000 | 0.7969 | 1.6492 | -278.9703 | -259.7433 | -2.6221 | -2.6519 |
| 0.4967 | 0.98 | 1900 | 0.5290 | -0.5340 | -1.9570 | 0.8281 | 1.4230 | -276.5395 | -259.5749 | -2.6564 | -2.6878 |
| 0.0921 | 1.03 | 2000 | 0.5368 | -1.1376 | -3.1615 | 0.7812 | 2.0239 | -288.5854 | -265.6111 | -2.6040 | -2.6345 |
| 0.0733 | 1.08 | 2100 | 0.5453 | -1.1045 | -3.4451 | 0.7656 | 2.3406 | -291.4208 | -265.2799 | -2.6289 | -2.6595 |
| 0.0972 | 1.14 | 2200 | 0.5571 | -1.6915 | -3.9823 | 0.8125 | 2.2908 | -296.7934 | -271.1505 | -2.6471 | -2.6709 |
| 0.1058 | 1.19 | 2300 | 0.5789 | -1.0621 | -3.8941 | 0.7969 | 2.8319 | -295.9106 | -264.8563 | -2.5527 | -2.5798 |
| 0.2423 | 1.24 | 2400 | 0.5455 | -1.1963 | -3.5590 | 0.7812 | 2.3627 | -292.5599 | -266.1981 | -2.5414 | -2.5784 |
| 0.1177 | 1.29 | 2500 | 0.5889 | -1.8141 | -4.3942 | 0.7969 | 2.5801 | -300.9120 | -272.3761 | -2.4802 | -2.5189 |
| 0.1213 | 1.34 | 2600 | 0.5683 | -1.4608 | -3.8420 | 0.8125 | 2.3812 | -295.3901 | -268.8436 | -2.4774 | -2.5207 |
| 0.0889 | 1.39 | 2700 | 0.5890 | -1.6007 | -3.7337 | 0.7812 | 2.1330 | -294.3068 | -270.2423 | -2.4123 | -2.4522 |
| 0.0995 | 1.45 | 2800 | 0.6073 | -1.5519 | -3.8362 | 0.8281 | 2.2843 | -295.3315 | -269.7538 | -2.4685 | -2.5050 |
| 0.1145 | 1.5 | 2900 | 0.5790 | -1.7939 | -4.2876 | 0.8438 | 2.4937 | -299.8461 | -272.1744 | -2.4272 | -2.4674 |
| 0.0644 | 1.55 | 3000 | 0.5735 | -1.7285 | -4.2051 | 0.8125 | 2.4766 | -299.0209 | -271.5201 | -2.4193 | -2.4574 |
| 0.0798 | 1.6 | 3100 | 0.5537 | -1.7226 | -4.2850 | 0.8438 | 2.5624 | -299.8200 | -271.4610 | -2.5367 | -2.5696 |
| 0.1013 | 1.65 | 3200 | 0.5575 | -1.5715 | -3.9813 | 0.875 | 2.4098 | -296.7825 | -269.9498 | -2.4926 | -2.5267 |
| 0.1254 | 1.7 | 3300 | 0.5905 | -1.6412 | -4.4703 | 0.8594 | 2.8291 | -301.6730 | -270.6473 | -2.5017 | -2.5340 |
| 0.085 | 1.76 | 3400 | 0.6133 | -1.9159 | -4.6760 | 0.8438 | 2.7601 | -303.7296 | -273.3941 | -2.4614 | -2.4960 |
| 0.065 | 1.81 | 3500 | 0.6074 | -1.8237 | -4.3525 | 0.8594 | 2.5288 | -300.4951 | -272.4724 | -2.4597 | -2.5004 |
| 0.0755 | 1.86 | 3600 | 0.5836 | -1.9252 | -4.4005 | 0.8125 | 2.4753 | -300.9748 | -273.4872 | -2.4327 | -2.4716 |
| 0.0746 | 1.91 | 3700 | 0.5789 | -1.9280 | -4.4906 | 0.8125 | 2.5626 | -301.8762 | -273.5149 | -2.4686 | -2.5115 |
| 0.1348 | 1.96 | 3800 | 0.6015 | -1.8658 | -4.2428 | 0.8281 | 2.3769 | -299.3976 | -272.8936 | -2.4943 | -2.5393 |
| 0.0217 | 2.01 | 3900 | 0.6122 | -2.3335 | -4.9229 | 0.8281 | 2.5894 | -306.1988 | -277.5699 | -2.4841 | -2.5272 |
| 0.0219 | 2.07 | 4000 | 0.6522 | -2.9890 | -6.0164 | 0.8281 | 3.0274 | -317.1334 | -284.1248 | -2.4105 | -2.4545 |
| 0.0119 | 2.12 | 4100 | 0.6922 | -3.4777 | -6.6749 | 0.7969 | 3.1972 | -323.7187 | -289.0121 | -2.4272 | -2.4699 |
| 0.0153 | 2.17 | 4200 | 0.6993 | -3.2406 | -6.6775 | 0.7969 | 3.4369 | -323.7453 | -286.6413 | -2.4047 | -2.4465 |
| 0.011 | 2.22 | 4300 | 0.7178 | -3.7991 | -7.4397 | 0.7656 | 3.6406 | -331.3667 | -292.2260 | -2.3843 | -2.4290 |
| 0.0072 | 2.27 | 4400 | 0.6840 | -3.3269 | -6.8021 | 0.8125 | 3.4752 | -324.9908 | -287.5042 | -2.4095 | -2.4536 |
| 0.0197 | 2.32 | 4500 | 0.7013 | -3.6890 | -7.3014 | 0.8125 | 3.6124 | -329.9841 | -291.1250 | -2.4118 | -2.4543 |
| 0.0182 | 2.37 | 4600 | 0.7476 | -3.8994 | -7.5366 | 0.8281 | 3.6372 | -332.3356 | -293.2291 | -2.4163 | -2.4565 |
| 0.0125 | 2.43 | 4700 | 0.7199 | -4.0560 | -7.5765 | 0.8438 | 3.5204 | -332.7345 | -294.7952 | -2.3699 | -2.4100 |
| 0.0082 | 2.48 | 4800 | 0.7048 | -3.6613 | -7.1356 | 0.875 | 3.4743 | -328.3255 | -290.8477 | -2.3925 | -2.4303 |
| 0.0118 | 2.53 | 4900 | 0.6976 | -3.7908 | -7.3152 | 0.8125 | 3.5244 | -330.1224 | -292.1431 | -2.3633 | -2.4047 |
| 0.0118 | 2.58 | 5000 | 0.7198 | -3.9049 | -7.5557 | 0.8281 | 3.6508 | -332.5271 | -293.2844 | -2.3764 | -2.4194 |
| 0.006 | 2.63 | 5100 | 0.7506 | -4.2118 | -7.9149 | 0.8125 | 3.7032 | -336.1194 | -296.3530 | -2.3407 | -2.3860 |
| 0.0143 | 2.68 | 5200 | 0.7408 | -4.2433 | -7.9802 | 0.8125 | 3.7369 | -336.7721 | -296.6682 | -2.3509 | -2.3946 |
| 0.0057 | 2.74 | 5300 | 0.7552 | -4.3392 | -8.0831 | 0.7969 | 3.7439 | -337.8013 | -297.6275 | -2.3388 | -2.3842 |
| 0.0138 | 2.79 | 5400 | 0.7404 | -4.2395 | -7.9762 | 0.8125 | 3.7367 | -336.7322 | -296.6304 | -2.3286 | -2.3737 |
| 0.0079 | 2.84 | 5500 | 0.7525 | -4.4466 | -8.2196 | 0.7812 | 3.7731 | -339.1662 | -298.7007 | -2.3200 | -2.3641 |
| 0.0077 | 2.89 | 5600 | 0.7520 | -4.5586 | -8.3485 | 0.7969 | 3.7899 | -340.4545 | -299.8206 | -2.3078 | -2.3517 |
| 0.0094 | 2.94 | 5700 | 0.7527 | -4.5542 | -8.3509 | 0.7812 | 3.7967 | -340.4790 | -299.7773 | -2.3062 | -2.3510 |
| 0.0054 | 2.99 | 5800 | 0.7520 | -4.5169 | -8.3079 | 0.7812 | 3.7911 | -340.0493 | -299.4038 | -2.3081 | -2.3530 |

### Framework versions

- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.14.0

## Citation

If you find Zephyr-7B-β useful in your work, please cite it with:

```
@misc{tunstall2023zephyr,
      title={Zephyr: Direct Distillation of LM Alignment},
      author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
      year={2023},
      eprint={2310.16944},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_HuggingFaceH4__zephyr-7b-beta)

| Metric | Value |
|-----------------------|-------|
| Avg. | 52.15 |
| ARC (25-shot) | 62.03 |
| HellaSwag (10-shot) | 84.36 |
| MMLU (5-shot) | 61.07 |
| TruthfulQA (0-shot) | 57.45 |
| Winogrande (5-shot) | 77.74 |
| GSM8K (5-shot) | 12.74 |
| DROP (3-shot) | 9.66 |
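The reward and log-probability columns in the training log above follow the DPO convention: a "reward" is a β-scaled log-probability ratio between the policy and a frozen reference model, and "rewards/margins" is the chosen-minus-rejected gap the loss pushes apart. A minimal sketch of how one row's columns relate; β = 0.1 and all log-probability values here are illustrative, not taken from the log:

```python
import math

beta = 0.1  # illustrative DPO temperature, not taken from the card

# hypothetical summed log-probs for one chosen/rejected response pair
policy_logp_chosen, ref_logp_chosen = -299.40, -254.23
policy_logp_rejected, ref_logp_rejected = -340.05, -257.00

# DPO "rewards" are beta-scaled log-ratios against the reference model
reward_chosen = beta * (policy_logp_chosen - ref_logp_chosen)
reward_rejected = beta * (policy_logp_rejected - ref_logp_rejected)
margin = reward_chosen - reward_rejected  # the "rewards/margins" column

# the DPO loss is -log sigmoid(margin): small when chosen is clearly preferred
loss = -math.log(1.0 / (1.0 + math.exp(-margin)))

print(round(reward_chosen, 4), round(reward_rejected, 4),
      round(margin, 4), round(loss, 4))
```

With a large positive margin like this, the per-pair loss is already close to zero, which mirrors how the "Training Loss" column collapses once the margins column grows.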
{"language": ["en"], "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrachat_200k", "HuggingFaceH4/ultrafeedback_binarized"], "base_model": "mistralai/Mistral-7B-v0.1", "widget": [{"example_title": "Pirate!", "messages": [{"role": "system", "content": "You are a pirate chatbot who always responds with Arr!"}, {"role": "user", "content": "There's a llama on my lawn, how can I get rid of him?"}], "output": {"text": "Arr! 'Tis a puzzlin' matter, me hearty! A llama on yer lawn be a rare sight, but I've got a plan that might help ye get rid of 'im. Ye'll need to gather some carrots and hay, and then lure the llama away with the promise of a tasty treat. Once he's gone, ye can clean up yer lawn and enjoy the peace and quiet once again. But beware, me hearty, for there may be more llamas where that one came from! Arr!"}}], "pipeline_tag": "text-generation", "model-index": [{"name": "zephyr-7b-beta", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 62.03071672354948, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 84.35570603465445, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Drop (3-Shot)", "type": "drop", "split": "validation", "args": 
{"num_few_shot": 3}}, "metrics": [{"type": "f1", "value": 9.66243708053691, "name": "f1 score"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 57.44916942762855}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 12.736921910538287, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 61.07, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 77.7426992896606, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text 
Generation"}, "dataset": {"name": "AlpacaEval", "type": "tatsu-lab/alpaca_eval"}, "metrics": [{"type": "unknown", "value": 0.906, "name": "win rate"}], "source": {"url": "https://tatsu-lab.github.io/alpaca_eval/"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MT-Bench", "type": "unknown"}, "metrics": [{"type": "unknown", "value": 7.34, "name": "score"}], "source": {"url": "https://huggingface.co/spaces/lmsys/mt-bench"}}]}]}
bwuzhang/test_5
null
[ "transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "en", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:HuggingFaceH4/ultrafeedback_binarized", "arxiv:2305.18290", "arxiv:2310.16944", "base_model:mistralai/Mistral-7B-v0.1", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T13:46:03+00:00
text-generation
transformers
{}
mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-EXL2
null
[ "transformers", "safetensors", "llama", "text-generation", "two stage dpo", "dpo", "exl2", "conversational", "de", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "5-bit", "region:us" ]
null
2024-04-24T13:46:27+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOOwO/stable-lol2
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:46:35+00:00
text-generation
transformers
{}
Alignment-Lab-AI/Neural-network-medium-5b-16k
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T13:47:43+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
hibikaze/gpt_0.084B_en-ja_step3815
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T13:49:08+00:00
null
null
{}
hari02/llava-1.5-7b-hf-ft-mix-vsft
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-04-24T13:49:16+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# LoRA text2image fine-tuning - sassad/face-lora

These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the /home/lch/face/images dataset. You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

## Intended uses & limitations

#### How to use

```python
# Illustrative example (not from the model author): load the base pipeline
# with diffusers and apply these LoRA weights on top of it.
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("sassad/face-lora")

image = pipeline("a portrait photo of a face").images[0]
image.save("face.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training", "lora", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training", "lora", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training", "lora"], "inference": true, "base_model": "runwayml/stable-diffusion-v1-5"}
sassad/face-lora
null
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
null
2024-04-24T13:49:45+00:00
null
null
{}
mmnga/Phi-3-mini-128k-instruct-gguf
null
[ "gguf", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:mit", "region:us" ]
null
2024-04-24T13:50:51+00:00
null
transformers
# Pointwise MonoBERT trained on Baidu-ULTR with Inverse Propensity Scoring (IPS)

A flax-based MonoBERT cross encoder trained on the [Baidu-ULTR](https://arxiv.org/abs/2207.03051) dataset with the **pointwise sigmoid cross-entropy loss with IPS correction** suggested by [Bekker et al.](https://arxiv.org/abs/1809.03207) and [Saito et al.](https://arxiv.org/abs/1909.03601). The loss uses inverse propensity scoring to mitigate position bias in click data by up-weighting clicks on items that are less likely to be observed by users. For more info, [read our paper](https://arxiv.org/abs/2404.02543) and [find the code for this model here](https://github.com/philipphager/baidu-bert-model).

## Test Results on Baidu-ULTR

Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (≈297k queries).

| Model | Log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 |
|-------|----------------|-------|-------|-------|--------|---------|--------|
| [Pointwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-pointwise) | 0.227 | 1.641 | 3.462 | 4.752 | 7.251 | 0.357 | 0.609 |
| [Pointwise Two-Tower](https://huggingface.co/philipphager/baidu-ultr_uva-bert_twotower) | 0.218 | 1.629 | 3.471 | 4.822 | 7.456 | 0.367 | 0.607 |
| [Pointwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-pointwise) | 0.222 | 1.295 | 2.811 | 3.977 | 6.296 | 0.307 | 0.534 |
| [Listwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-listwise) | - | 1.947 | 4.108 | 5.614 | 8.478 | 0.405 | 0.639 |
| [Listwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-listwise) | - | 1.671 | 3.530 | 4.873 | 7.450 | 0.361 | 0.603 |
| [Listwise DLA](https://huggingface.co/philipphager/baidu-ultr_uva-bert_dla) | - | 1.796 | 3.730 | 5.125 | 7.802 | 0.377 | 0.615 |

## Usage

Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our [training](https://github.com/philipphager/baidu-bert-model/blob/main/main.py) and [evaluation scripts](https://github.com/philipphager/baidu-bert-model/blob/main/eval.py) in our code repository.

```python
import jax.numpy as jnp

from src.model import IPSCrossEncoder

model = IPSCrossEncoder.from_pretrained(
    "philipphager/baidu-ultr_uva-bert_ips-pointwise",
)

# Mock batch following Baidu-ULTR with 4 documents, each with 8 tokens
batch = {
    # Query_id for each document
    "query_id": jnp.array([1, 1, 1, 1]),
    # Document position in SERP
    "positions": jnp.array([1, 2, 3, 4]),
    # Token ids for: [CLS] Query [SEP] Document
    "tokens": jnp.array([
        [2, 21448, 21874, 21436, 1, 20206, 4012, 2860],
        [2, 21448, 21874, 21436, 1, 16794, 4522, 2082],
        [2, 21448, 21874, 21436, 1, 20206, 10082, 9773],
        [2, 21448, 21874, 21436, 1, 2618, 8520, 2860],
    ]),
    # Specify if a token id belongs to the query (0) or document (1)
    "token_types": jnp.array([
        [0, 0, 0, 0, 1, 1, 1, 1],
        [0, 0, 0, 0, 1, 1, 1, 1],
        [0, 0, 0, 0, 1, 1, 1, 1],
        [0, 0, 0, 0, 1, 1, 1, 1],
    ]),
    # Marks if a token should be attended to (True) or ignored, e.g., padding tokens (False):
    "attention_mask": jnp.array([
        [True, True, True, True, True, True, True, True],
        [True, True, True, True, True, True, True, True],
        [True, True, True, True, True, True, True, True],
        [True, True, True, True, True, True, True, True],
    ]),
}

outputs = model(batch, train=False)
print(outputs)
```

## Reference

```
@inproceedings{Hager2024BaiduULTR,
  author = {Philipp Hager and Romain Deffayet and Jean-Michel Renders and Onno Zoeter and Maarten de Rijke},
  title = {Unbiased Learning to Rank Meets Reality: Lessons from Baidu's Large-Scale Search Dataset},
  booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR`24)},
  organization = {ACM},
  year = {2024},
}
```
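The IPS-corrected pointwise objective described in the card above can be sketched in a few lines. This is an illustrative re-implementation of the Bekker/Saito-style estimator, not the repository's actual code, and the propensity values below are made up: clicks are divided by the examination probability of their position, so a click at a rarely examined rank counts for more.

```python
import numpy as np

def ips_pointwise_loss(scores, clicks, propensities):
    """Inverse-propensity-scored sigmoid cross-entropy.

    Each click is re-weighted by 1/propensity to correct for position bias.
    With all propensities equal to 1, this reduces to plain binary
    cross-entropy on raw clicks.
    """
    p = 1.0 / (1.0 + np.exp(-scores))  # predicted click/relevance probability
    w = clicks / propensities          # IPS-weighted relevance label
    # per-document loss: w * -log p(relevant) + (1 - w) * -log p(not relevant)
    loss = -(w * np.log(p) + (1.0 - w) * np.log(1.0 - p))
    return loss.mean()

scores = np.array([2.0, 0.5, -0.3, -1.0])      # model outputs for 4 documents
clicks = np.array([1.0, 0.0, 1.0, 0.0])        # observed clicks
propensities = np.array([1.0, 0.8, 0.5, 0.4])  # made-up examination probabilities

print(ips_pointwise_loss(scores, clicks, propensities))
```

Note that the weight `w` can exceed 1 for clicks at low-propensity positions, which is exactly what makes the estimator unbiased in expectation under the examination model.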
{"license": "mit", "datasets": ["philipphager/baidu-ultr-pretrain", "philipphager/baidu-ultr_uva-mlm-ctr"], "metrics": ["log-likelihood", "dcg@1", "dcg@3", "dcg@5", "dcg@10", "ndcg@10", "mrr@10"], "co2_eq_emissions": {"emissions": 2090, "source": "Calculated using the [ML CO2 impact calculator](https://mlco2.github.io/impact/#compute), training for 4 x 45 hours with a carbon efficiency of 0.029 kg/kWh. You can inspect the carbon efficiency of the French national grid provider here: https://www.rte-france.com/eco2mix/les-emissions-de-co2-par-kwh-produit-en-france", "training_type": "Pre-training", "geographical_location": "Grenoble, France", "hardware_used": "4 NVIDIA H100-80GB GPUs"}}
philipphager/baidu-ultr_uva-bert_ips-pointwise
null
[ "transformers", "safetensors", "bert", "dataset:philipphager/baidu-ultr-pretrain", "dataset:philipphager/baidu-ultr_uva-mlm-ctr", "arxiv:2207.03051", "arxiv:1809.03207", "arxiv:1909.03601", "arxiv:2404.02543", "license:mit", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:51:04+00:00
null
transformers
# hus960/firefly-qwen1.5-en-7b-dpo-v0.1-Q4_K_M-GGUF

This model was converted to GGUF format from [`YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1`](https://huggingface.co/YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo hus960/firefly-qwen1.5-en-7b-dpo-v0.1-Q4_K_M-GGUF --model firefly-qwen1.5-en-7b-dpo-v0.1.Q4_K_M.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo hus960/firefly-qwen1.5-en-7b-dpo-v0.1-Q4_K_M-GGUF --model firefly-qwen1.5-en-7b-dpo-v0.1.Q4_K_M.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m firefly-qwen1.5-en-7b-dpo-v0.1.Q4_K_M.gguf -n 128
```
{"license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "basemodel": "Qwen/Qwen1.5-7B"}
hus960/firefly-qwen1.5-en-7b-dpo-v0.1-Q4_K_M-GGUF
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:51:23+00:00
null
null
{"license": "llama3"}
ihoryavoriv/test-8b
null
[ "license:llama3", "region:us" ]
null
2024-04-24T13:51:38+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
marcelomathias/mistral_7b_lora_equus
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:51:45+00:00
null
null
{}
YUDIdi/qwen1.5-q4
null
[ "region:us" ]
null
2024-04-24T13:51:55+00:00
text-to-image
diffusers
# Nazareth <Gallery /> ## Trigger words You should use `Atidira` or `Dira` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Antiquarian/Nazareth/tree/main) them in the Files & versions tab.
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "Hyper realistic, A RAW Photo of a ((nude)) girl, shirt lift,open clothes, open shirt, cleavage, nipple slip,huge breasts, boob, breast slip, underboob, sideboob, (((skin detail))), HD, perfect, perky boobs, high quality, detailed pussy, innie, perfect pussy, bright pussy, shaved pussy, no pubes, (small nipples), (small areola),young female, dark nipple, big areola, hard nipple, black nipple, very dark nipple, ultra HD, detailed nipple, photorealistic, topless, cleavage, shirt lift, big breast, large breasts, nude, naked, braless ,head scarf,clothes removed, <lora:AtidiraLoRA:1>", "parameters": {"negative_prompt": "(((smooth skin))), extra nipples, deformed body, (((deformed breast))), (((mutated breast))), deformed pussy, deformed nipples, low quality, medium quality, extra fingers, missing fingers, mutated fingers, missing nipples, missing breasts, extra breasts, missing arms, cgi, airbrush, cartoon, unequal boob size, oversized vagina, piercings, unnatural nipples, pussy hair, (((pubes))), smooth skin, dark nipples, gaussian, blur, blurry, (((hair))), (((hairs))), monochrome, "}, "output": {"url": "images/00036-2598909348.png"}}], "base_model": "runwayml/stable-diffusion-v1-5", "instance_prompt": "Atidira, Dira"}
Antiquarian/Nazareth
null
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "region:us" ]
null
2024-04-24T13:52:15+00:00
text-generation
transformers
{}
cjsanjay/llama-2-7b-gorilla-open-function_v1
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T13:52:27+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results_bert_10K This model is a fine-tuned version of [google-bert/bert-large-cased](https://huggingface.co/google-bert/bert-large-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.2 - train_batch_size: 8 - eval_batch_size: 8 - seed: 8446 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google-bert/bert-large-cased", "model-index": [{"name": "results_bert_10K", "results": []}]}
Elkelouizajo/bert_mnli_10K
null
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-large-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:52:50+00:00
null
transformers
# Listwise MonoBERT trained on Baidu-ULTR with Inverse Propensity Scoring (IPS) A flax-based MonoBERT cross encoder trained on the [Baidu-ULTR](https://arxiv.org/abs/2207.03051) dataset with a **listwise softmax cross-entropy loss with IPS correction**, adapted from the work of [Ai et al](https://arxiv.org/abs/1804.05938). The loss uses inverse propensity scoring to mitigate position bias in click data by weighting clicks higher on items that are less likely to be observed by users. For more info, [read our paper](https://arxiv.org/abs/2404.02543) and [find the code for this model here](https://github.com/philipphager/baidu-bert-model). ## Test Results on Baidu-ULTR Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (≈297k queries). | Model | Log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 | |------------------------------------------------------------------------------------------------|----------------|-------|-------|-------|--------|---------|--------| | [Pointwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-pointwise) | 0.227 | 1.641 | 3.462 | 4.752 | 7.251 | 0.357 | 0.609 | | [Pointwise Two-Tower](https://huggingface.co/philipphager/baidu-ultr_uva-bert_twotower) | 0.218 | 1.629 | 3.471 | 4.822 | 7.456 | 0.367 | 0.607 | | [Pointwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-pointwise) | 0.222 | 1.295 | 2.811 | 3.977 | 6.296 | 0.307 | 0.534 | | [Listwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-listwise) | - | 1.947 | 4.108 | 5.614 | 8.478 | 0.405 | 0.639 | | [Listwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-listwise) | - | 1.671 | 3.530 | 4.873 | 7.450 | 0.361 | 0.603 | | [Listwise DLA](https://huggingface.co/philipphager/baidu-ultr_uva-bert_dla) | - | 1.796 | 3.730 | 5.125 | 7.802 | 0.377 | 0.615 | ## 
Usage Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our [training](https://github.com/philipphager/baidu-bert-model/blob/main/main.py) and [evaluation scripts](https://github.com/philipphager/baidu-bert-model/blob/main/eval.py) in our code repository. ```Python import jax.numpy as jnp from src.model import ListwiseIPSCrossEncoder model = ListwiseIPSCrossEncoder.from_pretrained( "philipphager/baidu-ultr_uva-bert_ips-listwise", ) # Mock batch following Baidu-ULTR with 4 documents, each with 8 tokens batch = { # Query_id for each document "query_id": jnp.array([1, 1, 1, 1]), # Document position in SERP "positions": jnp.array([1, 2, 3, 4]), # Token ids for: [CLS] Query [SEP] Document "tokens": jnp.array([ [2, 21448, 21874, 21436, 1, 20206, 4012, 2860], [2, 21448, 21874, 21436, 1, 16794, 4522, 2082], [2, 21448, 21874, 21436, 1, 20206, 10082, 9773], [2, 21448, 21874, 21436, 1, 2618, 8520, 2860], ]), # Specify if a token id belongs to the query (0) or document (1) "token_types": jnp.array([ [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], ]), # Marks if a token should be attended to (True) or ignored, e.g., padding tokens (False): "attention_mask": jnp.array([ [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], ]), } outputs = model(batch, train=False) print(outputs) ``` ## Reference ``` @inproceedings{Hager2024BaiduULTR, author = {Philipp Hager and Romain Deffayet and Jean-Michel Renders and Onno Zoeter and Maarten de Rijke}, title = {Unbiased Learning to Rank Meets Reality: Lessons from Baidu’s Large-Scale Search Dataset}, booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in 
Information Retrieval (SIGIR`24)}, organization = {ACM}, year = {2024}, } ```
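The IPS-corrected listwise objective described in this card can be sketched in a few lines of plain Python. This is an illustrative reduction of the idea, not the repository's implementation: the function name is ours, and the per-position examination propensities are assumed to have been estimated beforehand.

```python
import math

def ips_listwise_loss(scores, clicks, positions, propensities):
    """Listwise softmax cross-entropy with inverse propensity weighting.

    scores:       relevance scores for the documents of one query
    clicks:       0/1 click indicators for the same documents
    positions:    1-based SERP positions of the documents
    propensities: estimated examination probability per position
    """
    # Numerically stable log-softmax over the document list.
    m = max(scores)
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    log_probs = [s - log_z for s in scores]
    # Clicks at rarely examined positions are up-weighted by 1 / propensity.
    return -sum(
        (click / propensities[pos - 1]) * lp
        for click, pos, lp in zip(clicks, positions, log_probs)
    )

loss = ips_listwise_loss(
    scores=[2.0, 1.0, 0.5, 0.1],
    clicks=[1, 0, 0, 0],
    positions=[1, 2, 3, 4],
    propensities=[1.0, 0.7, 0.5, 0.3],
)
```

With uniform propensities this reduces to the naive listwise softmax loss; lower propensities at deeper positions increase the weight of the (rarer) clicks observed there.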
{"license": "mit", "datasets": ["philipphager/baidu-ultr-pretrain", "philipphager/baidu-ultr_uva-mlm-ctr"], "metrics": ["log-likelihood", "dcg@1", "dcg@3", "dcg@5", "dcg@10", "ndcg@10", "mrr@10"], "co2_eq_emissions": {"emissions": 2090, "source": "Calculated using the [ML CO2 impact calculator](https://mlco2.github.io/impact/#compute), training for 4 x 45 hours with a carbon efficiency of 0.029 kg/kWh. You can inspect the carbon efficiency of the French national grid provider here: https://www.rte-france.com/eco2mix/les-emissions-de-co2-par-kwh-produit-en-france", "training_type": "Pre-training", "geographical_location": "Grenoble, France", "hardware_used": "4 NVIDIA H100-80GB GPUs"}}
philipphager/baidu-ultr_uva-bert_ips-listwise
null
[ "transformers", "safetensors", "bert", "dataset:philipphager/baidu-ultr-pretrain", "dataset:philipphager/baidu-ultr_uva-mlm-ctr", "arxiv:2207.03051", "arxiv:1804.05938", "arxiv:2404.02543", "license:mit", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:53:15+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
{"library_name": "peft", "base_model": "meta-llama/Llama-2-13b-chat-hf"}
bmehrba/Llama-2-13b-chat-hf-fine-tuned-adapters_Epistemic_Llama13b_0.0_Seed105
null
[ "peft", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-13b-chat-hf", "region:us" ]
null
2024-04-24T13:53:23+00:00
null
null
RegalHyperus' mirror of the fixed KLMv7s pretrain by SeoulStreamingStation. Go to https://huggingface.co/SeoulStreamingStation/KLMv7s for the OG.
{}
RegalHyperus/KLMv7sMirror
null
[ "region:us" ]
null
2024-04-24T13:53:33+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
{"library_name": "peft", "base_model": "meta-llama/Llama-2-13b-chat-hf"}
bmehrba/Llama-2-13b-chat-hf-fine-tuned_Epistemic_Llama13b_0.0_Seed105
null
[ "peft", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-13b-chat-hf", "region:us" ]
null
2024-04-24T13:53:42+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
JFernandoGRE/mixtral_8x7b_augmenteddemocracy_dups_all4_25
null
[ "transformers", "safetensors", "mixtral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-24T13:54:06+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HPY_gpt2_v2 This model is a fine-tuned version of [ClassCat/gpt2-base-french](https://huggingface.co/ClassCat/gpt2-base-french) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1060 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 65 | 2.4314 | | No log | 2.0 | 131 | 2.1999 | | No log | 2.99 | 196 | 2.1269 | | No log | 3.96 | 260 | 2.1060 | ### Framework versions - Transformers 4.30.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.13.3
{"license": "cc-by-sa-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "HPY_gpt2_v2", "results": []}]}
azizkt/HPY_gpt2_v2
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T13:56:12+00:00
object-detection
transformers
{}
LuckyTemmie/detr-resnet-50_finetuned_cppe5
null
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:56:20+00:00
text-generation
transformers
Quantizations of https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b # From original readme This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details Prompt template: Alpaca, maybe ChatML * measurement.json for quanting exl2 included. - [4.2bpw-exl2](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b-4.2bpw-exl2) - [6.5bpw-exl2](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b-6.5bpw-exl2) - [8bpw-exl2](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b-8bpw-exl2) thx mradermacher and SilverFan for * [mradermacher/WestIceLemonTeaRP-32k-GGUF](https://huggingface.co/mradermacher/WestIceLemonTeaRP-32k-GGUF) * [SilverFan/WestIceLemonTeaRP-7b-32k-GGUF](https://huggingface.co/SilverFan/WestIceLemonTeaRP-7b-32k-GGUF) ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [IceLemonTeaRP-32k-7b](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b) * WestWizardIceLemonTeaRP * [SeverusWestLake-7B-DPO](https://huggingface.co/s3nh/SeverusWestLake-7B-DPO) * WizardIceLemonTeaRP * [Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B) * [IceLemonTeaRP-32k-7b](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: IceLemonTeaRP-32k-7b layer_range: [0, 32] - model: WestWizardIceLemonTeaRP layer_range: [0, 32] merge_method: slerp base_model: IceLemonTeaRP-32k-7b parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: float16 ``` ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63407b719dbfe0d48b2d763b/GX-kV-H8_zAJz5hHL8A7G.png) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found 
[here](https://huggingface.co/datasets/open-llm-leaderboard/details_icefog72__WestIceLemonTeaRP-32k-7b) | Metric |Value| |---------------------------------|----:| |Avg. |71.27| |AI2 Reasoning Challenge (25-Shot)|68.77| |HellaSwag (10-Shot) |86.89| |MMLU (5-Shot) |64.28| |TruthfulQA (0-shot) |62.47| |Winogrande (5-shot) |80.98| |GSM8k (5-shot) |64.22|
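The SLERP merge above interpolates the two donors' weights along the arc between them, with a per-layer `t` schedule. A minimal dependency-free sketch of spherical interpolation (illustrative only, not mergekit's actual implementation):

```python
import math

def slerp(v0, v1, t, eps=1e-8):
    """Spherical linear interpolation: t=0 gives v0, t=1 gives v1,
    intermediate t follows the arc between the two directions."""
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (n0 * n1)
    dot = max(-1.0, min(1.0, dot))          # guard acos against rounding
    theta = math.acos(dot)
    if abs(math.sin(theta)) < eps:          # nearly colinear: plain lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

print(slerp([1.0, 0.0], [0.0, 1.0], 0.5))  # midpoint on the arc
```

In the YAML config, `t` is itself interpolated across the 32 layers, so e.g. the self-attention tensors blend with different ratios at different depths.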
{"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "WestIceLemonTeaRP-32k-7b", "icefog72"], "inference": false, "pipeline_tag": "text-generation"}
duyntnet/WestIceLemonTeaRP-32k-7b-imatrix-GGUF
null
[ "transformers", "gguf", "imatrix", "WestIceLemonTeaRP-32k-7b", "icefog72", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-24T13:56:53+00:00
null
null
{}
kumar19/gesture_to_speech
null
[ "region:us" ]
null
2024-04-24T13:57:18+00:00
text-generation
null
# Phi 3 Mini 4K Instruct GGUF **Original model**: [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) **Model creator**: [Microsoft](https://huggingface.co/microsoft) This repo contains GGUF format model files for Microsoft’s Phi 3 Mini 4K Instruct. > The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. Learn more on Microsoft’s [Model page](https://azure.microsoft.com/en-us/blog/introducing-phi-3-redefining-whats-possible-with-slms/). ### What is GGUF? GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Converted with llama.cpp build 2721 (revision [28103f4](https://github.com/ggerganov/llama.cpp/commit/28103f4832e301a9c84d44ff0df9d75d46ab6c76)), using [autogguf](https://github.com/brittlewis12/autogguf). ### Prompt template ``` <|system|> {{system_prompt}}<|end|> <|user|> {{prompt}}<|end|> <|assistant|> ``` --- ## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac! ![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg) [cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device: - create & save **Characters** with custom system prompts & temperature settings - download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)! - make it your own with custom **Theme colors** - powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming! - **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)! 
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date --- ## Original Model Evaluation > As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. > The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. > More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. > > The number of k–shot examples is listed per-benchmark. | | Phi-3-Mini-4K-In<br>3.8b | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 | |---|---|---|---|---|---|---|---| | MMLU <br>5-Shot | 68.8 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 | | HellaSwag <br> 5-Shot | 76.7 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 | | ANLI <br> 7-Shot | 52.8 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 | | GSM-8K <br> 0-Shot; CoT | 82.5 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 | | MedQA <br> 2-Shot | 53.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 | | AGIEval <br> 0-Shot | 37.5 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 | | TriviaQA <br> 5-Shot | 64.0 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 | | Arc-C <br> 10-Shot | 84.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 | | Arc-E <br> 10-Shot | 94.6 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 | | PIQA <br> 5-Shot | 84.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 | | SociQA <br> 5-Shot | 76.6 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 | | BigBench-Hard <br> 0-Shot | 71.7 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 | | WinoGrande <br> 5-Shot | 70.8 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 | | OpenBookQA <br> 10-Shot | 83.2 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 | | BoolQ <br> 0-Shot | 77.6 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 | | CommonSenseQA <br> 10-Shot | 80.2 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 | | TruthfulQA <br> 10-Shot | 65.0 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 
| | HumanEval <br> 0-Shot | 59.1 | 47.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 | | MBPP <br> 3-Shot | 53.8 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
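The prompt template above can be rendered with a few lines of string formatting; a hypothetical helper (not part of the official release):

```python
def format_phi3(user_prompt, system_prompt=None):
    """Render one turn into the Phi-3 chat template shown above."""
    parts = []
    if system_prompt:
        parts.append(f"<|system|>\n{system_prompt}<|end|>")
    parts.append(f"<|user|>\n{user_prompt}<|end|>")
    parts.append("<|assistant|>")  # the model's completion continues from here
    return "\n".join(parts)

print(format_phi3("Why is the sky blue?", "You are a helpful assistant."))
```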
{"language": ["en"], "license": "mit", "tags": ["nlp", "code"], "model_name": "Phi-3-mini-4k-instruct", "base_model": "microsoft/Phi-3-mini-4k-instruct", "inference": false, "license_link": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "model_creator": "microsoft", "model_type": "phi3", "quantized_by": "brittlewis12"}
brittlewis12/Phi-3-mini-4k-instruct-GGUF
null
[ "gguf", "nlp", "code", "text-generation", "en", "base_model:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2024-04-24T13:59:03+00:00
image-to-image
diffusers
# BRIA 2.3 ControlNet ColorGrid Model Card BRIA 2.3 ControlNet-ColorGrid, trained on the foundation of [BRIA 2.3 Text-to-Image](https://huggingface.co/briaai/BRIA-2.3), enables the generation of high-quality images guided by a textual prompt and the extracted color grid from the input image. This allows for the creation of different scenes, all sharing the same color grid. [BRIA 2.3](https://huggingface.co/briaai/BRIA-2.3) was trained from scratch exclusively on licensed data from our esteemed data partners. Therefore, they are safe for commercial use and provide full legal liability coverage for copyright and privacy infringement, as well as harmful content mitigation. That is, our dataset does not contain copyrighted materials, such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content. ![ColorGrid Example](https://huggingface.co/briaai/BRIA-2.3-ControlNet-ColorGrid/resolve/main/exp_1.png) ### Model Description - **Developed by:** BRIA AI - **Model type:** [ControlNet](https://huggingface.co/docs/diffusers/using-diffusers/controlnet) for Latent diffusion - **License:** [bria-2.3](https://bria.ai/bria-huggingface-model-license-agreement/) - **Model Description:** ControlNet ColorGrid for BRIA 2.3 Text-to-Image model. The model generates images guided by a spatial grid of RGB colors. - **Resources for more information:** [BRIA AI](https://bria.ai/) ### Get Access BRIA 2.3 ControlNet-ColorGrid requires access to BRIA 2.3 Text-to-Image. For more information, [click here](https://huggingface.co/briaai/BRIA-2.3). 
### Code example using Diffusers ``` pip install diffusers ``` ```py from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline import torch from PIL import Image controlnet = ControlNetModel.from_pretrained( "briaai/BRIA-2.3-ControlNet-ColorGrid", torch_dtype=torch.float16 ) pipe = StableDiffusionXLControlNetPipeline.from_pretrained( "briaai/BRIA-2.3", controlnet=controlnet, torch_dtype=torch.float16, ) pipe.to("cuda") prompt = "A portrait of a Beautiful and playful ethereal singer, golden designs, highly detailed, blurry background" negative_prompt = "Logo,Watermark,Text,Ugly,Morbid,Extra fingers,Poorly drawn hands,Mutation,Blurry,Extra limbs,Gross proportions,Missing arms,Mutated hands,Long neck,Duplicate,Mutilated,Mutilated hands,Poorly drawn face,Deformed,Bad anatomy,Cloned face,Malformed limbs,Missing legs,Too many fingers" # Create ColorGrid image input_image = Image.open('pics/singer.png') control_image = input_image.resize((16, 16)).resize((1024,1024), Image.NEAREST) image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=control_image, controlnet_conditioning_scale=1.0, height=1024, width=1024).images[0] ```
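The `resize((16, 16)).resize((1024, 1024), Image.NEAREST)` step above is coarse pixelation: collapse the image to a small grid of colors, then blow each cell back up as a flat block. A dependency-free sketch of the same idea on a toy grayscale grid (illustrative only; PIL's default downscale filter is not a plain average, so use the snippet above in practice):

```python
def color_grid(pixels, grid):
    """Average an image down to a grid x grid mosaic, then scale each
    cell back up nearest-neighbor style so every block is one flat value."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // grid, w // grid
    out = [[0] * w for _ in range(h)]
    for gy in range(grid):
        for gx in range(grid):
            block = [pixels[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            avg = sum(block) // len(block)
            for y in range(gy * bh, (gy + 1) * bh):
                for x in range(gx * bw, (gx + 1) * bw):
                    out[y][x] = avg
    return out

print(color_grid([[0, 2], [4, 6]], grid=1))  # → [[3, 3], [3, 3]]
```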
{"license": "other", "tags": ["text-to-image", "controlnet model", "legal liability", "commercial use"], "license_name": "bria-2.3", "license_link": "https://bria.ai/bria-huggingface-model-license-agreement/", "pipeline_tag": "image-to-image", "inference": false, "extra_gated_prompt": "This model weights by BRIA AI can be obtained after a commercial license is agreed upon. Fill in the form below and we reach out to you.", "extra_gated_fields": {"Name": "text", "Company/Org name": "text", "Org Type (Early/Growth Startup, Enterprise, Academy)": "text", "Role": "text", "Country": "text", "Email": "text", "By submitting this form, I agree to BRIA\u2019s Privacy policy and Terms & conditions, see links below": "checkbox"}}
briaai/BRIA-2.3-ControlNet-ColorGrid
null
[ "diffusers", "text-to-image", "controlnet model", "legal liability", "commercial use", "image-to-image", "license:other", "region:us" ]
null
2024-04-24T13:59:14+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/HirCoir/MiniChat-1.5-3B-Sorah <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.Q2_K.gguf) | Q2_K | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.IQ3_XS.gguf) | IQ3_XS | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.IQ3_S.gguf) | IQ3_S | 1.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.Q3_K_S.gguf) | Q3_K_S | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.IQ3_M.gguf) | IQ3_M | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.Q3_K_M.gguf) | Q3_K_M | 1.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.Q3_K_L.gguf) | Q3_K_L | 1.7 | | | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.IQ4_XS.gguf) | IQ4_XS | 1.8 | | | 
[GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.Q4_K_M.gguf) | Q4_K_M | 1.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.Q5_K_S.gguf) | Q5_K_S | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.Q5_K_M.gguf) | Q5_K_M | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.Q6_K.gguf) | Q6_K | 2.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.Q8_0.gguf) | Q8_0 | 3.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/MiniChat-1.5-3B-Sorah-GGUF/resolve/main/MiniChat-1.5-3B-Sorah.f16.gguf) | f16 | 6.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["unsloth", "trl", "sft"], "base_model": "HirCoir/MiniChat-1.5-3B-Sorah", "quantized_by": "mradermacher"}
mradermacher/MiniChat-1.5-3B-Sorah-GGUF
null
[ "transformers", "gguf", "unsloth", "trl", "sft", "en", "base_model:HirCoir/MiniChat-1.5-3B-Sorah", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:00:07+00:00
text-generation
transformers
<br> <br> # LLaVA Model Card ## Model details **Model type:** LLaVA is an open-source chatbot trained by fine-tuning LLM on multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture. Base LLM: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) **Model date:** LLaVA-v1.6-Mistral-7B was trained in December 2023. **Paper or resources for more information:** https://llava-vl.github.io/ ## License [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) license. **Where to send questions or comments about the model:** https://github.com/haotian-liu/LLaVA/issues ## Intended use **Primary intended uses:** The primary use of LLaVA is research on large multimodal models and chatbots. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Training dataset - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. - 158K GPT-generated multimodal instruction-following data. - 500K academic-task-oriented VQA data mixture. - 50K GPT-4V data mixture. - 40K ShareGPT data. ## Evaluation dataset A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
{"license": "apache-2.0", "inference": false}
TitanML/llava-v1.6-mistral-7b
null
[ "transformers", "safetensors", "llava", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "region:us" ]
null
2024-04-24T14:01:48+00:00
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="hossniper/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
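For readers new to Q-Learning, the tabular update behind this agent fits in a few lines; a toy sketch with illustrative hyperparameters (not the ones used to train this checkpoint):

```python
def q_update(Q, s, a, r, s_next, alpha=0.7, gamma=0.95):
    """One TD backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (target - Q[s][a])

# Toy 2-state MDP: taking action 1 in state 0 pays reward 1.
Q = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(50):
    q_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q[0])  # action 1 now dominates the greedy policy in state 0
```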
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
hossniper/Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-24T14:02:03+00:00
null
null
{}
bboulb/models
null
[ "region:us" ]
null
2024-04-24T14:02:38+00:00
null
null
{}
mizoru/whisper-small-ru-ORD_0.7_peft_0.1
null
[ "safetensors", "region:us" ]
null
2024-04-24T14:02:57+00:00
reinforcement-learning
ml-agents
# **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: MY11111111/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]}
MY11111111/ppo-SnowballTarget
null
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
null
2024-04-24T14:04:06+00:00
text-generation
transformers
<br/><br/> Testing... 8bpw/h8 exl2 quantization of [xxx777xxxASD/ChaoticSoliloquy-4x8B](https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B) using [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) calibration dataset (l=8192, r=200). --- **ORIGINAL CARD:** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/jgyhmI451GRXri5hEj3lh.png) (Maybe I'll change the waifu picture later) Experimental RP-oriented MoE; the idea was to get a model that would be equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks. [GGUF, Exl2](https://huggingface.co/collections/xxx777xxxASD/chaoticsoliloquy-4x8b-6628a759b5a60d8d3f51ed62) ### ChaoticSoliloquy-4x8B ``` base_model: jeiku_Chaos_RP_l3_8B gate_mode: random dtype: bfloat16 experts_per_token: 2 experts: - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.6-L3-8B - source_model: jeiku_Chaos_RP_l3_8B - source_model: openlynn_Llama-3-Soliloquy-8B - source_model: Sao10K_L3-Solana-8B-v1 ``` ## Models used - [ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B) - [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B) - [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B) - [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1) ## Vision [llama3_mmproj](https://huggingface.co/ChaoticNeutrals/Llava_1.5_Llama3_mmproj) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/yv4C6NalqORLjvY3KKZk8.png) ## Prompt format: Llama 3
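With `experts_per_token: 2`, a learned router scores all four experts per token and only the top two are executed. A toy illustration of that top-k gating (illustrative only, not the actual Mixtral routing code):

```python
import math

def top2_route(router_logits):
    """Keep the 2 best-scoring experts and softmax-normalize their weights."""
    top = sorted(range(len(router_logits)),
                 key=lambda i: router_logits[i], reverse=True)[:2]
    exps = [math.exp(router_logits[i]) for i in top]
    z = sum(exps)
    return [(i, e / z) for i, e in zip(top, exps)]

# 4 experts, as in the config above; two get all the routing weight.
print(top2_route([0.1, 2.0, -1.0, 1.5]))
```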
{"language": ["en"], "license": "llama3", "tags": ["moe"]}
JayhC/ChaoticSoliloquy-4x8B-8bpw-h8-exl2-rpcal
null
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-24T14:05:14+00:00
translation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-cn This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.9332 - Bleu: 40.6073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "datasets": ["kde4"], "metrics": ["bleu"], "base_model": "Helsinki-NLP/opus-mt-en-zh", "model-index": [{"name": "marian-finetuned-kde4-en-to-cn", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "config": "en-zh_CN", "split": "train", "args": "en-zh_CN"}, "metrics": [{"type": "bleu", "value": 40.60734916422996, "name": "Bleu"}]}]}]}
zhenchuan/marian-finetuned-kde4-en-to-cn
null
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-zh", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:06:12+00:00
null
diffusers
{"license": "mit"}
nathanReitinger/MNIST-diffusion
null
[ "diffusers", "safetensors", "license:mit", "diffusers:DDPMPipeline", "region:us", "has_space" ]
null
2024-04-24T14:07:36+00:00
video-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-isl-numbers-alphabet-nouns This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4278 - Accuracy: 0.8875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 15800 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 4.5228 | 0.02 | 316 | 4.3514 | 0.2534 | | 3.0795 | 1.02 | 632 | 2.8515 | 0.5816 | | 1.8438 | 2.02 | 948 | 1.7508 | 0.7332 | | 1.1451 | 3.02 | 1264 | 1.1464 | 0.7390 | | 1.0637 | 4.02 | 1580 | 0.7995 | 0.7774 | | 0.7795 | 5.02 | 1896 | 0.4938 | 0.8829 | | 0.4484 | 6.02 | 2212 | 0.3833 | 0.8829 | | 0.2162 | 7.02 | 2528 | 0.2512 | 0.9155 | | 0.228 | 8.02 | 2844 | 0.1972 | 0.9309 | | 0.1711 | 9.02 | 3160 | 0.1426 | 0.9482 | | 0.2251 | 10.02 | 3476 | 0.0965 | 0.9559 | | 0.1697 | 11.02 | 3792 | 0.1141 | 0.9539 | | 0.1229 | 12.02 | 4108 | 0.1362 | 0.9539 | | 0.0676 | 13.02 | 4424 | 0.0745 | 0.9655 | | 0.1228 | 14.02 | 4740 | 0.0817 | 0.9635 | | 0.0143 | 15.02 | 5056 | 0.0615 | 0.9693 | | 0.0621 | 16.02 | 5372 | 0.0768 | 0.9597 | | 0.0597 | 17.02 | 5688 | 0.0873 | 0.9635 | | 0.0696 | 18.02 | 6004 | 0.1108 | 0.9539 | | 0.2761 | 19.02 | 6320 | 0.1413 | 0.9520 | | 0.129 | 20.02 | 6636 | 0.1471 | 0.9520 
| | 0.0828 | 21.02 | 6952 | 0.0608 | 0.9674 | | 0.0544 | 22.02 | 7268 | 0.0533 | 0.9712 | | 0.0509 | 23.02 | 7584 | 0.0499 | 0.9750 | | 0.0308 | 24.02 | 7900 | 0.0956 | 0.9597 | | 0.0729 | 25.02 | 8216 | 0.0753 | 0.9731 | | 0.2328 | 26.02 | 8532 | 0.0774 | 0.9655 | | 0.1085 | 27.02 | 8848 | 0.0609 | 0.9693 | | 0.099 | 28.02 | 9164 | 0.0677 | 0.9674 | | 0.1988 | 29.02 | 9480 | 0.1415 | 0.9559 | | 0.0747 | 30.02 | 9796 | 0.0581 | 0.9712 | | 0.0556 | 31.02 | 10112 | 0.0519 | 0.9693 | | 0.0763 | 32.02 | 10428 | 0.0506 | 0.9731 | | 0.0635 | 33.02 | 10744 | 0.0492 | 0.9750 | | 0.0729 | 34.02 | 11060 | 0.0483 | 0.9693 | | 0.0692 | 35.02 | 11376 | 0.0481 | 0.9750 | | 0.1023 | 36.02 | 11692 | 0.0478 | 0.9712 | | 0.0863 | 37.02 | 12008 | 0.0479 | 0.9750 | | 0.0934 | 38.02 | 12324 | 0.0464 | 0.9712 | | 0.0927 | 39.02 | 12640 | 0.0462 | 0.9712 | | 0.0254 | 40.02 | 12956 | 0.0448 | 0.9731 | | 0.043 | 41.02 | 13272 | 0.0450 | 0.9750 | | 0.0695 | 42.02 | 13588 | 0.0448 | 0.9750 | | 0.0398 | 43.02 | 13904 | 0.0440 | 0.9770 | | 0.0455 | 44.02 | 14220 | 0.0436 | 0.9770 | | 0.0423 | 45.02 | 14536 | 0.0437 | 0.9750 | | 0.0602 | 46.02 | 14852 | 0.0438 | 0.9770 | | 0.0407 | 47.02 | 15168 | 0.0437 | 0.9750 | | 0.0435 | 48.02 | 15484 | 0.0435 | 0.9770 | | 0.0463 | 49.02 | 15800 | 0.0436 | 0.9770 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
{"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "MCG-NJU/videomae-base", "model-index": [{"name": "videomae-base-finetuned-isl-numbers-alphabet-nouns", "results": []}]}
latif98/videomae-base-finetuned-isl-numbers-alphabet-nouns
null
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:07:42+00:00
null
null
{}
Mohamedzaarat/content
null
[ "region:us" ]
null
2024-04-24T14:09:28+00:00
null
null
{"license": "apache-2.0"}
Folipe/Coroa
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-24T14:09:50+00:00
null
null
{}
tedad09/PolizzeDonut-Lowercase-5epochs
null
[ "region:us" ]
null
2024-04-24T14:10:07+00:00
text-classification
transformers
{}
aravindhank/tiny-bert-sst2-distilled
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:10:16+00:00
text-classification
transformers
Evaluation results: - eval_accuracy: 0.68516 - eval_f1: 0.6844490693226439 - eval_precision: 0.6839923350377614 - eval_recall: 0.68516
{}
TungLe7661/BERT650
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:11:58+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/WesPro/PsykidelicLlama3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | 
[GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "WesPro/PsykidelicLlama3", "quantized_by": "mradermacher"}
mradermacher/PsykidelicLlama3-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:WesPro/PsykidelicLlama3", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:12:39+00:00
text-generation
null
# noeljacob/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo noeljacob/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF --model meta-llama-3-8b-instruct.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo noeljacob/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF --model meta-llama-3-8b-instruct.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m meta-llama-3-8b-instruct.Q4_K_M.gguf -n 128 ```
{"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. 
You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. 
Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). 
Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. 
This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. 
Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. 
Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit", "widget": [{"example_title": "Hello", "messages": [{"role": "user", "content": "Hey my name is Julien! How are you?"}]}, {"example_title": "Winter holidays", "messages": [{"role": "system", "content": "You are a helpful and honest assistant. 
Please, respond concisely and truthfully."}, {"role": "user", "content": "Can you recommend a good destination for Winter holidays?"}]}, {"example_title": "Programming assistant", "messages": [{"role": "system", "content": "You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully."}, {"role": "user", "content": "Write a function that computes the nth fibonacci number."}]}], "inference": {"parameters": {"max_new_tokens": 300, "stop": ["<|end_of_text|>", "<|eot_id|>"]}}}
NoelJacob/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-24T14:12:43+00:00
null
null
{}
LAKSHM11-G/pegasus-arxiv-pegasus_article_summarization1
null
[ "region:us" ]
null
2024-04-24T14:13:04+00:00
text-generation
transformers
# Uploaded model

- **Developed by:** gutsartificial
- **License:** apache-2.0
- **Finetuned from model:** NousResearch/Hermes-2-Pro-Mistral-7B

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "NousResearch/Hermes-2-Pro-Mistral-7B"}
gutsartificial/hermes-2-pro-entity-cleaning
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:14:02+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# HSE_PRAVO_complexity_classifier_large

This model is a fine-tuned version of [ai-forever/ruBert-large](https://huggingface.co/ai-forever/ruBert-large) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 200

### Training results

### Framework versions

- PEFT 0.10.1.dev0
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
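The hyperparameters above combine a per-device batch of 3 with 10 gradient-accumulation steps to reach the listed total train batch of 30. A minimal sketch (hypothetical, not taken from the training code) showing why accumulating scaled micro-batch gradients reproduces the gradient of one large batch:

```python
# Sketch: gradient accumulation over micro-batches equals the gradient
# of one large batch when each micro-batch loss is a mean and each
# micro-batch gradient is scaled by 1/accumulation_steps.
def grad_sq_loss(w, xs, ys):
    # gradient of mean squared error for a 1-D linear model y = w * x
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def accumulated_grad(w, xs, ys, micro_batch_size, accumulation_steps):
    total = 0.0
    for step in range(accumulation_steps):
        lo = step * micro_batch_size
        mb_x = xs[lo:lo + micro_batch_size]
        mb_y = ys[lo:lo + micro_batch_size]
        # scale each micro-batch gradient by 1/accumulation_steps
        total += grad_sq_loss(w, mb_x, mb_y) / accumulation_steps
    return total

xs = [float(i) for i in range(30)]          # total batch of 30 samples
ys = [2.0 * x + 1.0 for x in xs]
full = grad_sq_loss(0.5, xs, ys)            # one pass over all 30
accum = accumulated_grad(0.5, xs, ys, 3, 10)  # 10 micro-batches of 3
assert abs(full - accum) < 1e-9
```

This is why the card can report `total_train_batch_size: 30` while only 3 samples ever occupy device memory at once.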
{"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-large", "model-index": [{"name": "HSE_PRAVO_complexity_classifier_large", "results": []}]}
marcus2000/HSE_PRAVO_complexity_classifier_large
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:ai-forever/ruBert-large", "region:us" ]
null
2024-04-24T14:14:40+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
kangXn/enta-sb-mde
null
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:15:01+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
IbrahimSalah/Quran_syll_to_word
null
[ "transformers", "safetensors", "mt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T14:15:16+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Evidence_Retrieval_model_vi_mrc

This model is a fine-tuned version of [nguyenvulebinh/vi-mrc-base](https://huggingface.co/nguyenvulebinh/vi-mrc-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.4764

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5132        | 1.0   | 524  | 0.3571          |
| 0.2579        | 2.0   | 1048 | 0.3149          |
| 0.1958        | 3.0   | 1572 | 0.3707          |
| 0.1473        | 4.0   | 2096 | 0.4362          |
| 0.1066        | 5.0   | 2620 | 0.4764          |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
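The training results show validation loss bottoming out at epoch 2 (0.3149) and climbing afterwards while training loss keeps falling, a classic overfitting signature. A hypothetical sketch of selecting the best checkpoint from such an epoch-to-loss log:

```python
# Sketch: pick the checkpoint with the lowest validation loss from an
# epoch -> eval_loss log (values copied from the table above).
eval_log = {1: 0.3571, 2: 0.3149, 3: 0.3707, 4: 0.4362, 5: 0.4764}

best_epoch = min(eval_log, key=eval_log.get)
assert best_epoch == 2  # later epochs overfit: validation loss rises again
```

Hugging Face `Trainer` can do this automatically via `load_best_model_at_end=True` with `metric_for_best_model="eval_loss"`.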
{"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "base_model": "nguyenvulebinh/vi-mrc-base", "model-index": [{"name": "Evidence_Retrieval_model_vi_mrc", "results": []}]}
tringuyen-uit/Evidence_Retrieval_model_vi_mrc
null
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "base_model:nguyenvulebinh/vi-mrc-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:15:52+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
nicolarsen/LLama3-8B-V1
null
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-24T14:15:53+00:00
null
null
{"license": "mit"}
z-rx/testllm
null
[ "license:mit", "region:us" ]
null
2024-04-24T14:16:56+00:00
null
transformers
{"license": "mit"}
RebaiMed/Bertopic-Influencers
null
[ "transformers", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:17:46+00:00
null
null
{"license": "llama2"}
VietnamAIHub/GPTViet_32K_ContextLength_llama2_based
null
[ "license:llama2", "region:us" ]
null
2024-04-24T14:17:53+00:00
text-generation
transformers
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [arcee-ai/sec-mistral-7b-instruct-1.6-epoch](https://huggingface.co/arcee-ai/sec-mistral-7b-instruct-1.6-epoch)
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: arcee-ai/sec-mistral-7b-instruct-1.6-epoch
        layer_range: [0, 32]
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [0, 32]
merge_method: slerp
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
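For intuition, SLERP interpolates along the arc between two weight vectors rather than along a straight line, so the interpolated tensor keeps a sensible norm. A minimal pure-Python sketch of the interpolation step (an illustration only, not mergekit's exact implementation):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors."""
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    # angle between the (normalized) vectors
    dot = sum(a * b for a, b in zip(v0, v1)) / (n0 * n1)
    theta = math.acos(max(-1.0, min(1.0, dot)))
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# halfway between two orthogonal unit vectors stays on the unit circle
mid = slerp(0.5, [1.0, 0.0], [0.0, 1.0])
```

With `t = 0.5` and orthogonal unit inputs, `mid` still has unit norm, whereas plain averaging would shrink it; the `t` schedule in the config above varies this blend per layer and per module.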
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "cognitivecomputations/dolphin-2.8-mistral-7b-v02"]}
llm-wizard/NousWizard
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T14:19:31+00:00
text-generation
transformers
# LewdPlay-8B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

The new EVOLVE merge method was used (on MMLU specifically), see below for more information!

Unholy was used for uncensoring, Roleplay Llama 3 for the DPO train he got on top, and LewdPlay for the... lewd side.

## Prompt template: Llama3

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 as a base.

### Models Merged

The following models were included in the merge:
* ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
* ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
  normalize: 0.0
slices:
- sources:
  - layer_range: [0, 4]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 1.0
      weight: 0.6861808716092435
  - layer_range: [0, 4]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 0.6628290134113985
      weight: 0.5815923052193855
  - layer_range: [0, 4]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 1.0
      weight: 0.5113886163963061
- sources:
  - layer_range: [4, 8]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 0.892655547455918
      weight: 0.038732602391021484
  - layer_range: [4, 8]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 1.0
      weight: 0.1982145486303527
  - layer_range: [4, 8]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 1.0
      weight: 0.6843011350690802
- sources:
  - layer_range: [8, 12]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 0.7817511027396784
      weight: 0.13053333213489704
  - layer_range: [8, 12]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 0.6963703515864826
      weight: 0.20525481492667985
  - layer_range: [8, 12]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 0.6983086326765777
      weight: 0.5843953969574106
- sources:
  - layer_range: [12, 16]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 0.9632895768462915
      weight: 0.2101146706607748
  - layer_range: [12, 16]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 0.597557434542081
      weight: 0.6728172621848589
  - layer_range: [12, 16]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 0.756263557607837
      weight: 0.2581423726361908
- sources:
  - layer_range: [16, 20]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 1.0
      weight: 0.2116035543552448
  - layer_range: [16, 20]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 1.0
      weight: 0.22654226422958418
  - layer_range: [16, 20]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 0.8925914810507647
      weight: 0.42243766315440867
- sources:
  - layer_range: [20, 24]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 0.7697608089825734
      weight: 0.1535118632140203
  - layer_range: [20, 24]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 0.9886758076773643
      weight: 0.3305040603868546
  - layer_range: [20, 24]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 1.0
      weight: 0.40670083428654535
- sources:
  - layer_range: [24, 28]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 1.0
      weight: 0.4542810478500622
  - layer_range: [24, 28]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 0.8330662483310117
      weight: 0.2587495367324508
  - layer_range: [24, 28]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 0.9845313983551542
      weight: 0.40378452705975915
- sources:
  - layer_range: [28, 32]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 1.0
      weight: 0.2951962192288415
  - layer_range: [28, 32]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 0.960315594933433
      weight: 0.13142971773782525
  - layer_range: [28, 32]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 1.0
      weight: 0.30838472094518804
```

## Support

If you want to support me, you can [here](https://ko-fi.com/undiai).
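For intuition, the DARE half of `dare_ties` randomly drops each delta parameter (the difference from the base model) with probability `1 - density` and rescales the survivors by `1/density`, so the expected delta is unchanged. A minimal sketch of that step (illustrative only, not mergekit's implementation):

```python
import random

def dare_drop(delta, density, seed=0):
    """Keep each delta with probability `density`; rescale kept values by 1/density."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

# density 1.0 keeps everything unchanged; density 0.5 zeroes ~half and doubles the rest
full = dare_drop([0.5, -0.25, 0.1], 1.0)
sparse = dare_drop([1.0, 1.0, 1.0, 1.0], 0.5, seed=1)
```

This is why many of the `density: 1.0` entries above act as plain TIES merging for that tensor group, while lower densities sparsify that model's contribution.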
{"license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["vicgalle/Roleplay-Llama-3-8B", "Undi95/Llama-3-Unholy-8B-e4", "Undi95/Llama-3-LewdPlay-8B"]}
Undi95/Llama-3-LewdPlay-8B-evo
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:vicgalle/Roleplay-Llama-3-8B", "base_model:Undi95/Llama-3-Unholy-8B-e4", "base_model:Undi95/Llama-3-LewdPlay-8B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T14:19:40+00:00
null
null
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{}
ayushyoddha/ayush_raj
null
[ "arxiv:1910.09700", "region:us" ]
null
2024-04-24T14:19:44+00:00
null
null
{}
bboulb/my_awesome_eli5_clm-model
null
[ "region:us" ]
null
2024-04-24T14:19:45+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Lakoc/voxpopuli_bpe100_cz
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:20:04+00:00
null
null
{}
LAKSHM11-G/pegasus-arxiv-pegasus_article_summarization2
null
[ "region:us" ]
null
2024-04-24T14:21:15+00:00
null
peft
## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0
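For reference, the same settings can be expressed with the `transformers` `BitsAndBytesConfig` helper (a sketch assuming a `transformers` + `bitsandbytes` training setup; the actual training script is not shown in this card):

```python
import torch
from transformers import BitsAndBytesConfig

# mirrors the quantization config listed above: 4-bit NF4, fp16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```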
{"library_name": "peft"}
Rimyy/TentativeLlamaGsm1ep
null
[ "peft", "region:us" ]
null
2024-04-24T14:21:50+00:00
text-classification
transformers
{}
nnngoc/ms-marco-MiniLM-L-6-v2-32-2M-2
null
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us", "has_space" ]
null
2024-04-24T14:22:11+00:00
null
null
{}
wambugu1738/Phi-3-mini-128k-instruct-GGUF
null
[ "gguf", "region:us" ]
null
2024-04-24T14:23:30+00:00
null
null
{}
YoungPanda/gguf_3k_llama3
null
[ "gguf", "region:us" ]
null
2024-04-24T14:24:14+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Grayx/sad_llama_38
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T14:26:17+00:00
null
null
{}
YUDIdi/qwen1.5-dige
null
[ "gguf", "region:us" ]
null
2024-04-24T14:26:20+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# RM-HH-Gemma_helpful_human_loraR64_20000_gemma2b_shuffleTrue_extractchosenFalse

This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6198
- Accuracy: 0.6540

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7745 | 0.06 | 250 | 0.7362 | 0.5088 |
| 0.6966 | 0.11 | 500 | 0.7087 | 0.5498 |
| 0.6929 | 0.17 | 750 | 0.6929 | 0.5814 |
| 0.702 | 0.22 | 1000 | 0.6814 | 0.5939 |
| 0.6633 | 0.28 | 1250 | 0.6735 | 0.6049 |
| 0.6529 | 0.33 | 1500 | 0.6669 | 0.6094 |
| 0.6487 | 0.39 | 1750 | 0.6610 | 0.6189 |
| 0.6737 | 0.45 | 2000 | 0.6536 | 0.6254 |
| 0.6314 | 0.5 | 2250 | 0.6501 | 0.6269 |
| 0.6474 | 0.56 | 2500 | 0.6454 | 0.6304 |
| 0.6225 | 0.61 | 2750 | 0.6429 | 0.6335 |
| 0.6338 | 0.67 | 3000 | 0.6393 | 0.6360 |
| 0.6268 | 0.72 | 3250 | 0.6360 | 0.6400 |
| 0.633 | 0.78 | 3500 | 0.6346 | 0.6425 |
| 0.641 | 0.83 | 3750 | 0.6305 | 0.6440 |
| 0.6439 | 0.89 | 4000 | 0.6286 | 0.6470 |
| 0.6123 | 0.95 | 4250 | 0.6274 | 0.6475 |
| 0.6082 | 1.0 | 4500 | 0.6277 | 0.6535 |
| 0.6275 | 1.06 | 4750 | 0.6267 | 0.6540 |
| 0.589 | 1.11 | 5000 | 0.6276 | 0.6535 |
| 0.588 | 1.17 | 5250 | 0.6297 | 0.6550 |
| 0.6126 | 1.22 | 5500 | 0.6305 | 0.6535 |
| 0.6216 | 1.28 | 5750 | 0.6286 | 0.6525 |
| 0.6071 | 1.34 | 6000 | 0.6269 | 0.6515 |
| 0.6063 | 1.39 | 6250 | 0.6271 | 0.6505 |
| 0.6166 | 1.45 | 6500 | 0.6246 | 0.6525 |
| 0.6076 | 1.5 | 6750 | 0.6230 | 0.6565 |
| 0.6007 | 1.56 | 7000 | 0.6233 | 0.6545 |
| 0.6452 | 1.61 | 7250 | 0.6205 | 0.6540 |
| 0.5932 | 1.67 | 7500 | 0.6207 | 0.6535 |
| 0.6093 | 1.72 | 7750 | 0.6207 | 0.6530 |
| 0.6183 | 1.78 | 8000 | 0.6206 | 0.6535 |
| 0.6244 | 1.84 | 8250 | 0.6200 | 0.6545 |
| 0.6183 | 1.89 | 8500 | 0.6199 | 0.6545 |
| 0.6281 | 1.95 | 8750 | 0.6198 | 0.6540 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
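As background, the TRL reward trainer named in this card's tags optimizes a pairwise objective: the model should score the chosen response above the rejected one. A minimal sketch of that loss for a single pair (illustrative only, not the training code):

```python
import math

def pairwise_reward_loss(r_chosen, r_rejected):
    """-log(sigmoid(r_chosen - r_rejected)): small when chosen outscores rejected."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# equal rewards give loss ln(2); a large positive margin drives the loss toward 0
tie_loss = pairwise_reward_loss(0.0, 0.0)
easy_loss = pairwise_reward_loss(3.0, -3.0)
```

The accuracy column above is the fraction of evaluation pairs where the chosen response receives the higher reward.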
{"license": "gemma", "library_name": "peft", "tags": ["trl", "reward-trainer", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/gemma-2b", "model-index": [{"name": "RM-HH-Gemma_helpful_human_loraR64_20000_gemma2b_shuffleTrue_extractchosenFalse", "results": []}]}
Holarissun/RM-HH-Gemma_helpful_human_loraR64_20000_gemma2b_shuffleTrue_extractchosenFalse
null
[ "peft", "safetensors", "trl", "reward-trainer", "generated_from_trainer", "base_model:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-04-24T14:26:37+00:00
text-generation
transformers
# Opus-Samantha-Llama-3-8B

Opus-Samantha-Llama-3-8B is an SFT model made with [AutoSloth](https://colab.research.google.com/drive/1Zo0sVEb2lqdsUm9dy2PTzGySxdF9CNkc#scrollTo=MmLkhAjzYyJ4) by [macadeliccc](https://huggingface.co/macadeliccc)

Trained on 1xL4 for 1 hour

_model is currently very nsfw. uneven distribution of subjects in dataset. will be back with v2_

## Process

- Original Model: [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b)
- Dataset: [macadeliccc/opus_samantha](https://huggingface.co/datasets/macadeliccc/opus_samantha)
- Learning Rate: 2e-05
- Steps: 2772
- Warmup Steps: 277
- Per Device Train Batch Size: 2
- Gradient Accumulation Steps: 1
- Optimizer: paged_adamw_8bit
- Max Sequence Length: 4096
- Max Prompt Length: 2048
- Max Length: 2048

## 💻 Usage

```python
!pip install -qU transformers torch

import transformers
import torch

model_id = "macadeliccc/Opus-Samantha-Llama-3-8B"

# build a text-generation pipeline for the model
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

pipeline("Hey how are you doing today?")
```

<div align="center">
  <img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made%20with%20unsloth.png" height="50" align="center" />
</div>
{"license": "apache-2.0", "datasets": ["macadeliccc/opus_samantha"]}
macadeliccc/Opus-Samantha-Llama-3-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "dataset:macadeliccc/opus_samantha", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T14:26:38+00:00
null
fastai
# Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
{"tags": ["fastai"]}
cesaenv/futurama
null
[ "fastai", "has_space", "region:us" ]
null
2024-04-24T14:26:46+00:00
null
transformers
# Uploaded model

- **Developed by:** nicolarsen
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
nicolarsen/LLama3-8B-Meoo
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:27:04+00:00
token-classification
transformers
{}
adisur/my_awesome_wnut_model
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:27:31+00:00
null
null
{}
Seqath/Danny
null
[ "region:us" ]
null
2024-04-24T14:27:59+00:00
null
null
{}
Balaramdas/Balaram
null
[ "region:us" ]
null
2024-04-24T14:28:25+00:00
text-classification
transformers
{}
h1alexbel/results
null
[ "transformers", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:28:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# llava-1.5-7b-hf-med

This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
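The linear learning-rate schedule named in the hyperparameters can be sketched as follows — a minimal illustration only, assuming decay from the peak rate to zero over the run; the total step count here is hypothetical, since the card does not state it, and the exact Hugging Face scheduler may differ in detail:

```python
def linear_lr(step, total_steps, peak_lr=1.4e-05):
    """Linearly decay the learning rate from peak_lr down to 0 over total_steps."""
    return peak_lr * max(0.0, 1.0 - step / total_steps)

# Illustrative step counts (hypothetical, not from the card)
assert linear_lr(0, 1000) == 1.4e-05               # start of training: full peak rate
assert abs(linear_lr(500, 1000) - 7e-06) < 1e-12   # halfway: half the peak rate
assert linear_lr(1000, 1000) == 0.0                # end of training: zero
```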
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "llava-hf/llava-1.5-7b-hf", "model-index": [{"name": "llava-1.5-7b-hf-med", "results": []}]}
hari02/llava-1.5-7b-hf-med
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:llava-hf/llava-1.5-7b-hf", "region:us" ]
null
2024-04-24T14:28:30+00:00
text-generation
transformers
# Medical-Llama3-8B-4bit: Fine-Tuned Llama3 for Medical Q&A

[![](future.jpg)](https://ruslanmv.com/)

A medical fine-tuned version of LLAMA-3-8B, quantized to 4 bits, trained on common open-source datasets and showing improvements on multilingual tasks. Standard bitsandbytes post-fine-tuning quantization is applied, reducing the compute time and memory required to run the model. The overall architecture is LLAMA-3 based.

This repository provides a fine-tuned version of the powerful Llama3 8B model, specifically designed to answer medical questions in an informative way. It leverages the rich knowledge contained in the AI Medical Chatbot dataset ([ruslanmv/ai-medical-chatbot](https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot)).

**Model & Development**

- **Developed by:** ruslanmv
- **License:** Apache-2.0
- **Finetuned from model:** meta-llama/Meta-Llama-3-8B

**Key Features**

- **Medical Focus:** Optimized to address health-related inquiries.
- **Knowledge Base:** Trained on a comprehensive medical chatbot dataset.
- **Text Generation:** Generates informative and potentially helpful responses.

**Installation**

This model is accessible through the Hugging Face Transformers library. Install it using pip:

```bash
pip install git+https://github.com/huggingface/accelerate.git
pip install git+https://github.com/huggingface/transformers.git
pip install bitsandbytes
```

**Usage Example**

Here's a Python code snippet demonstrating how to interact with the `llama3-8B-medical` model and generate answers to your medical questions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# Load tokenizer and model
model_id = "ruslanmv/llama3-8B-medical"
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config)

def create_prompt(user_query):
    B_INST, E_INST = "<s>[INST]", "[/INST]"
    B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
    DEFAULT_SYSTEM_PROMPT = """\
You are an AI Medical Chatbot Assistant, provide comprehensive and informative responses to your inquiries.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct.
If you don't know the answer to a question, please don't share false information."""
    SYSTEM_PROMPT = B_SYS + DEFAULT_SYSTEM_PROMPT + E_SYS
    instruction = f"User asks: {user_query}\n"
    prompt = B_INST + SYSTEM_PROMPT + instruction + E_INST
    return prompt.strip()

def generate_text(model, tokenizer, user_query, max_length=200, temperature=0.8, num_return_sequences=1):
    prompt = create_prompt(user_query)

    # Tokenize the prompt and move input_ids to the same device as the model
    input_ids = tokenizer.encode(prompt, return_tensors="pt").to(device)

    # Generate text
    output = model.generate(
        input_ids=input_ids,
        max_length=max_length,
        temperature=temperature,
        num_return_sequences=num_return_sequences,
        pad_token_id=tokenizer.eos_token_id,  # Set pad token to the end-of-sequence token
        do_sample=True
    )

    # Decode the generated output
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
    # Split the generated text on the prompt and keep the portion after it
    generated_text = generated_text.split(prompt)[-1].strip()
    return generated_text

# Example usage
# - Context: First describe your problem.
# - Question: Then ask the question.
user_query = "I'm a 35-year-old male experiencing symptoms like fatigue, increased sensitivity to cold, and dry, itchy skin. Could these be indicative of hypothyroidism?"
generated_text = generate_text(model, tokenizer, user_query)
print(generated_text)
```

An example answer:

```
Yes, it is possible. Hypothyroidism can present symptoms like increased sensitivity to cold, dry skin, and fatigue. These symptoms are characteristic of hypothyroidism. I recommend consulting with a healthcare provider.
2. Hypothyroidism can present symptoms like fever, increased sensitivity to cold, dry skin, and fatigue. These symptoms are characteristic of hypothyroidism.
```

**Important Note**

This model is intended for informational purposes only and should not be used as a substitute for professional medical advice. Always consult with a qualified healthcare provider for any medical concerns.

**License**

This model is distributed under the Apache License 2.0 (see LICENSE file for details).

**Contributing**

We welcome contributions to this repository! If you have improvements or suggestions, feel free to create a pull request.

**Disclaimer**

While we strive to provide informative responses, the accuracy of the model's outputs cannot be guaranteed. It is crucial to consult a doctor or other healthcare professional for definitive medical advice.
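For reference, the prompt template built by `create_prompt` in the usage example can be exercised on its own — pure string formatting, no model download needed. `build_prompt` below is an illustrative standalone copy, not part of the released code:

```python
B_INST, E_INST = "<s>[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system_prompt: str, user_query: str) -> str:
    """Wrap a system prompt and a user question in the [INST]/<<SYS>> template."""
    return (B_INST + B_SYS + system_prompt + E_SYS
            + f"User asks: {user_query}\n" + E_INST).strip()

prompt = build_prompt("You are an AI Medical Chatbot Assistant.",
                      "What are common symptoms of hypothyroidism?")
assert prompt.startswith("<s>[INST]<<SYS>>")
assert prompt.endswith("[/INST]")
assert "User asks: What are common symptoms of hypothyroidism?" in prompt
```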
{"language": "en", "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "ruslanmv", "llama", "trl"], "datasets": ["ruslanmv/ai-medical-chatbot"], "base_model": "meta-llama/Meta-Llama-3-8B"}
ruslanmv/llama3-8B-medical
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "ruslanmv", "trl", "en", "dataset:ruslanmv/ai-medical-chatbot", "base_model:meta-llama/Meta-Llama-3-8B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-24T14:29:11+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Yi-6B-zhihu7

This model is a fine-tuned version of [01-ai/Yi-6B](https://huggingface.co/01-ai/Yi-6B) on the zhihu dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5970

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6831        | 1.0   | 96   | 2.6346          |
| 2.6684        | 2.0   | 192  | 2.6288          |
| 2.687         | 3.0   | 288  | 2.6192          |
| 2.6597        | 4.0   | 384  | 2.6109          |
| 2.6019        | 5.0   | 480  | 2.6054          |
| 2.6118        | 6.0   | 576  | 2.6022          |
| 2.7286        | 7.0   | 672  | 2.6001          |
| 2.6341        | 8.0   | 768  | 2.5987          |
| 2.572         | 9.0   | 864  | 2.5979          |
| 2.622         | 10.0  | 960  | 2.5974          |
| 2.6404        | 11.0  | 1056 | 2.5972          |
| 2.6607        | 12.0  | 1152 | 2.5971          |
| 2.5324        | 13.0  | 1248 | 2.5971          |
| 2.5472        | 14.0  | 1344 | 2.5970          |
| 2.539         | 15.0  | 1440 | 2.5970          |
| 2.5757        | 16.0  | 1536 | 2.5971          |
| 2.6495        | 17.0  | 1632 | 2.5970          |
| 2.5647        | 18.0  | 1728 | 2.5970          |
| 2.5605        | 19.0  | 1824 | 2.5970          |
| 2.6608        | 20.0  | 1920 | 2.5970          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.2.2+cu118
- Datasets 2.14.6
- Tokenizers 0.15.2
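The cosine schedule with 10% warmup listed in the hyperparameters can be sketched as follows — a minimal illustration only, assuming linear warmup to the peak rate followed by cosine decay to zero; the step count comes from the results table (20 epochs x 96 steps), and the exact Hugging Face scheduler implementation may differ in detail:

```python
import math

def cosine_lr_with_warmup(step, total_steps, peak_lr=5e-06, warmup_ratio=0.1):
    """Linear warmup to peak_lr over the first warmup_ratio of steps, then cosine decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 1920  # 20 epochs x 96 steps per epoch, as in the results table
assert cosine_lr_with_warmup(0, total) == 0.0        # start: warming up from zero
assert cosine_lr_with_warmup(192, total) == 5e-06    # end of 10% warmup: peak rate
assert cosine_lr_with_warmup(1920, total) < 1e-12    # end of training: decayed to ~0
```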
{"license": "other", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["zhihu"], "base_model": "01-ai/Yi-6B", "model-index": [{"name": "Yi-6B-zhihu7", "results": []}]}
yyx123/Yi-6B-zhihu7
null
[ "peft", "safetensors", "llama", "alignment-handbook", "generated_from_trainer", "trl", "sft", "dataset:zhihu", "base_model:01-ai/Yi-6B", "license:other", "4-bit", "region:us" ]
null
2024-04-24T14:29:36+00:00