pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1–900k) | metadata (stringlengths, 2–438k) | id (stringlengths, 5–122) | last_modified (null) | tags (sequencelengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25–25) | arxiv (sequencelengths, 0–201) | languages (sequencelengths, 0–1.83k) | tags_str (stringlengths, 17–9.34k) | text_str (stringlengths, 0–389k) | text_lists (sequencelengths, 0–722) | processed_texts (sequencelengths, 1–723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
feature-extraction | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bias_identificaiton1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
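Pending details from the author, here is a minimal feature-extraction sketch. It is hedged: it assumes the TF weights load through `transformers` (which the repo's `tf` tag suggests), and the example sentence is arbitrary.
```python
from transformers import AutoTokenizer, TFAutoModel

# Model ID taken from this card's metadata
model_id = "PriyaPatel/bias_identificaiton1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModel.from_pretrained(model_id)

inputs = tokenizer("An example sentence to embed.", return_tensors="tf")
outputs = model(inputs)
embeddings = outputs.last_hidden_state  # shape: (batch, seq_len, hidden_size)
```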
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "bert-base-uncased", "model-index": [{"name": "bias_identificaiton1", "results": []}]} | PriyaPatel/bias_identificaiton1 | null | [
"transformers",
"tf",
"bert",
"feature-extraction",
"generated_from_keras_callback",
"base_model:bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:10:24+00:00 | [] | [] | TAGS
#transformers #tf #bert #feature-extraction #generated_from_keras_callback #base_model-bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
|
# bias_identificaiton1
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.15.2
| [
"# bias_identificaiton1\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- TensorFlow 2.15.0\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tf #bert #feature-extraction #generated_from_keras_callback #base_model-bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bias_identificaiton1\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- TensorFlow 2.15.0\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | # BioMistral-MedMNX
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [johnsnowlabs/JSL-MedMNX-7B](https://huggingface.co/johnsnowlabs/JSL-MedMNX-7B) as the base model.
### Models Merged
The following models were included in the merge:
* [BioMistral/BioMistral-7B-DARE](https://huggingface.co/BioMistral/BioMistral-7B-DARE)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: johnsnowlabs/JSL-MedMNX-7B
parameters:
density: 0.53
weight: 0.4
- model: BioMistral/BioMistral-7B-DARE
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
tokenizer_source: union
base_model: johnsnowlabs/JSL-MedMNX-7B
parameters:
int8_mask: true
dtype: bfloat16
```
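A hedged sketch of loading the merged model for generation, using standard `transformers` calls; the prompt and generation length are illustrative, not from this card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BioMistral/BioMistral-MedMNX"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("What are the first-line treatments for hypertension?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```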
| {"license": "cc-by-nc-nd-4.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["johnsnowlabs/JSL-MedMNX-7B", "BioMistral/BioMistral-7B-DARE"]} | BioMistral/BioMistral-MedMNX | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:johnsnowlabs/JSL-MedMNX-7B",
"base_model:BioMistral/BioMistral-7B-DARE",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T17:10:46+00:00 | [
"2311.03099",
"2306.01708"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-johnsnowlabs/JSL-MedMNX-7B #base_model-BioMistral/BioMistral-7B-DARE #license-cc-by-nc-nd-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # BioMistral-MedMNX
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the DARE TIES merge method, with johnsnowlabs/JSL-MedMNX-7B as the base model.
### Models Merged
The following models were included in the merge:
* BioMistral/BioMistral-7B-DARE
### Configuration
The following YAML configuration was used to produce this model:
| [
"# BioMistral-MedMNX\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using johnsnowlabs/JSL-MedMNX-7B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* BioMistral/BioMistral-7B-DARE",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-johnsnowlabs/JSL-MedMNX-7B #base_model-BioMistral/BioMistral-7B-DARE #license-cc-by-nc-nd-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# BioMistral-MedMNX\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using johnsnowlabs/JSL-MedMNX-7B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* BioMistral/BioMistral-7B-DARE",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# yujiepan/llama-3-tiny-random-gptq-w4
4-bit weight-only quantization by AutoGPTQ of [yujiepan/llama-3-tiny-random](https://huggingface.co/yujiepan/llama-3-tiny-random), produced with the snippet below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
import torch
model_id = "yujiepan/llama-3-tiny-random"
tokenizer = AutoTokenizer.from_pretrained(model_id)
quantization_config = GPTQConfig(
bits=4, group_size=-1,
dataset="c4",
tokenizer=tokenizer,
)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
quantization_config=quantization_config,
)
```
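Continuing from the snippet above, a quick smoke test of the quantized model; the prompt and output length are arbitrary:
```python
inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```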
| {"library_name": "transformers", "tags": []} | yujiepan/llama-3-tiny-random-gptq-w4 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-20T17:13:25+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# yujiepan/llama-3-tiny-random-gptq-w4
4-bit weight only quantization by AutoGPTQ on yujiepan/llama-3-tiny-random
| [
"# yujiepan/llama-3-tiny-random-gptq-w4\n\n4-bit weight only quantization by AutoGPTQ on yujiepan/llama-3-tiny-random"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# yujiepan/llama-3-tiny-random-gptq-w4\n\n4-bit weight only quantization by AutoGPTQ on yujiepan/llama-3-tiny-random"
] |
text-classification | transformers | ## Metrics
- loss: 0.9470
- accuracy: 0.8366
- precision: 0.8360
- recall: 0.8366
- precision_macro: 0.8141
- recall_macro: 0.7899
- macro_fpr: 0.0143
- weighted_fpr: 0.0138
- weighted_specificity: 0.9781
- macro_specificity: 0.9876
- weighted_sensitivity: 0.8366
- macro_sensitivity: 0.7899
- f1_micro: 0.8366
- f1_macro: 0.7978
- f1_weighted: 0.8350
- runtime: 21.6449
- samples_per_second: 59.6450
- steps_per_second: 7.4840
# InLegal-legal-merge-ties-d-0-InLegal-w-1
InLegal-legal-merge-ties-d-0-InLegal-w-1 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [xshubhamx/InLegalBERT](https://huggingface.co/xshubhamx/InLegalBERT)
* [xshubhamx/legal-bert-base-uncased](https://huggingface.co/xshubhamx/legal-bert-base-uncased)
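A hedged inference sketch for the merged classifier; the label set comes from whatever the checkpoint's config defines, and the input sentence is illustrative:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="xshubhamx/InLegal-legal-merge-ties-d-0-InLegal-w-1")
print(classifier("The appellant contends that the tribunal misapplied the limitation period."))
```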
## 🧩 Configuration
```yaml
models:
- model: xshubhamx/InLegalBERT
parameters:
density: 0.53
weight: 0
- model: xshubhamx/legal-bert-base-uncased
parameters:
density: 0.53
weight: 1
merge_method: ties
base_model: xshubhamx/InLegalBERT
parameters:
normalize: false
int8_mask: true
dtype: float16
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "xshubhamx/InLegalBERT", "xshubhamx/legal-bert-base-uncased"]} | xshubhamx/InLegal-legal-merge-ties-d-0-InLegal-w-1 | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"merge",
"mergekit",
"lazymergekit",
"xshubhamx/InLegalBERT",
"xshubhamx/legal-bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:14:08+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #merge #mergekit #lazymergekit #xshubhamx/InLegalBERT #xshubhamx/legal-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ## Metrics
- loss: 0.9470
- accuracy: 0.8366
- precision: 0.8360
- recall: 0.8366
- precision_macro: 0.8141
- recall_macro: 0.7899
- macro_fpr: 0.0143
- weighted_fpr: 0.0138
- weighted_specificity: 0.9781
- macro_specificity: 0.9876
- weighted_sensitivity: 0.8366
- macro_sensitivity: 0.7899
- f1_micro: 0.8366
- f1_macro: 0.7978
- f1_weighted: 0.8350
- runtime: 21.6449
- samples_per_second: 59.6450
- steps_per_second: 7.4840
# InLegal-legal-merge-ties-d-0-InLegal-w-1
InLegal-legal-merge-ties-d-0-InLegal-w-1 is a merge of the following models using mergekit:
* xshubhamx/InLegalBERT
* xshubhamx/legal-bert-base-uncased
## Configuration
| [
"## Metrics\n\n- loss: 0.9470\n- accuracy: 0.8366\n- precision: 0.8360\n- recall: 0.8366\n- precision_macro: 0.8141\n- recall_macro: 0.7899\n- macro_fpr: 0.0143\n- weighted_fpr: 0.0138\n- weighted_specificity: 0.9781\n- macro_specificity: 0.9876\n- weighted_sensitivity: 0.8366\n- macro_sensitivity: 0.7899\n- f1_micro: 0.8366\n- f1_macro: 0.7978\n- f1_weighted: 0.8350\n- runtime: 21.6449\n- samples_per_second: 59.6450\n- steps_per_second: 7.4840",
"# InLegal-legal-merge-ties-d-0-InLegal-w-1\n\nInLegal-legal-merge-ties-d-0-InLegal-w-1 is a merge of the following models using mergekit:\n* xshubhamx/InLegalBERT\n* xshubhamx/legal-bert-base-uncased",
"## Configuration"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #merge #mergekit #lazymergekit #xshubhamx/InLegalBERT #xshubhamx/legal-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## Metrics\n\n- loss: 0.9470\n- accuracy: 0.8366\n- precision: 0.8360\n- recall: 0.8366\n- precision_macro: 0.8141\n- recall_macro: 0.7899\n- macro_fpr: 0.0143\n- weighted_fpr: 0.0138\n- weighted_specificity: 0.9781\n- macro_specificity: 0.9876\n- weighted_sensitivity: 0.8366\n- macro_sensitivity: 0.7899\n- f1_micro: 0.8366\n- f1_macro: 0.7978\n- f1_weighted: 0.8350\n- runtime: 21.6449\n- samples_per_second: 59.6450\n- steps_per_second: 7.4840",
"# InLegal-legal-merge-ties-d-0-InLegal-w-1\n\nInLegal-legal-merge-ties-d-0-InLegal-w-1 is a merge of the following models using mergekit:\n* xshubhamx/InLegalBERT\n* xshubhamx/legal-bert-base-uncased",
"## Configuration"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** reallad
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
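A minimal loading sketch with plain `transformers`. This is hedged: the chat template is assumed to be inherited from the Llama-3 Instruct base, and the message and generation length are illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "reallad/blopsy-1.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize the benefits of unit testing."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```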
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | reallad/blopsy-1.3 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:15:15+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: reallad
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: reallad\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: reallad\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
feature-extraction | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bias_identificaiton0
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_keras_callback"], "model-index": [{"name": "bias_identificaiton0", "results": []}]} | PriyaPatel/bias_identificaiton0 | null | [
"transformers",
"tf",
"bert",
"feature-extraction",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:15:25+00:00 | [] | [] | TAGS
#transformers #tf #bert #feature-extraction #generated_from_keras_callback #endpoints_compatible #region-us
|
# bias_identificaiton0
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.15.2
| [
"# bias_identificaiton0\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- TensorFlow 2.15.0\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tf #bert #feature-extraction #generated_from_keras_callback #endpoints_compatible #region-us \n",
"# bias_identificaiton0\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- TensorFlow 2.15.0\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DialoGPT-small - bnb 4bits
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/DialoGPT-small/
Original model description:
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multi-turn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the responses generated by DialoGPT are comparable in quality to human responses under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
* Multi-turn generation examples from an interactive environment:
|Role | Response |
|---------|--------|
|User | Does money buy happiness? |
| Bot | Depends how much money you spend on it .|
|User | What is the best way to buy happiness ? |
| Bot | You just have to be a millionaire by your early 20s, then you can be happy . |
|User |This is so difficult ! |
| Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
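Note that the snippet above pulls the original microsoft/DialoGPT-small weights. To load this repository's pre-quantized 4-bit checkpoint instead, a hedged sketch (it assumes the bitsandbytes quantization config is stored with the weights, so no extra quantization arguments are needed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

quant_id = "RichardErkhov/microsoft_-_DialoGPT-small-4bits"
tokenizer = AutoTokenizer.from_pretrained(quant_id)
model = AutoModelForCausalLM.from_pretrained(quant_id, device_map="auto")
```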
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_microsoft__DialoGPT-small)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 25.02 |
| ARC (25-shot) | 25.77 |
| HellaSwag (10-shot) | 25.79 |
| MMLU (5-shot) | 25.81 |
| TruthfulQA (0-shot) | 47.49 |
| Winogrande (5-shot) | 50.28 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.0 |
| {} | RichardErkhov/microsoft_-_DialoGPT-small-4bits | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1911.00536",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-20T17:16:42+00:00 | [
"1911.00536"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #conversational #arxiv-1911.00536 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
DialoGPT-small - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
thumbnail: URL
tags:
* conversational
license: mit
---
A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
------------------------------------------------------------------------------
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multi-turn conversations.
The human evaluation results indicate that the responses generated by DialoGPT are comparable in quality to human responses under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
* Multi-turn generation examples from an interactive environment:
Please find the information about preprocessing, training and full details of the DialoGPT in the original DialoGPT repository
ArXiv paper: URL
### How to use
Now we are ready to try out how the model works as a chatting partner!
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [
"### How to use\n\n\nNow we are ready to try out how the model works as a chatting partner!\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #conversational #arxiv-1911.00536 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### How to use\n\n\nNow we are ready to try out how the model works as a chatting partner!\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DialoGPT-small - bnb 8bits
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/DialoGPT-small/
Original model description:
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multi-turn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the responses generated by DialoGPT are comparable in quality to human responses under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
* Multi-turn generation examples from an interactive environment:
|Role | Response |
|---------|--------|
|User | Does money buy happiness? |
| Bot | Depends how much money you spend on it .|
|User | What is the best way to buy happiness ? |
| Bot | You just have to be a millionaire by your early 20s, then you can be happy . |
|User |This is so difficult ! |
| Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
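As with the companion 4-bit repository, a hedged sketch for loading this pre-quantized 8-bit checkpoint directly (again assuming the bitsandbytes config ships with the weights):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

quant_id = "RichardErkhov/microsoft_-_DialoGPT-small-8bits"
tokenizer = AutoTokenizer.from_pretrained(quant_id)
model = AutoModelForCausalLM.from_pretrained(quant_id, device_map="auto")
```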
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_microsoft__DialoGPT-small)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 25.02 |
| ARC (25-shot) | 25.77 |
| HellaSwag (10-shot) | 25.79 |
| MMLU (5-shot) | 25.81 |
| TruthfulQA (0-shot) | 47.49 |
| Winogrande (5-shot) | 50.28 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.0 |
| {} | RichardErkhov/microsoft_-_DialoGPT-small-8bits | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1911.00536",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-20T17:17:10+00:00 | [
"1911.00536"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #conversational #arxiv-1911.00536 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
DialoGPT-small - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
thumbnail: URL
tags:
* conversational
license: mit
---
A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
------------------------------------------------------------------------------
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multi-turn conversations.
The human evaluation results indicate that the responses generated by DialoGPT are comparable in quality to human responses under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
* Multi-turn generation examples from an interactive environment:
Please find the information about preprocessing, training and full details of the DialoGPT in the original DialoGPT repository
ArXiv paper: URL
### How to use
Now we are ready to try out how the model works as a chatting partner!
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [
"### How to use\n\n\nNow we are ready to try out how the model works as a chatting partner!\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #conversational #arxiv-1911.00536 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### How to use\n\n\nNow we are ready to try out how the model works as a chatting partner!\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"license": "apache-2.0", "library_name": "transformers", "datasets": ["AIGym/function_calling_v2"]} | AIGym/Llama-2-7b-chat-hf-structured-responses-SFT | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:AIGym/function_calling_v2",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T17:19:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #dataset-AIGym/function_calling_v2 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #dataset-AIGym/function_calling_v2 #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | adapter-transformers |
# Adapter `BigTMiami/micro_par_bn_v_3_pretrain_adapter` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_MICRO_helpfulness_dataset_condensed](https://huggingface.co/datasets/BigTMiami/amazon_MICRO_helpfulness_dataset_condensed/) dataset and includes a prediction head for masked language modeling.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/micro_par_bn_v_3_pretrain_adapter", source="hf", set_active=True)
```
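Since the adapter ships a masked-LM prediction head, a quick sanity-check sketch (hedged: it assumes the active head exposes standard `logits`, and the example sentence is arbitrary):
```python
import torch
from transformers import AutoTokenizer
from adapters import AutoAdapterModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoAdapterModel.from_pretrained("roberta-base")
model.load_adapter("BigTMiami/micro_par_bn_v_3_pretrain_adapter", source="hf", set_active=True)

inputs = tokenizer("This product works really <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# decode the highest-scoring token at the mask position
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
print(tokenizer.decode(logits[0, mask_index].argmax()))
```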
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_MICRO_helpfulness_dataset_condensed"]} | BigTMiami/micro_par_bn_v_3_pretrain_adapter | null | [
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_MICRO_helpfulness_dataset_condensed",
"region:us"
] | null | 2024-04-20T17:21:16+00:00 | [] | [] | TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset_condensed #region-us
|
# Adapter 'BigTMiami/micro_par_bn_v_3_pretrain_adapter' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset_condensed dataset and includes a prediction head for masked language modeling.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
| [
"# Adapter 'BigTMiami/micro_par_bn_v_3_pretrain_adapter' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset_condensed #region-us \n",
"# Adapter 'BigTMiami/micro_par_bn_v_3_pretrain_adapter' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi-1_5 - bnb 4bits
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/phi-1_5/
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/phi-1_5/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
The language model Phi-1.5 is a Transformer with **1.3 billion** parameters. It was trained using the same data sources as [phi-1](https://huggingface.co/microsoft/phi-1), augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-1.5 demonstrates nearly state-of-the-art performance among models with fewer than 10 billion parameters.
We **did not** fine-tune Phi-1.5 either for **instruction following or through reinforcement learning from human feedback**. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
For a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.
Phi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.
## How to Use
Phi-1.5 has been integrated into `transformers` as of version 4.37.0. If you are using a lower version, ensure that you are doing the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
The current `transformers` version can be verified with: `pip list | grep transformers`.
## Intended Uses
Given the nature of the training data, Phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. Note that Phi-1.5, being a base model, often produces irrelevant text following the main answer. In the following example, we've truncated the answer for illustrative purposes only.
### QA Format:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
Answer: Mathematics is like a lighthouse, guiding us through the vast ocean of numbers and calculations. Just as a lighthouse illuminates the darkness, mathematics provides us with a clear path to navigate through complex problems. It helps us make sense of the world around us, just like a lighthouse helps ships find their way home.
```
where the model generates the text after "Answer:".
### Chat Format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Have you tried using a timer? It can help you stay on track and avoid distractions.
Alice: That's a good idea. I'll give it a try.
Charlie: Another thing that can help is to break up your study sessions into smaller chunks. It's easier to concentrate on one thing at a time.
Alice: That makes sense. I'll try that too.
Bob: And don't forget to take breaks! It's important to give your brain a rest so you can come back to your studies with a fresh perspective.
Alice: Thanks for the advice, guys. I feel more motivated now.
Charlie: No problem, Alice. We're all in this together.
Bob: Yeah, and remember that it's okay to ask for help if you need it. We're here to support each other.
```
where the model generates the text after the first "Bob:".
### Code Format:
```python
import math

def print_prime(n):
    """
    Print all primes between 1 and n
    """
    primes = []
    for num in range(2, n + 1):
        is_prime = True
        for i in range(2, int(math.sqrt(num)) + 1):
            if num % i == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(num)
    print(primes)
```
where the model generates the text after the comments.
**Notes:**
* Phi-1.5-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using `transformers<4.37.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Limitations of Phi-1.5
* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Although the model is trained with carefully selected data, it can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only; we hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Dataset size: 30B tokens
* Training tokens: 150B tokens
* Precision: fp16
* GPUs: 32xA100-40G
* Training time: 8 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-1_5/resolve/main/LICENSE).
### Citation
You can find the paper at https://arxiv.org/abs/2309.05463. Please cite as:
```bib
@article{textbooks2,
title={Textbooks Are All You Need II: \textbf{phi-1.5} technical report},
author={Li, Yuanzhi and Bubeck, S{\'e}bastien and Eldan, Ronen and Del Giorno, Allie and Gunasekar, Suriya and Lee, Yin Tat},
journal={arXiv preprint arXiv:2309.05463},
year={2023}
}
```
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
| {} | RichardErkhov/microsoft_-_phi-1_5-4bits | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:2309.05463",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-20T17:24:40+00:00 | [
"2309.05463"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-2309.05463 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
phi-1_5 - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: mit
license_link: URL
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
The language model Phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as phi-1, augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-1.5 demonstrates nearly state-of-the-art performance among models with fewer than 10 billion parameters.
We did not fine-tune Phi-1.5 either for instruction following or through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
For a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.
Phi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.
## How to Use
Phi-1.5 has been integrated into 'transformers' as of version 4.37.0. If you are using a lower version, ensure that you are doing the following:
* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.
The current 'transformers' version can be verified with: 'pip list | grep transformers'.
## Intended Uses
Given the nature of the training data, Phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. Note that Phi-1.5, being a base model, often produces irrelevant text following the main answer. In the following example, we've truncated the answer for illustrative purposes only.
### QA Format:
where the model generates the text after "Answer:".
### Chat Format:
where the model generates the text after the first "Bob:".
### Code Format:
where the model generates the text after the comments.
Notes:
* Phi-1.5-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.
## Sample Code
## Limitations of Phi-1.5
* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Although the model was trained with carefully selected data, it can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- we hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Dataset size: 30B tokens
* Training tokens: 150B tokens
* Precision: fp16
* GPUs: 32xA100-40G
* Training time: 8 days
### Software
* PyTorch
* DeepSpeed
* Flash-Attention
### License
The model is licensed under the MIT license.
You can find the paper at URL. Please cite as:
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| [
"## Model Summary\n\nThe language model Phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as phi-1, augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.\n\nWe did not fine-tune Phi-1.5 either for instruction following or through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.\n\nFor a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.\n\nPhi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.",
"## How to Use\n\nPhi-1.5 has been integrated in the 'transformers' version 4.37.0. If you are using a lower version, ensure that you are doing the following:\n\n* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.",
"## Intended Uses\n\nGiven the nature of the training data, Phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. Note that Phi-1.5, being a base model, often produces irrelevant text following the main answer. In the following example, we've truncated the answer for illustrative purposes only.",
"### QA Format:\n\n\n\nwhere the model generates the text after \"Answer:\".",
"### Chat Format:\n\n\n\nwhere the model generates the text after the first \"Bob:\".",
"### Code Format:\n\n\n\nwhere the model generates the text after the comments.\n\nNotes:\n\n* Phi-1.5-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.\n\n* Phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.\n\n* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.",
"## Sample Code",
"## Limitations of Phi-1.5\n\n* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.\n\n* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.\n\n* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.\n\n* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.\n\n* Toxicity: Despite that the model is trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- We hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.",
"## Training",
"### Model\n\n* Architecture: a Transformer-based model with next-word prediction objective\n* Dataset size: 30B tokens\n* Training tokens: 150B tokens\n* Precision: fp16\n* GPUs: 32xA100-40G\n* Training time: 8 days",
"### Software\n\n* PyTorch\n* DeepSpeed\n* Flash-Attention",
"### License\n\nThe model is licensed under the MIT license.\n\nYou can find the paper at URL Please cite as:",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-2309.05463 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"## Model Summary\n\nThe language model Phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as phi-1, augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.\n\nWe did not fine-tune Phi-1.5 either for instruction following or through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.\n\nFor a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.\n\nPhi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.",
"## How to Use\n\nPhi-1.5 has been integrated in the 'transformers' version 4.37.0. If you are using a lower version, ensure that you are doing the following:\n\n* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.",
"## Intended Uses\n\nGiven the nature of the training data, Phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. Note that Phi-1.5, being a base model, often produces irrelevant text following the main answer. In the following example, we've truncated the answer for illustrative purposes only.",
"### QA Format:\n\n\n\nwhere the model generates the text after \"Answer:\".",
"### Chat Format:\n\n\n\nwhere the model generates the text after the first \"Bob:\".",
"### Code Format:\n\n\n\nwhere the model generates the text after the comments.\n\nNotes:\n\n* Phi-1.5-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.\n\n* Phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.\n\n* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.",
"## Sample Code",
"## Limitations of Phi-1.5\n\n* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.\n\n* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.\n\n* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.\n\n* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.\n\n* Toxicity: Despite that the model is trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- We hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.",
"## Training",
"### Model\n\n* Architecture: a Transformer-based model with next-word prediction objective\n* Dataset size: 30B tokens\n* Training tokens: 150B tokens\n* Precision: fp16\n* GPUs: 32xA100-40G\n* Training time: 8 days",
"### Software\n\n* PyTorch\n* DeepSpeed\n* Flash-Attention",
"### License\n\nThe model is licensed under the MIT license.\n\nYou can find the paper at URL Please cite as:",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi-1 - bnb 4bits
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/phi-1/
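As a quick orientation (our addition, not part of the original card below), a pre-quantized bitsandbytes checkpoint like this one can usually be loaded directly with `transformers`; this is a minimal sketch that assumes `bitsandbytes`, `accelerate`, and a CUDA GPU are available:

```python
# Minimal sketch: load this pre-quantized 4-bit checkpoint.
# Assumes bitsandbytes, accelerate, and a CUDA GPU are installed/available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/microsoft_-_phi-1-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",        # places the 4-bit weights on the GPU
    trust_remote_code=True,   # phi-1 ships custom modeling code
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs)[0])
```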
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/phi-1/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- code
---
## Model Summary
The language model Phi-1 is a Transformer with 1.3 billion parameters, specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from [The Stack v1.2](https://huggingface.co/datasets/bigcode/the-stack), Q&A content from [StackOverflow](https://archive.org/download/stackexchange), competition code from [code_contests](https://github.com/deepmind/code_contests), and synthetic Python textbooks and exercises generated by [gpt-3.5-turbo-0301](https://platform.openai.com/docs/models/gpt-3-5). Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), Phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.
## How to Use
Phi-1 has been integrated into `transformers` as of version 4.37.0. If you are using a lower version, ensure that you do the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
The current `transformers` version can be verified with: `pip list | grep transformers`.
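If the check shows an older version, upgrading is typically the simplest fix (standard pip commands, added here as a convenience rather than as part of the original card):

```
pip list | grep transformers             # check the installed version
pip install -U "transformers>=4.37.0"    # upgrade if it is older
```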
## Intended Uses
Given the nature of the training data, Phi-1 is best suited for prompts using the code format:
### Code Format:
```python
def print_prime(n):
"""
Print all primes between 1 and n
"""
for num in range(2, n+1):
for i in range(2, num):
if num % i == 0:
break
else:
print(num)
```
where the model generates the code after the comments. (Note: the `else` clause on the inner `for` loop is legitimate Python; it runs only when the loop completes without hitting `break`.)
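As a quick sanity check (ours, not the original card's), the function above can be called directly:

```python
print_prime(20)
# Prints the primes up to 20, one per line: 2 3 5 7 11 13 17 19
```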
**Notes:**
* Phi-1 is intended for code purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.
* Direct adoption for production coding tasks is out of the scope of this research project. As a result, Phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.
* If you are using `transformers<4.37.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Limitations of Phi-1
* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.
* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.
* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.
* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.
* Limitations on Natural Language Comprehension: As a coding bot, Phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.
* Potential Biases: Phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.
## Warning about Security Risks
When leveraging Phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. Examples include, but are not limited to:
* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.
* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS commands, or other injection attacks.
* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.
* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).
* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.
* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.
Given these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.
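To make the first pitfall concrete, here is a minimal, illustrative guard against directory traversal; the helper name and base directory are our own assumptions, not part of the original card:

```python
import os

def safe_join(base_dir: str, user_path: str) -> str:
    """Resolve user_path under base_dir, rejecting traversal attempts."""
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, user_path))
    # realpath collapses "..", so an escape shows up as a different prefix.
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"Unsafe path: {user_path!r}")
    return candidate

# Example: safe_join("/srv/uploads", "../etc/passwd") raises ValueError.
```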
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Training tokens: 54B tokens (7B unique tokens)
* Precision: fp16
* GPUs: 8 A100
* Training time: 6 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-1/resolve/main/LICENSE).
### Citation
```bib
@article{gunasekar2023textbooks,
title={Textbooks Are All You Need},
author={Gunasekar, Suriya and Zhang, Yi and Aneja, Jyoti and Mendes, Caio C{\'e}sar Teodoro and Del Giorno, Allie and Gopi, Sivakanth and Javaheripi, Mojan and Kauffmann, Piero and de Rosa, Gustavo and Saarikivi, Olli and others},
journal={arXiv preprint arXiv:2306.11644},
year={2023}
}
```
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| {} | RichardErkhov/microsoft_-_phi-1-4bits | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-20T17:25:24+00:00 | [] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
phi-1 - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: mit
license_link: URL
language:
- en
pipeline_tag: text-generation
tags:
- code
---
## Model Summary
The language model Phi-1 is a Transformer with 1.3 billion parameters, specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from The Stack v1.2, Q&A content from StackOverflow, competition code from code_contests, and synthetic Python textbooks and exercises generated by gpt-3.5-turbo-0301. Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), Phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.
## How to Use
Phi-1 has been integrated in the 'transformers' version 4.37.0. If you are using a lower version, ensure that you are doing the following:
* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.
The current 'transformers' version can be verified with: 'pip list | grep transformers'.
## Intended Uses
Given the nature of the training data, Phi-1 is best suited for prompts using the code format:
### Code Format:
where the model generates the code after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops.)
Notes:
* Phi-1 is intended for code purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.
* Direct adoption for production coding tasks is out of the scope of this research project. As a result, Phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.
* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.
## Sample Code
## Limitations of Phi-1
* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.
* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.
* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.
* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.
* Limitations on Natural Language Comprehension. As a coding bot, Phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.
* Potential Biases: Phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.
## Warning about Security Risks
When leveraging Phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. Examples include, but are not limited to:
* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.
* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS commands, or other injection attacks.
* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.
* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).
* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.
* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.
Given these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Training tokens: 54B tokens (7B unique tokens)
* Precision: fp16
* GPUs: 8 A100
* Training time: 6 days
### Software
* PyTorch
* DeepSpeed
* Flash-Attention
### License
The model is licensed under the MIT license.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| [
"## Model Summary\n\nThe language model Phi-1 is a Transformer with 1.3 billion parameters, specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from The Stack v1.2, Q&A content from StackOverflow, competition code from code_contests, and synthetic Python textbooks and exercises generated by gpt-3.5-turbo-0301. Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), Phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.",
"## How to Use\n\nPhi-1 has been integrated in the 'transformers' version 4.37.0. If you are using a lower version, ensure that you are doing the following:\n\n* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.",
"## Intended Uses\n\nGiven the nature of the training data, Phi-1 is best suited for prompts using the code format:",
"### Code Format:\n\n\n\nwhere the model generates the code after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops.)\n\nNotes:\n\n* Phi-1 is intended for code purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.\n\n* Direct adoption for production coding tasks is out of the scope of this research project. As a result, Phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.\n\n* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.",
"## Sample Code",
"## Limitations of Phi-1\n\n* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.\n\n* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.\n\n* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.\n* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.\n\n* Limitations on Natural Language Comprehension. As a coding bot, Phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.\n\n* Potential Biases: Phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.",
"## Warning about Security Risks\n\nWhen leveraging Phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. Examples include, but are not limited to:\n\n* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.\n\n* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS commands, or other injection attacks.\n\n* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.\n\n* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).\n\n* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.\n\n* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.\n\nGiven these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.",
"## Training",
"### Model\n\n* Architecture: a Transformer-based model with next-word prediction objective\n\n* Training tokens: 54B tokens (7B unique tokens)\n\n* Precision: fp16\n\n* GPUs: 8 A100\n\n* Training time: 6 days",
"### Software\n\n* PyTorch\n\n* DeepSpeed\n\n* Flash-Attention",
"### License\n\nThe model is licensed under the MIT license.",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"## Model Summary\n\nThe language model Phi-1 is a Transformer with 1.3 billion parameters, specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from The Stack v1.2, Q&A content from StackOverflow, competition code from code_contests, and synthetic Python textbooks and exercises generated by gpt-3.5-turbo-0301. Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), Phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.",
"## How to Use\n\nPhi-1 has been integrated in the 'transformers' version 4.37.0. If you are using a lower version, ensure that you are doing the following:\n\n* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.",
"## Intended Uses\n\nGiven the nature of the training data, Phi-1 is best suited for prompts using the code format:",
"### Code Format:\n\n\n\nwhere the model generates the code after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops.)\n\nNotes:\n\n* Phi-1 is intended for code purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.\n\n* Direct adoption for production coding tasks is out of the scope of this research project. As a result, Phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.\n\n* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.",
"## Sample Code",
"## Limitations of Phi-1\n\n* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.\n\n* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.\n\n* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.\n* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.\n\n* Limitations on Natural Language Comprehension. As a coding bot, Phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.\n\n* Potential Biases: Phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.",
"## Warning about Security Risks\n\nWhen leveraging Phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. Examples include, but are not limited to:\n\n* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.\n\n* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS commands, or other injection attacks.\n\n* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.\n\n* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).\n\n* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.\n\n* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.\n\nGiven these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.",
"## Training",
"### Model\n\n* Architecture: a Transformer-based model with next-word prediction objective\n\n* Training tokens: 54B tokens (7B unique tokens)\n\n* Precision: fp16\n\n* GPUs: 8 A100\n\n* Training time: 6 days",
"### Software\n\n* PyTorch\n\n* DeepSpeed\n\n* Flash-Attention",
"### License\n\nThe model is licensed under the MIT license.",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
null | adapter-transformers |
# Adapter `BigTMiami/micro_par_bn_v_4_pretrain_adapter` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_MICRO_helpfulness_dataset_condensed](https://huggingface.co/datasets/BigTMiami/amazon_MICRO_helpfulness_dataset_condensed/) dataset and includes a prediction head for masked lm.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/micro_par_bn_v_4_pretrain_adapter", source="hf", set_active=True)
```
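As a follow-up sketch (ours, not the original card's), the activated adapter and its masked-LM head can be exercised like this; the example sentence is arbitrary:

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This product is <mask> useful.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # `model` from the snippet above
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```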
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_MICRO_helpfulness_dataset_condensed"]} | BigTMiami/micro_par_bn_v_4_pretrain_adapter | null | [
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_MICRO_helpfulness_dataset_condensed",
"region:us"
] | null | 2024-04-20T17:25:51+00:00 | [] | [] | TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset_condensed #region-us
|
# Adapter 'BigTMiami/micro_par_bn_v_4_pretrain_adapter' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset_condensed dataset and includes a prediction head for masked lm.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
| [
"# Adapter 'BigTMiami/micro_par_bn_v_4_pretrain_adapter' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset_condensed #region-us \n",
"# Adapter 'BigTMiami/micro_par_bn_v_4_pretrain_adapter' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
text-generation | transformers | # Llama-3-Orca-1.0-8B-GGUF
Quant of [https://huggingface.co/Locutusque/Llama-3-Orca-1.0-8B](https://huggingface.co/Locutusque/Llama-3-Orca-1.0-8B)
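These quants can typically be run with `llama-cpp-python`; the snippet below is a minimal sketch of ours, and the GGUF filename is an assumption to adjust to whichever file you download:

```python
from llama_cpp import Llama

# Hypothetical filename: match the quant you downloaded (e.g. Q4_K_M).
llm = Llama(model_path="Llama-3-Orca-1.0-8B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a haiku about orcas.", max_tokens=64)
print(out["choices"][0]["text"])
```

Available quantizations: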
- f32
- f16
- Q8_0
- Q4_K_M
- Q2_K | {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["gguf"], "license_name": "llama-3-license", "license_link": "https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE", "pipeline_tag": "text-generation"} | leafspark/Llama-3-Orca-1.0-8B-GGUF | null | [
"transformers",
"gguf",
"text-generation",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:26:17+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #text-generation #en #license-other #endpoints_compatible #region-us
| # Llama-3-Orca-1.0-8B-GGUF
Quant of URL
- f32
- f16
- Q8_0
- Q4_K_M
- Q2_K | [
"# Llama-3-Orca-1.0-8B-GGUF\n\nQuant of URL\n\n- f32\n- f16\n- Q8_0\n- Q4_K_M\n- Q2_K"
] | [
"TAGS\n#transformers #gguf #text-generation #en #license-other #endpoints_compatible #region-us \n",
"# Llama-3-Orca-1.0-8B-GGUF\n\nQuant of URL\n\n- f32\n- f16\n- Q8_0\n- Q4_K_M\n- Q2_K"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "sft"]} | b2bp8ip/tinyllama_mbpp_ft | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-20T17:26:59+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | adapter-transformers |
# Adapter `BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_0` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_MICRO_helpfulness_dataset](https://huggingface.co/datasets/BigTMiami/amazon_MICRO_helpfulness_dataset/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_0", source="hf", set_active=True)
```
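A hypothetical follow-up (not from the original card) showing how the classification head could then be queried; the label mapping is dataset-specific:

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This review was very helpful to me.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # `model` from the snippet above
print(logits.softmax(dim=-1))  # class probabilities; inspect the head config for labels
```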
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_MICRO_helpfulness_dataset"]} | BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_0 | null | [
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_MICRO_helpfulness_dataset",
"region:us"
] | null | 2024-04-20T17:27:17+00:00 | [] | [] | TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us
|
# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_0' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
| [
"# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_0' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us \n",
"# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_0' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi-1_5 - bnb 8bits
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/phi-1_5/
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/phi-1_5/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
The language model Phi-1.5 is a Transformer with **1.3 billion** parameters. It was trained using the same data sources as [phi-1](https://huggingface.co/microsoft/phi-1), augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.
We **did not** fine-tune Phi-1.5 either for **instruction following or through reinforcement learning from human feedback**. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
For a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.
Phi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.
## How to Use
Phi-1.5 has been integrated in the `transformers` version 4.37.0. If you are using a lower version, ensure that you are doing the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
The current `transformers` version can be verified with: `pip list | grep transformers`.
## Intended Uses
Given the nature of the training data, Phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. Note that Phi-1.5, being a base model, often produces irrelevant text following the main answer. In the following example, we've truncated the answer for illustrative purposes only.
### QA Format:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
Answer: Mathematics is like a lighthouse, guiding us through the vast ocean of numbers and calculations. Just as a lighthouse illuminates the darkness, mathematics provides us with a clear path to navigate through complex problems. It helps us make sense of the world around us, just like a lighthouse helps ships find their way home.
```
where the model generates the text after "Answer:".
### Chat Format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Have you tried using a timer? It can help you stay on track and avoid distractions.
Alice: That's a good idea. I'll give it a try.
Charlie: Another thing that can help is to break up your study sessions into smaller chunks. It's easier to concentrate on one thing at a time.
Alice: That makes sense. I'll try that too.
Bob: And don't forget to take breaks! It's important to give your brain a rest so you can come back to your studies with a fresh perspective.
Alice: Thanks for the advice, guys. I feel more motivated now.
Charlie: No problem, Alice. We're all in this together.
Bob: Yeah, and remember that it's okay to ask for help if you need it. We're here to support each other.
```
where the model generates the text after the first "Bob:".
### Code Format:
```python
import math

def print_prime(n):
"""
Print all primes between 1 and n
"""
primes = []
for num in range(2, n+1):
is_prime = True
for i in range(2, int(math.sqrt(num))+1):
if num % i == 0:
is_prime = False
break
if is_prime:
primes.append(num)
print(primes)
```
where the model generates the text after the comments.
**Notes:**
* Phi-1.5-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using `transformers<4.37.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
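
Because this repository hosts a bitsandbytes 8-bit quantization of the model above, a loading sketch for the quantized checkpoint may also be useful. It is not from the original card: the repo id is taken from this page's metadata, and we assume the checkpoint embeds its quantization config and follows the standard `transformers` layout, as bnb exports usually do.

```python
# Hypothetical sketch: load the pre-quantized 8-bit checkpoint directly.
# Assumes a CUDA GPU with bitsandbytes installed; repo id from this page's metadata.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/microsoft_-_phi-1_5-8bits"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Write a detailed analogy between mathematics and a lighthouse.\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```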
## Limitations of Phi-1.5
* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Although the model is trained with carefully selected data, it can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- we hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Dataset size: 30B tokens
* Training tokens: 150B tokens
* Precision: fp16
* GPUs: 32xA100-40G
* Training time: 8 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-1_5/resolve/main/LICENSE).
### Citation
You can find the paper at https://arxiv.org/abs/2309.05463. Please cite as:
```bib
@article{textbooks2,
title={Textbooks Are All You Need II: \textbf{phi-1.5} technical report},
author={Li, Yuanzhi and Bubeck, S{\'e}bastien and Eldan, Ronen and Del Giorno, Allie and Gunasekar, Suriya and Lee, Yin Tat},
journal={arXiv preprint arXiv:2309.05463},
year={2023}
}
```
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| {} | RichardErkhov/microsoft_-_phi-1_5-8bits | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:2309.05463",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-20T17:27:22+00:00 | [
"2309.05463"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-2309.05463 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
phi-1_5 - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: mit
license_link: URL
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
The language model Phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as phi-1, augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.
We did not fine-tune Phi-1.5 either for instruction following or through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
For a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.
Phi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.
## How to Use
Phi-1.5 has been integrated in the 'transformers' version 4.37.0. If you are using a lower version, ensure that you are doing the following:
* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.
The current 'transformers' version can be verified with: 'pip list | grep transformers'.
## Intended Uses
Given the nature of the training data, Phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. Note that Phi-1.5, being a base model, often produces irrelevant text following the main answer. In the following example, we've truncated the answer for illustrative purposes only.
### QA Format:
where the model generates the text after "Answer:".
### Chat Format:
where the model generates the text after the first "Bob:".
### Code Format:
where the model generates the text after the comments.
Notes:
* Phi-1.5-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.
## Sample Code
## Limitations of Phi-1.5
* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Although the model is trained with carefully selected data, it can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- we hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Dataset size: 30B tokens
* Training tokens: 150B tokens
* Precision: fp16
* GPUs: 32xA100-40G
* Training time: 8 days
### Software
* PyTorch
* DeepSpeed
* Flash-Attention
### License
The model is licensed under the MIT license.
You can find the paper at URL Please cite as:
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| [
"## Model Summary\n\nThe language model Phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as phi-1, augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.\n\nWe did not fine-tune Phi-1.5 either for instruction following or through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.\n\nFor a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.\n\nPhi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.",
"## How to Use\n\nPhi-1.5 has been integrated in the 'transformers' version 4.37.0. If you are using a lower version, ensure that you are doing the following:\n\n* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.",
"## Intended Uses\n\nGiven the nature of the training data, Phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. Note that Phi-1.5, being a base model, often produces irrelevant text following the main answer. In the following example, we've truncated the answer for illustrative purposes only.",
"### QA Format:\n\n\n\nwhere the model generates the text after \"Answer:\".",
"### Chat Format:\n\n\n\nwhere the model generates the text after the first \"Bob:\".",
"### Code Format:\n\n\n\nwhere the model generates the text after the comments.\n\nNotes:\n\n* Phi-1.5-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.\n\n* Phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.\n\n* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.",
"## Sample Code",
"## Limitations of Phi-1.5\n\n* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.\n\n* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.\n\n* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.\n\n* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.\n\n* Toxicity: Despite that the model is trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- We hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.",
"## Training",
"### Model\n\n* Architecture: a Transformer-based model with next-word prediction objective\n* Dataset size: 30B tokens\n* Training tokens: 150B tokens\n* Precision: fp16\n* GPUs: 32xA100-40G\n* Training time: 8 days",
"### Software\n\n* PyTorch\n* DeepSpeed\n* Flash-Attention",
"### License\n\nThe model is licensed under the MIT license.\n\nYou can find the paper at URL Please cite as:",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-2309.05463 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"## Model Summary\n\nThe language model Phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as phi-1, augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.\n\nWe did not fine-tune Phi-1.5 either for instruction following or through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.\n\nFor a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.\n\nPhi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.",
"## How to Use\n\nPhi-1.5 has been integrated in the 'transformers' version 4.37.0. If you are using a lower version, ensure that you are doing the following:\n\n* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.",
"## Intended Uses\n\nGiven the nature of the training data, Phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. Note that Phi-1.5, being a base model, often produces irrelevant text following the main answer. In the following example, we've truncated the answer for illustrative purposes only.",
"### QA Format:\n\n\n\nwhere the model generates the text after \"Answer:\".",
"### Chat Format:\n\n\n\nwhere the model generates the text after the first \"Bob:\".",
"### Code Format:\n\n\n\nwhere the model generates the text after the comments.\n\nNotes:\n\n* Phi-1.5-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.\n\n* Phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.\n\n* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.",
"## Sample Code",
"## Limitations of Phi-1.5\n\n* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.\n\n* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.\n\n* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.\n\n* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.\n\n* Toxicity: Despite that the model is trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- We hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.",
"## Training",
"### Model\n\n* Architecture: a Transformer-based model with next-word prediction objective\n* Dataset size: 30B tokens\n* Training tokens: 150B tokens\n* Precision: fp16\n* GPUs: 32xA100-40G\n* Training time: 8 days",
"### Software\n\n* PyTorch\n* DeepSpeed\n* Flash-Attention",
"### License\n\nThe model is licensed under the MIT license.\n\nYou can find the paper at URL Please cite as:",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi-1 - bnb 8bits
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/phi-1/
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/phi-1/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- code
---
## Model Summary
The language model Phi-1 is a Transformer with 1.3 billion parameters, specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from [The Stack v1.2](https://huggingface.co/datasets/bigcode/the-stack), Q&A content from [StackOverflow](https://archive.org/download/stackexchange), competition code from [code_contests](https://github.com/deepmind/code_contests), and synthetic Python textbooks and exercises generated by [gpt-3.5-turbo-0301](https://platform.openai.com/docs/models/gpt-3-5). Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), Phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.
## How to Use
Phi-1 has been integrated in the `transformers` version 4.37.0. If you are using a lower version, ensure that you are doing the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
The current `transformers` version can be verified with: `pip list | grep transformers`.
## Intended Uses
Given the nature of the training data, Phi-1 is best suited for prompts using the code format:
### Code Format:
```python
def print_prime(n):
"""
Print all primes between 1 and n
"""
for num in range(2, n+1):
for i in range(2, num):
if num % i == 0:
break
else:
print(num)
```
where the model generates the code after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops.)
**Notes:**
* Phi-1 is intended for code purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.
* Direct adoption for production coding tasks is out of the scope of this research project. As a result, Phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.
* If you are using `transformers<4.37.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Limitations of Phi-1
* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.
* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.
* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.
* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.
* Limitations on Natural Language Comprehension. As a coding bot, Phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.
* Potential Biases: Phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.
## Warning about Security Risks
When leveraging Phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. Examples include, but are not limited to:
* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.
* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS commands, or other injection attacks.
* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.
* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).
* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.
* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.
Given these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.
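
As one concrete, deliberately incomplete way to act on this advice, generated code can be screened statically before anything is executed. The sketch below is ours, not part of the original card: it flags a handful of obviously dangerous call names in generated Python, and the blocklist is illustrative only -- passing this check is no substitute for expert review or sandboxed testing.

```python
# Illustrative only: a tiny static scan for dangerous-looking calls in
# model-generated Python. Passing this check does NOT mean the code is safe.
import ast

SUSPICIOUS = {"eval", "exec", "system", "popen", "remove", "rmtree"}

def flag_suspicious_calls(source: str) -> list:
    """Return (line, name) pairs for calls whose target name is blocklisted."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", "")
            if name in SUSPICIOUS:
                findings.append((node.lineno, name))
    return findings

generated = "import os\nos.system('rm -rf /tmp/x')\n"
print(flag_suspicious_calls(generated))  # -> [(2, 'system')]
```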
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Training tokens: 54B tokens (7B unique tokens)
* Precision: fp16
* GPUs: 8 A100
* Training time: 6 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-1/resolve/main/LICENSE).
### Citation
```bib
@article{gunasekar2023textbooks,
title={Textbooks Are All You Need},
author={Gunasekar, Suriya and Zhang, Yi and Aneja, Jyoti and Mendes, Caio C{\'e}sar Teodoro and Del Giorno, Allie and Gopi, Sivakanth and Javaheripi, Mojan and Kauffmann, Piero and de Rosa, Gustavo and Saarikivi, Olli and others},
journal={arXiv preprint arXiv:2306.11644},
year={2023}
}
```
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| {} | RichardErkhov/microsoft_-_phi-1-8bits | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-20T17:27:44+00:00 | [] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
phi-1 - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: mit
license_link: URL
language:
- en
pipeline_tag: text-generation
tags:
- code
---
## Model Summary
The language model Phi-1 is a Transformer with 1.3 billion parameters, specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from The Stack v1.2, Q&A content from StackOverflow, competition code from code_contests, and synthetic Python textbooks and exercises generated by gpt-3.5-turbo-0301. Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), Phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.
## How to Use
Phi-1 has been integrated in the 'transformers' version 4.37.0. If you are using a lower version, ensure that you are doing the following:
* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.
The current 'transformers' version can be verified with: 'pip list | grep transformers'.
## Intended Uses
Given the nature of the training data, Phi-1 is best suited for prompts using the code format:
### Code Format:
where the model generates the code after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops.)
Notes:
* Phi-1 is intended for code purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.
* Direct adoption for production coding tasks is out of the scope of this research project. As a result, Phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.
* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.
## Sample Code
## Limitations of Phi-1
* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.
* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.
* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.
* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.
* Limitations on Natural Language Comprehension. As a coding bot, Phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.
* Potential Biases: Phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.
## Warning about Security Risks
When leveraging Phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. Examples include, but are not limited to:
* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.
* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS commands, or other injection attacks.
* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.
* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).
* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.
* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.
Given these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Training tokens: 54B tokens (7B unique tokens)
* Precision: fp16
* GPUs: 8 A100
* Training time: 6 days
### Software
* PyTorch
* DeepSpeed
* Flash-Attention
### License
The model is licensed under the MIT license.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| [
"## Model Summary\n\nThe language model Phi-1 is a Transformer with 1.3 billion parameters, specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from The Stack v1.2, Q&A content from StackOverflow, competition code from code_contests, and synthetic Python textbooks and exercises generated by gpt-3.5-turbo-0301. Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), Phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.",
"## How to Use\n\nPhi-1 has been integrated in the 'transformers' version 4.37.0. If you are using a lower version, ensure that you are doing the following:\n\n* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.",
"## Intended Uses\n\nGiven the nature of the training data, Phi-1 is best suited for prompts using the code format:",
"### Code Format:\n\n\n\nwhere the model generates the code after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops.)\n\nNotes:\n\n* Phi-1 is intended for code purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.\n\n* Direct adoption for production coding tasks is out of the scope of this research project. As a result, Phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.\n\n* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.",
"## Sample Code",
"## Limitations of Phi-1\n\n* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.\n\n* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.\n\n* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.\n* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.\n\n* Limitations on Natural Language Comprehension. As a coding bot, Phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.\n\n* Potential Biases: Phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.",
"## Warning about Security Risks\n\nWhen leveraging Phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. Examples include, but are not limited to:\n\n* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.\n\n* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS commands, or other injection attacks.\n\n* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.\n\n* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).\n\n* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.\n\n* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.\n\nGiven these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.",
"## Training",
"### Model\n\n* Architecture: a Transformer-based model with next-word prediction objective\n\n* Training tokens: 54B tokens (7B unique tokens)\n\n* Precision: fp16\n\n* GPUs: 8 A100\n\n* Training time: 6 days",
"### Software\n\n* PyTorch\n\n* DeepSpeed\n\n* Flash-Attention",
"### License\n\nThe model is licensed under the MIT license.",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"## Model Summary\n\nThe language model Phi-1 is a Transformer with 1.3 billion parameters, specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from The Stack v1.2, Q&A content from StackOverflow, competition code from code_contests, and synthetic Python textbooks and exercises generated by gpt-3.5-turbo-0301. Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), Phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.",
"## How to Use\n\nPhi-1 has been integrated in the 'transformers' version 4.37.0. If you are using a lower version, ensure that you are doing the following:\n\n* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.",
"## Intended Uses\n\nGiven the nature of the training data, Phi-1 is best suited for prompts using the code format:",
"### Code Format:\n\n\n\nwhere the model generates the code after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops.)\n\nNotes:\n\n* Phi-1 is intended for code purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.\n\n* Direct adoption for production coding tasks is out of the scope of this research project. As a result, Phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.\n\n* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.",
"## Sample Code",
"## Limitations of Phi-1\n\n* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.\n\n* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.\n\n* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.\n* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.\n\n* Limitations on Natural Language Comprehension. As a coding bot, Phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.\n\n* Potential Biases: Phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.",
"## Warning about Security Risks\n\nWhen leveraging Phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. Examples include, but are not limited to:\n\n* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.\n\n* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS commands, or other injection attacks.\n\n* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.\n\n* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).\n\n* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.\n\n* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.\n\nGiven these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.",
"## Training",
"### Model\n\n* Architecture: a Transformer-based model with next-word prediction objective\n\n* Training tokens: 54B tokens (7B unique tokens)\n\n* Precision: fp16\n\n* GPUs: 8 A100\n\n* Training time: 6 days",
"### Software\n\n* PyTorch\n\n* DeepSpeed\n\n* Flash-Attention",
"### License\n\nThe model is licensed under the MIT license.",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
null | transformers |
# Uploaded model
- **Developed by:** indiehackers
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
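
For readers who want to try the model, a minimal inference sketch is below. It is not part of the original card: it assumes the standard Unsloth loading API, that this repository can be loaded directly by name (Unsloth's loader handles both merged weights and LoRA adapters in the common case), and an instruct-style prompt, none of which the card confirms.

```python
# Hypothetical sketch: load this fine-tune for inference with Unsloth.
# Repo id from this page's metadata; 4-bit loading mirrors the
# unsloth/gemma-7b-it-bnb-4bit base and is an assumption.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="indiehackers/gemma7b-telugu-instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster generation path

inputs = tokenizer(["Translate to Telugu: Good morning!"], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs)[0])
```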
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-7b-it-bnb-4bit"} | indiehackers/gemma7b-telugu-instruct | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:28:45+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-7b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: indiehackers
- License: apache-2.0
- Finetuned from model : unsloth/gemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: indiehackers\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-7b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: indiehackers\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
feature-extraction | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_bge_ver24
This model is a fine-tuned version of [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 30.0
- mixed_precision_training: Native AMP
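
For readers who want to reproduce the setup, the hyperparameters above map roughly onto `TrainingArguments` as sketched below. This mapping is our assumption, since the card does not include the training script; launching on 2 GPUs (e.g. with `torchrun`) yields the listed totals of 64/16, and the Adam betas/epsilon listed above are the `transformers` defaults.

```python
# Hypothetical mapping of the hyperparameters above onto TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="finetuned_bge_ver24",
    learning_rate=1e-5,
    per_device_train_batch_size=32,   # x2 GPUs = total train batch size 64
    per_device_eval_batch_size=8,     # x2 GPUs = total eval batch size 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.3,
    num_train_epochs=30.0,
    fp16=True,                        # "Native AMP" mixed precision
)
```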
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "BAAI/bge-m3", "model-index": [{"name": "finetuned_bge_ver24", "results": []}]} | comet24082002/finetuned_bge_ver24 | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"feature-extraction",
"generated_from_trainer",
"base_model:BAAI/bge-m3",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:29:16+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us
|
# finetuned_bge_ver24
This model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# finetuned_bge_ver24\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.3\n- num_epochs: 30.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us \n",
"# finetuned_bge_ver24\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.3\n- num_epochs: 30.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi-2 - bnb 4bits
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/phi-2/
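A plausible way to load this pre-quantized checkpoint, hedged: it assumes the bitsandbytes quantization config is saved inside the repo, so `transformers` (with `bitsandbytes` installed) can restore it directly.

```python
# Sketch only: loading a serialized bitsandbytes 4-bit checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/microsoft_-_phi-2-4bits"  # repo id from this card
# add trust_remote_code=True if you are on transformers < 4.37
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)
```

The same pattern should apply to the other bnb-4bit repos in this collection.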
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.
Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
## How to Use
Phi-2 has been integrated into `transformers` version 4.37.0; please ensure that you are using a version equal to or higher than that.
Phi-2 is known for having an attention overflow issue (with FP16). If you are facing this issue, please enable/disable autocast on the [PhiAttention.forward()](https://github.com/huggingface/transformers/blob/main/src/transformers/models/phi/modeling_phi.py#L306) function.
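A sketch of one workaround, hedged: the card only points at `PhiAttention.forward()`, so this simply keeps the whole generation call out of autocast, which may be broader than strictly necessary.

```python
# Sketch only: run generation with autocast disabled to dodge the FP16 attention overflow.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float16, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
inputs = tokenizer("Write a haiku about lighthouses.", return_tensors="pt").to("cuda")

with torch.autocast(device_type="cuda", enabled=False):  # keep attention math out of autocast
    print(tokenizer.decode(model.generate(**inputs, max_length=64)[0]))
```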
## Intended Uses
Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
### QA Format:
You can provide the prompt as a standalone question as follows:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
```
where the model generates the text after ".".
To encourage the model to write more concise answers, you can also try the following QA format using "Instruct: \<prompt\>\nOutput:"
```markdown
Instruct: Write a detailed analogy between mathematics and a lighthouse.
Output: Mathematics is like a lighthouse. Just as a lighthouse guides ships safely to shore, mathematics provides a guiding light in the world of numbers and logic. It helps us navigate through complex problems and find solutions. Just as a lighthouse emits a steady beam of light, mathematics provides a consistent framework for reasoning and problem-solving. It illuminates the path to understanding and helps us make sense of the world around us.
```
where the model generates the text after "Output:".
### Chat Format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Well, have you tried creating a study schedule and sticking to it?
Alice: Yes, I have, but it doesn't seem to help much.
Bob: Hmm, maybe you should try studying in a quiet environment, like the library.
Alice: ...
```
where the model generates the text after the first "Bob:".
### Code Format:
```python
import math  # required for math.sqrt below

def print_prime(n):
"""
Print all primes between 1 and n
"""
primes = []
for num in range(2, n+1):
is_prime = True
for i in range(2, int(math.sqrt(num))+1):
if num % i == 0:
is_prime = False
break
if is_prime:
primes.append(num)
print(primes)
```
where the model generates the text after the comments.
**Notes:**
* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using `transformers<4.37.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")  # place new tensors and modules on the GPU by default

# trust_remote_code is only required on transformers < 4.37.0 (see note above)
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Limitations of Phi-2
* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: The majority of Phi-2's training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts to ensure training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Phi-2, being a base model, often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Context length: 2048 tokens
* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.
* Training tokens: 1.4T tokens
* GPUs: 96xA100-80G
* Training time: 14 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| {} | RichardErkhov/microsoft_-_phi-2-4bits | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-20T17:29:45+00:00 | [] | [] | TAGS
#transformers #safetensors #phi #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
phi-2 - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: mit
license_link: URL
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.
Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
## How to Use
Phi-2 has been integrated into 'transformers' version 4.37.0; please ensure that you are using a version equal to or higher than that.
Phi-2 is known for having an attention overflow issue (with FP16). If you are facing this issue, please enable/disable autocast on the PhiAttention.forward() function.
## Intended Uses
Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
### QA Format:
You can provide the prompt as a standalone question as follows:
where the model generates the text after ".".
To encourage the model to write more concise answers, you can also try the following QA format using "Instruct: \<prompt\>\nOutput:"
where the model generates the text after "Output:".
### Chat Format:
where the model generates the text after the first "Bob:".
### Code Format:
where the model generates the text after the comments.
Notes:
* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.
## Sample Code
## Limitations of Phi-2
* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: The majority of Phi-2's training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts to ensure training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Phi-2, being a base model, often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Context length: 2048 tokens
* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.
* Training tokens: 1.4T tokens
* GPUs: 96xA100-80G
* Training time: 14 days
### Software
* PyTorch
* DeepSpeed
* Flash-Attention
### License
The model is licensed under the MIT license.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| [
"## Model Summary\n\nPhi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.\n\nOur model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.",
"## How to Use\n\nPhi-2 has been integrated in the 'transformers' version 4.37.0, please ensure that you are using a version equal or higher than it.\n\nPhi-2 is known for having an attention overflow issue (with FP16). If you are facing this issue, please enable/disable autocast on the PhiAttention.forward() function.",
"## Intended Uses\n\nGiven the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.",
"### QA Format:\n\nYou can provide the prompt as a standalone question as follows:\n\n\nwhere the model generates the text after \".\" . \nTo encourage the model to write more concise answers, you can also try the following QA format using \"Instruct: \\<prompt\\>\\nOutput:\"\n\n\nwhere the model generates the text after \"Output:\".",
"### Chat Format:\n\n\n\nwhere the model generates the text after the first \"Bob:\".",
"### Code Format:\n\n\n\nwhere the model generates the text after the comments.\n\nNotes:\n\n* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.\n\n* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.\n\n* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.",
"## Sample Code",
"## Limitations of Phi-2\n\n* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.\n\n* Limited Scope for code: Majority of Phi-2 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.\n\n* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.\n\n* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.\n\n* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.\n\n* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.",
"## Training",
"### Model\n\n* Architecture: a Transformer-based model with next-word prediction objective\n\n* Context length: 2048 tokens\n\n* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.\n\n* Training tokens: 1.4T tokens\n\n* GPUs: 96xA100-80G\n\n* Training time: 14 days",
"### Software\n\n* PyTorch\n\n* DeepSpeed\n\n* Flash-Attention",
"### License\n\nThe model is licensed under the MIT license.",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"## Model Summary\n\nPhi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.\n\nOur model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.",
"## How to Use\n\nPhi-2 has been integrated in the 'transformers' version 4.37.0, please ensure that you are using a version equal or higher than it.\n\nPhi-2 is known for having an attention overflow issue (with FP16). If you are facing this issue, please enable/disable autocast on the PhiAttention.forward() function.",
"## Intended Uses\n\nGiven the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.",
"### QA Format:\n\nYou can provide the prompt as a standalone question as follows:\n\n\nwhere the model generates the text after \".\" . \nTo encourage the model to write more concise answers, you can also try the following QA format using \"Instruct: \\<prompt\\>\\nOutput:\"\n\n\nwhere the model generates the text after \"Output:\".",
"### Chat Format:\n\n\n\nwhere the model generates the text after the first \"Bob:\".",
"### Code Format:\n\n\n\nwhere the model generates the text after the comments.\n\nNotes:\n\n* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.\n\n* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.\n\n* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.",
"## Sample Code",
"## Limitations of Phi-2\n\n* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.\n\n* Limited Scope for code: Majority of Phi-2 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.\n\n* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.\n\n* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.\n\n* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.\n\n* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.",
"## Training",
"### Model\n\n* Architecture: a Transformer-based model with next-word prediction objective\n\n* Context length: 2048 tokens\n\n* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.\n\n* Training tokens: 1.4T tokens\n\n* GPUs: 96xA100-80G\n\n* Training time: 14 days",
"### Software\n\n* PyTorch\n\n* DeepSpeed\n\n* Flash-Attention",
"### License\n\nThe model is licensed under the MIT license.",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MODEL_EPOCHS2_D
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code reconstruction sketch follows the list):
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
- load_in_4bit: True
- load_in_8bit: False
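A minimal sketch of the same config in code, hedged: the field names above map one-to-one onto `transformers.BitsAndBytesConfig`, and the leading-underscore fields are internal flags that are covered by `load_in_4bit`/`load_in_8bit`.

```python
# Sketch only: reconstructing the quantization config listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
# Base model and adapter (repo ids from this card), loading left as a comment:
# base = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-hf", quantization_config=bnb_config)
# model = PeftModel.from_pretrained(base, "LLMLover/MODEL_EPOCHS2_D")
```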
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "NousResearch/Llama-2-7b-hf", "model-index": [{"name": "MODEL_EPOCHS2_D", "results": []}]} | LLMLover/MODEL_EPOCHS2_D | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2024-04-20T17:29:48+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-NousResearch/Llama-2-7b-hf #region-us
|
# MODEL_EPOCHS2_D
This model is a fine-tuned version of NousResearch/Llama-2-7b-hf on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
- load_in_4bit: True
- load_in_8bit: False
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| [
"# MODEL_EPOCHS2_D\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-hf on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n- load_in_4bit: True\n- load_in_8bit: False",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 2\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.4.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-NousResearch/Llama-2-7b-hf #region-us \n",
"# MODEL_EPOCHS2_D\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-hf on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n- load_in_4bit: True\n- load_in_8bit: False",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 2\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.4.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
rho-math-1b-v0.1 - bnb 4bits
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/rho-math-1b-v0.1/
Original model description:
---
license: mit
tags:
- nlp
- math
language:
- en
pipeline_tag: text-generation
---
<h1 align="center">
Rho-1: Not All Tokens Are What You Need
</h1>
<p align="center">
<a href="https://arxiv.org/abs/2404.07965"><b>[📜 Arxiv]</b></a> •
<a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> •
<a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> •
<a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/acc_vs_tokens_1b_7b.png?raw=true" width="1000">
<br>
<em>Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.</em>
</p>
## 🔥 News
- [2024/04/12] 🔥🔥🔥 Rho-Math-v0.1 models released at 🤗 HuggingFace!
  - [Rho-Math-1B](https://huggingface.co/microsoft/rho-math-1b-v0.1) and [Rho-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively — matching DeepSeekMath with only 3\% of the pretraining tokens.
- [Rho-Math-1B-Interpreter](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) is the first 1B LLM that achieves over 40% accuracy on MATH.
  - [Rho-Math-7B-Interpreter](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
- [2024/04/11] Rho-1 paper and repo released.
## 💡 Introduction
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that align with the desired distribution.
### Selective Language Modeling (SLM)
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/example.png?raw=true" width="1000">
<br>
<em>Figure 2:
<b>Upper:</b> Even an extensively filtered pretraining corpus contains token-level noise.
<b>Left:</b> Previous Causal Language Modeling (CLM) trains on all tokens.
<b>Right:</b> Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.</em>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/pipeline.png?raw=true" width="1000">
<br>
<em>Figure 3: <b>The pipeline of Selective Language Modeling.</b>
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.</em>
</p>
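To make the three steps concrete, here is a minimal PyTorch sketch of the selective loss. It is a generic reconstruction of "train only on tokens with high excess loss over the reference model", not the paper's code; `keep_ratio` is a hypothetical hyperparameter standing in for the paper's token-selection ratio.

```python
# Sketch only: Selective Language Modeling as a masked cross-entropy loss.
import torch
import torch.nn.functional as F

def slm_loss(train_logits, ref_logits, labels, keep_ratio=0.6):
    """train_logits/ref_logits: [B, T, V]; labels: [B, T]. keep_ratio is assumed."""
    V = train_logits.size(-1)
    loss_tok = F.cross_entropy(train_logits.reshape(-1, V), labels.reshape(-1), reduction="none")
    with torch.no_grad():
        ref_tok = F.cross_entropy(ref_logits.reshape(-1, V), labels.reshape(-1), reduction="none")
        excess = loss_tok.detach() - ref_tok           # Step 2: score tokens by excess loss
        k = max(1, int(keep_ratio * excess.numel()))
        keep = excess >= excess.topk(k).values.min()   # Step 3: keep the high-excess tokens
    return (loss_tok * keep.float()).sum() / keep.float().sum()
```

Gradients flow only through `loss_tok`; the reference scores and the selection mask are computed under `no_grad`, matching the pipeline's use of a frozen reference model.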
<!-- results: -->
### Evaluation Results
Base models (Few-shot CoT):
| **Model** | **Size** | **Data** | **Uniq. Token** | **Train Token** | **GSM8K** | **MATH** | **MMLU STEM** | **SAT** |
|:-----------------:|:--------:|:--------:|:---------------:|:---------------:|:---------:|:--------:|:-------------:|:--------:|
| 1-2B Base Models | | | | | | | | |
| Qwen1.5 | 1.8B | - | - | - | 36.1 | 6.8 | 31.3 | 40.6 |
| Gemma | 2.0B | - | - | - | 18.8 | 11.4 | **34.4** | 50.0 |
| DeepSeekMath | 1.3B | - | 120B | 150B | 23.8 | 13.6 | 33.1 | **56.3** |
| [Rho-Math-1B-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1) | 1.1B | OWM | 14B | 30B | **36.2** | **15.6** | 23.3 | 28.1 |
| >= 7B Base Models | | | | | | | | |
| Mistral | 7B | | - | - | 41.2 | 11.6 | 49.5 | 59.4 |
| Minerva | 540B | - | 39B | 26B | 58.8 | 33.6 | **63.9** | - |
| LLemma | 34B | PPile | 55B | 50B | 54.2 | 23.0 | 54.7 | 68.8 |
| InternLM2-Math | 20B | - | 31B | 125B | 65.4 | 30.0 | 53.1 | 71.9 |
| DeepSeekMath | 7B | - | 120B | 500B | 64.1 | **34.2** | 56.4 | **84.4** |
| [Rho-Math-7B-v0.1](https://huggingface.co/microsoft/rho-math-7b-v0.1) | 7B | OWM | 14B | 10.5B | **66.9** | 31.0 | 54.6 | **84.4** |
[Tool-integrated reasoning](https://github.com/microsoft/ToRA) (Code Interpreter):
| **Model** | **Size** | **SFT Data** | **GSM8k** | **MATH** | **SVAMP** | **ASDiv** | **MAWPS** | **TabMWP** | **GSM-Hard** | **AVG** |
|------------------------------|----------|--------------|-----------|----------|-----------|-----------|-----------|------------|--------------|----------|
| gpt4-early (pal) | - | - | 94.2 | 51.8 | 94.8 | 92.6 | 97.7 | 95.9 | 77.6 | 86.4 |
| gpt-4-turbo-2024-04-09 (cot) | - | - | - | 73.4 | - | - | - | - | - | - |
| Open-Source Small Models | | | | | | | | | | |
| MAmmoTH | 70B | MI-260k | 76.9 | 41.8 | 82.4 | - | - | - | - | - |
| ToRA | 7B | ToRA-69k | 68.8 | 40.1 | 68.2 | 73.9 | 88.8 | 42.4 | 54.6 | 62.4 |
| ToRA | 70B | ToRA-69k | 84.3 | 49.7 | **82.7** | 86.8 | 93.8 | 74.0 | **67.2** | **76.9** |
| DeepSeekMath | 7B | ToRA-69k | 79.8 | **52.0** | 80.1 | **87.1** | 93.8 | **85.8** | 63.1 | 77.4 |
| [Rho-Math-1B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) | 1B | ToRA-69k | 59.4 | 40.6 | 60.7 | 74.2 | 88.6 | 26.7 | 48.1 | 56.9 |
| [Rho-Math-7B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) | 7B | ToRA-69k | 81.3 | **51.8** | 80.8 | 85.5 | **94.5** | 70.1 | 63.1 | 75.3 |
## 🚀 Quick Start
### Evaluation
```sh
git clone [email protected]:microsoft/rho.git
cd rho-1/math-evaluation-harness
```
Base model few-shot evaluation:
```sh
bash scripts/run_eval.sh cot microsoft/rho-math-7b-v0.1
```
SFT model (code-interpreter) evaluation:
```sh
bash scripts/run_eval.sh tora microsoft/rho-math-7b-interpreter-v0.1
```
Our reproduced outputs are provided in `rho-1/outputs.zip`.
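Beyond the harness, a minimal generation sketch, hedged: it assumes the base model loads like any Llama-architecture checkpoint (consistent with its tags), and the few-shot prompt is purely illustrative.

```python
# Sketch only: plain generation with the Rho-Math base model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/rho-math-1b-v0.1")
model = AutoModelForCausalLM.from_pretrained("microsoft/rho-math-1b-v0.1", device_map="auto")

prompt = "Question: What is 17 * 24?\nAnswer:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```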
## ☕️ Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{lin2024rho1,
title={Rho-1: Not All Tokens Are What You Need},
author={Zhenghao Lin and Zhibin Gou and Yeyun Gong and Xiao Liu and Yelong Shen and Ruochen Xu and Chen Lin and Yujiu Yang and Jian Jiao and Nan Duan and Weizhu Chen},
year={2024},
eprint={2404.07965},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | RichardErkhov/microsoft_-_rho-math-1b-v0.1-4bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2404.07965",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-20T17:29:49+00:00 | [
"2404.07965"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-2404.07965 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
rho-math-1b-v0.1 - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
license: mit
tags:
* nlp
* math
language:
* en
pipeline\_tag: text-generation
---
Rho-1: Not All Tokens Are What You Need
========================================

*Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.*
News
----
* [2024/04/12] Rho-Math-v0.1 models released at HuggingFace!
+ Rho-Math-1B and Rho-Math-7B achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively — matching DeepSeekMath with only 3% of the pretraining tokens.
+ Rho-Math-1B-Interpreter is the first 1B LLM that achieves over 40% accuracy on MATH.
+ Rho-Math-7B-Interpreter achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
* [2024/04/11] Rho-1 paper and repo released.
Introduction
------------
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that align with the desired distribution.
### Selective Language Modeling (SLM)

*Figure 2:
**Upper:** Even an extensively filtered pretraining corpus contains token-level noise.
**Left:** Previous Causal Language Modeling (CLM) trains on all tokens.
**Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*

*Figure 3: **The pipeline of Selective Language Modeling.**
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*
### Evaluation Results
Base models (Few-shot CoT):
Tool-integrated reasoning (Code Interpreter):
Quick Start
-----------
### Evaluation
Base model few-shot evaluation:
SFT model (code-interpreter) evaluation:
Our reproduced outputs are provided in 'rho-1/URL'.
Citation
----------
If you find this repository helpful, please consider citing our paper:
| [
"### Selective Lanugage Modeling (SLM)\n\n\n\n\n \n\n*Figure 2:\n **Upper:** Even an extensively filtered pretraining corpus contains token-level noise.\n **Left:** Previous Causal Language Modeling (CLM) trains on all tokens.\n **Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*\n\n\n\n\n\n \n\n*Figure 3: **The pipeline of Selective Language Modeling.**\n SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.\n It involves three steps:\n (Step 1) Initially, train a reference model on high-quality data.\n (Step 2) Then, score each token's loss in a corpus using the reference model.\n (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*",
"### Evaluation Results\n\n\nBase models (Few-shot CoT):\n\n\n\nTool-integrated reasoning (Code Interpreter):\n\n\n\nQuick Start\n-----------",
"### Evaluation\n\n\nBase model few-shot evaluation:\n\n\nSFT model (code-interpreter) evaluation:\n\n\nOur reproduced outputs are provided in 'rho-1/URL'.\n\n\n️ Citation\n----------\n\n\nIf you find this repository helpful, please consider citing our paper:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-2404.07965 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Selective Lanugage Modeling (SLM)\n\n\n\n\n \n\n*Figure 2:\n **Upper:** Even an extensively filtered pretraining corpus contains token-level noise.\n **Left:** Previous Causal Language Modeling (CLM) trains on all tokens.\n **Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*\n\n\n\n\n\n \n\n*Figure 3: **The pipeline of Selective Language Modeling.**\n SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.\n It involves three steps:\n (Step 1) Initially, train a reference model on high-quality data.\n (Step 2) Then, score each token's loss in a corpus using the reference model.\n (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*",
"### Evaluation Results\n\n\nBase models (Few-shot CoT):\n\n\n\nTool-integrated reasoning (Code Interpreter):\n\n\n\nQuick Start\n-----------",
"### Evaluation\n\n\nBase model few-shot evaluation:\n\n\nSFT model (code-interpreter) evaluation:\n\n\nOur reproduced outputs are provided in 'rho-1/URL'.\n\n\n️ Citation\n----------\n\n\nIf you find this repository helpful, please consider citing our paper:"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | yuhuixu/mistral-bias-0.85 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T17:30:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi-1 - GGUF
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/phi-1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi-1.Q2_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q2_K.gguf) | Q2_K | 0.54GB |
| [phi-1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.IQ3_XS.gguf) | IQ3_XS | 0.59GB |
| [phi-1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.IQ3_S.gguf) | IQ3_S | 0.61GB |
| [phi-1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q3_K_S.gguf) | Q3_K_S | 0.61GB |
| [phi-1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.IQ3_M.gguf) | IQ3_M | 0.64GB |
| [phi-1.Q3_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q3_K.gguf) | Q3_K | 0.69GB |
| [phi-1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q3_K_M.gguf) | Q3_K_M | 0.69GB |
| [phi-1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q3_K_L.gguf) | Q3_K_L | 0.75GB |
| [phi-1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.IQ4_XS.gguf) | IQ4_XS | 0.74GB |
| [phi-1.Q4_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q4_0.gguf) | Q4_0 | 0.77GB |
| [phi-1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.IQ4_NL.gguf) | IQ4_NL | 0.78GB |
| [phi-1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q4_K_S.gguf) | Q4_K_S | 0.78GB |
| [phi-1.Q4_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q4_K.gguf) | Q4_K | 0.83GB |
| [phi-1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q4_K_M.gguf) | Q4_K_M | 0.83GB |
| [phi-1.Q4_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q4_1.gguf) | Q4_1 | 0.85GB |
| [phi-1.Q5_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q5_0.gguf) | Q5_0 | 0.92GB |
| [phi-1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q5_K_S.gguf) | Q5_K_S | 0.92GB |
| [phi-1.Q5_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q5_K.gguf) | Q5_K | 0.96GB |
| [phi-1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q5_K_M.gguf) | Q5_K_M | 0.96GB |
| [phi-1.Q5_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q5_1.gguf) | Q5_1 | 1.0GB |
| [phi-1.Q6_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1-gguf/blob/main/phi-1.Q6_K.gguf) | Q6_K | 1.09GB |
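The quantized files above are meant for llama.cpp-compatible runtimes. As a minimal, untested sketch — assuming `llama-cpp-python` and `huggingface_hub` are installed, and picking `Q4_K_M` as a common size/quality middle ground:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quantized file from this repo.
model_path = hf_hub_download(
    repo_id="RichardErkhov/microsoft_-_phi-1-gguf",
    filename="phi-1.Q4_K_M.gguf",
)

# Load the GGUF model; n_ctx is the context window, n_gpu_layers=0 keeps inference on CPU.
llm = Llama(model_path=model_path, n_ctx=2048, n_gpu_layers=0)

# phi-1 is a code model, so prompt it with a signature and docstring to complete.
prompt = 'def print_prime(n):\n    """\n    Print all primes between 1 and n\n    """'
out = llm(prompt, max_tokens=128, temperature=0.2)
print(out["choices"][0]["text"])
```

Smaller quants (Q2_K, Q3_K_S) trade accuracy for memory; larger ones (Q5_K_M, Q6_K) do the reverse.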
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/phi-1/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- code
---
## Model Summary
The language model Phi-1 is a Transformer with 1.3 billion parameters, specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from [The Stack v1.2](https://huggingface.co/datasets/bigcode/the-stack), Q&A content from [StackOverflow](https://archive.org/download/stackexchange), competition code from [code_contests](https://github.com/deepmind/code_contests), and synthetic Python textbooks and exercises generated by [gpt-3.5-turbo-0301](https://platform.openai.com/docs/models/gpt-3-5). Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), Phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.
## How to Use
Phi-1 has been integrated in the `transformers` version 4.37.0. If you are using a lower version, ensure that you are doing the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
The current `transformers` version can be verified with: `pip list | grep transformers`.
## Intended Uses
Given the nature of the training data, Phi-1 is best suited for prompts using the code format:
### Code Format:
```python
def print_prime(n):
"""
Print all primes between 1 and n
"""
for num in range(2, n+1):
for i in range(2, num):
if num % i == 0:
break
else:
print(num)
```
where the model generates the code after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops.)
**Notes:**
* Phi-1 is intended for code purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.
* Direct adoption for production coding tasks is out of the scope of this research project. As a result, Phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.
* If you are using `transformers<4.37.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run all tensor operations on the GPU by default.
torch.set_default_device("cuda")

# trust_remote_code=True is only required for transformers < 4.37.0.
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1", trust_remote_code=True)

# Prompt with a function signature and docstring; the model completes the body.
inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

# Generate up to 200 tokens (prompt included) and decode back to text.
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Limitations of Phi-1
* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses (a lightweight import check is sketched after this list).
* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.
* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.
* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.
* Limitations on Natural Language Comprehension. As a coding bot, Phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.
* Potential Biases: Phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.
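One lightweight way to act on the scope recommendation above is to statically inspect what a generated script imports before running it. A minimal sketch using Python's standard `ast` module (the whitelist mirrors the package list above):

```python
import ast

# Packages that cover 99.8% of the fine-tuning scripts, per the list above.
ALLOWED = {"typing", "math", "random", "collections", "datetime", "itertools"}

def unexpected_imports(source: str) -> set:
    """Return top-level packages imported by `source` that are outside ALLOWED."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - ALLOWED

generated = "import requests\nimport math\nprint(math.pi)"
print(unexpected_imports(generated))  # {'requests'} -> verify these APIs by hand
```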
## Warning about Security Risks
When leveraging Phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. Examples include, but are not limited to:
* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.
* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS commands, or other injection attacks.
* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.
* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).
* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.
* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.
Given these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Training tokens: 54B tokens (7B unique tokens)
* Precision: fp16
* GPUs: 8 A100
* Training time: 6 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-1/resolve/main/LICENSE).
### Citation
```bib
@article{gunasekar2023textbooks,
title={Textbooks Are All You Need},
author={Gunasekar, Suriya and Zhang, Yi and Aneja, Jyoti and Mendes, Caio C{\'e}sar Teodoro and Del Giorno, Allie and Gopi, Sivakanth and Javaheripi, Mojan and Kauffmann, Piero and de Rosa, Gustavo and Saarikivi, Olli and others},
journal={arXiv preprint arXiv:2306.11644},
year={2023}
}
```
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| {} | RichardErkhov/microsoft_-_phi-1-gguf | null | [
"gguf",
"region:us"
] | null | 2024-04-20T17:31:29+00:00 | [] | [] | TAGS
#gguf #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
phi-1 - GGUF
* Model creator: URL
* Original model: URL
Name: phi-1.Q2\_K.gguf, Quant method: Q2\_K, Size: 0.54GB
Name: phi-1.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 0.59GB
Name: phi-1.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 0.61GB
Name: phi-1.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 0.61GB
Name: phi-1.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 0.64GB
Name: phi-1.Q3\_K.gguf, Quant method: Q3\_K, Size: 0.69GB
Name: phi-1.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 0.69GB
Name: phi-1.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 0.75GB
Name: phi-1.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 0.74GB
Name: phi-1.Q4\_0.gguf, Quant method: Q4\_0, Size: 0.77GB
Name: phi-1.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 0.78GB
Name: phi-1.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 0.78GB
Name: phi-1.Q4\_K.gguf, Quant method: Q4\_K, Size: 0.83GB
Name: phi-1.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 0.83GB
Name: phi-1.Q4\_1.gguf, Quant method: Q4\_1, Size: 0.85GB
Name: phi-1.Q5\_0.gguf, Quant method: Q5\_0, Size: 0.92GB
Name: phi-1.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 0.92GB
Name: phi-1.Q5\_K.gguf, Quant method: Q5\_K, Size: 0.96GB
Name: phi-1.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 0.96GB
Name: phi-1.Q5\_1.gguf, Quant method: Q5\_1, Size: 1.0GB
Name: phi-1.Q6\_K.gguf, Quant method: Q6\_K, Size: 1.09GB
Original model description:
---------------------------
license: mit
license\_link: URL
language:
* en
pipeline\_tag: text-generation
tags:
* code
---
Model Summary
-------------
The language model Phi-1 is a Transformer with 1.3 billion parameters, specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from The Stack v1.2, Q&A content from StackOverflow, competition code from code\_contests, and synthetic Python textbooks and exercises generated by gpt-3.5-turbo-0301. Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), Phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.
How to Use
----------
Phi-1 has been integrated in the 'transformers' version 4.37.0. If you are using a lower version, ensure that you are doing the following:
* When loading the model, ensure that 'trust\_remote\_code=True' is passed as an argument of the 'from\_pretrained()' function.
The current 'transformers' version can be verified with: 'pip list | grep transformers'.
Intended Uses
-------------
Given the nature of the training data, Phi-1 is best suited for prompts using the code format:
### Code Format:
where the model generates the code after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops.)
Notes:
* Phi-1 is intended for code purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.
* Direct adoption for production coding tasks is out of the scope of this research project. As a result, Phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.
* If you are using 'transformers<4.37.0', always load the model with 'trust\_remote\_code=True' to prevent side-effects.
Sample Code
-----------
Limitations of Phi-1
--------------------
* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.
* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.
* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.
* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.
* Limitations on Natural Language Comprehension. As a coding bot, Phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.
* Potential Biases: Phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.
Warning about Security Risks
----------------------------
When leveraging Phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. Examples include, but are not limited to:
* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.
* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS commands, or other injection attacks.
* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.
* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).
* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.
* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.
Given these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.
Training
--------
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Training tokens: 54B tokens (7B unique tokens)
* Precision: fp16
* GPUs: 8 A100
* Training time: 6 days
### Software
* PyTorch
* DeepSpeed
* Flash-Attention
### License
The model is licensed under the MIT license.
Trademarks
----------
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| [
"### Code Format:\n\n\nwhere the model generates the code after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops.)\n\n\nNotes:\n\n\n* Phi-1 is intended for code purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.\n* Direct adoption for production coding tasks is out of the scope of this research project. As a result, Phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.\n* If you are using 'transformers<4.37.0', always load the model with 'trust\\_remote\\_code=True' to prevent side-effects.\n\n\nSample Code\n-----------\n\n\nLimitations of Phi-1\n--------------------\n\n\n* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.\n* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.\n* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.\n* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.\n* Limitations on Natural Language Comprehension. As a coding bot, Phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.\n* Potential Biases: Phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.\n\n\nWarning about Security Risks\n----------------------------\n\n\nWhen leveraging Phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. 
Examples include, but are not limited to:\n\n\n* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.\n* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS commands, or other injection attacks.\n* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.\n* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).\n* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.\n* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.\n\n\nGiven these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: a Transformer-based model with next-word prediction objective\n* Training tokens: 54B tokens (7B unique tokens)\n* Precision: fp16\n* GPUs: 8 A100\n* Training time: 6 days",
"### Software\n\n\n* PyTorch\n* DeepSpeed\n* Flash-Attention",
"### License\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#gguf #region-us \n",
"### Code Format:\n\n\nwhere the model generates the code after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops.)\n\n\nNotes:\n\n\n* Phi-1 is intended for code purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.\n* Direct adoption for production coding tasks is out of the scope of this research project. As a result, Phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.\n* If you are using 'transformers<4.37.0', always load the model with 'trust\\_remote\\_code=True' to prevent side-effects.\n\n\nSample Code\n-----------\n\n\nLimitations of Phi-1\n--------------------\n\n\n* Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.\n* Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.\n* Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.\n* Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.\n* Limitations on Natural Language Comprehension. As a coding bot, Phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.\n* Potential Biases: Phi-1, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.\n\n\nWarning about Security Risks\n----------------------------\n\n\nWhen leveraging Phi-1, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. 
Examples include, but are not limited to:\n\n\n* Directory Traversal: The code might fail to implement safe checks against directory traversal attacks, potentially allowing unauthorized access to sensitive files on your system.\n* Injection Attacks: There could be lapses in escaping strings properly, making the application susceptible to SQL, OS commands, or other injection attacks.\n* Misunderstanding Requirements: The model might sometimes misunderstand or oversimplify user requirements, leading to incomplete or insecure solutions.\n* Lack of Input Validation: In some cases, the model might neglect to incorporate input validation or sanitize user inputs, opening doors to attacks like Cross-Site Scripting (XSS).\n* Insecure Defaults: The model might recommend or generate code with insecure default settings, such as weak password requirements or unencrypted data transmissions.\n* Failure in Error Handling: Improper error handling can inadvertently reveal sensitive information about the system or the application's internal workings.\n\n\nGiven these potential pitfalls, and others not explicitly mentioned, it's essential to thoroughly review, test, and verify the generated code before deploying it in any application, especially those that are security-sensitive. Always consult with security experts or perform rigorous penetration testing when in doubt.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: a Transformer-based model with next-word prediction objective\n* Training tokens: 54B tokens (7B unique tokens)\n* Precision: fp16\n* GPUs: 8 A100\n* Training time: 6 days",
"### Software\n\n\n* PyTorch\n* DeepSpeed\n* Flash-Attention",
"### License\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi-1_5 - GGUF
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/phi-1_5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi-1_5.Q2_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q2_K.gguf) | Q2_K | 0.54GB |
| [phi-1_5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.IQ3_XS.gguf) | IQ3_XS | 0.59GB |
| [phi-1_5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.IQ3_S.gguf) | IQ3_S | 0.61GB |
| [phi-1_5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q3_K_S.gguf) | Q3_K_S | 0.61GB |
| [phi-1_5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.IQ3_M.gguf) | IQ3_M | 0.64GB |
| [phi-1_5.Q3_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q3_K.gguf) | Q3_K | 0.69GB |
| [phi-1_5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q3_K_M.gguf) | Q3_K_M | 0.69GB |
| [phi-1_5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q3_K_L.gguf) | Q3_K_L | 0.75GB |
| [phi-1_5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.IQ4_XS.gguf) | IQ4_XS | 0.74GB |
| [phi-1_5.Q4_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q4_0.gguf) | Q4_0 | 0.77GB |
| [phi-1_5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.IQ4_NL.gguf) | IQ4_NL | 0.78GB |
| [phi-1_5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q4_K_S.gguf) | Q4_K_S | 0.78GB |
| [phi-1_5.Q4_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q4_K.gguf) | Q4_K | 0.83GB |
| [phi-1_5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q4_K_M.gguf) | Q4_K_M | 0.83GB |
| [phi-1_5.Q4_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q4_1.gguf) | Q4_1 | 0.85GB |
| [phi-1_5.Q5_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q5_0.gguf) | Q5_0 | 0.92GB |
| [phi-1_5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q5_K_S.gguf) | Q5_K_S | 0.92GB |
| [phi-1_5.Q5_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q5_K.gguf) | Q5_K | 0.96GB |
| [phi-1_5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q5_K_M.gguf) | Q5_K_M | 0.96GB |
| [phi-1_5.Q5_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q5_1.gguf) | Q5_1 | 1.0GB |
| [phi-1_5.Q6_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-1_5-gguf/blob/main/phi-1_5.Q6_K.gguf) | Q6_K | 1.09GB |
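These files load the same way as the phi-1 quants above; the main practical difference is prompting in the QA format and truncating the trailing text a base model tends to produce. A minimal sketch, assuming `llama-cpp-python` is installed (the `stop` strings are an assumption, not part of the model):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/microsoft_-_phi-1_5-gguf",
    filename="phi-1_5.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)

# QA format: the model continues after "Answer:". A stop string keeps the
# completion to a single answer block instead of running on.
prompt = "Write a detailed analogy between mathematics and a lighthouse.\nAnswer:"
out = llm(prompt, max_tokens=150, stop=["\n\n"], temperature=0.2)
print(out["choices"][0]["text"].strip())
```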
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/phi-1_5/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
The language model Phi-1.5 is a Transformer with **1.3 billion** parameters. It was trained using the same data sources as [phi-1](https://huggingface.co/microsoft/phi-1), augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.
We **did not** fine-tune Phi-1.5 either for **instruction following or through reinforcement learning from human feedback**. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
For a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.
Phi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.
## How to Use
Phi-1.5 has been integrated in the `transformers` version 4.37.0. If you are using a lower version, ensure that you are doing the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
The current `transformers` version can be verified with: `pip list | grep transformers`.
## Intended Uses
Given the nature of the training data, Phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. Note that Phi-1.5, being a base model, often produces irrelevant text following the main answer. In the following example, we've truncated the answer for illustrative purposes only.
### QA Format:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
Answer: Mathematics is like a lighthouse, guiding us through the vast ocean of numbers and calculations. Just as a lighthouse illuminates the darkness, mathematics provides us with a clear path to navigate through complex problems. It helps us make sense of the world around us, just like a lighthouse helps ships find their way home.
```
where the model generates the text after "Answer:".
### Chat Format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Have you tried using a timer? It can help you stay on track and avoid distractions.
Alice: That's a good idea. I'll give it a try.
Charlie: Another thing that can help is to break up your study sessions into smaller chunks. It's easier to concentrate on one thing at a time.
Alice: That makes sense. I'll try that too.
Bob: And don't forget to take breaks! It's important to give your brain a rest so you can come back to your studies with a fresh perspective.
Alice: Thanks for the advice, guys. I feel more motivated now.
Charlie: No problem, Alice. We're all in this together.
Bob: Yeah, and remember that it's okay to ask for help if you need it. We're here to support each other.
```
where the model generates the text after the first "Bob:".
### Code Format:
```python
def print_prime(n):
"""
Print all primes between 1 and n
"""
primes = []
for num in range(2, n+1):
is_prime = True
for i in range(2, int(math.sqrt(num))+1):
if num % i == 0:
is_prime = False
break
if is_prime:
primes.append(num)
print(primes)
```
where the model generates the text after the comments.
**Notes:**
* Phi-1.5-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using `transformers<4.37.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run all tensor operations on the GPU by default.
torch.set_default_device("cuda")

# trust_remote_code=True is only required for transformers < 4.37.0.
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)

# Code-format prompt: a signature plus docstring that the model completes.
inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

# Generate up to 200 tokens (prompt included) and decode back to text.
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Limitations of Phi-1.5
* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite that the model is trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- We hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Dataset size: 30B tokens
* Training tokens: 150B tokens
* Precision: fp16
* GPUs: 32xA100-40G
* Training time: 8 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-1_5/resolve/main/LICENSE).
### Citation
You can find the paper at https://arxiv.org/abs/2309.05463. Please cite as:
```bib
@article{textbooks2,
title={Textbooks Are All You Need II: \textbf{phi-1.5} technical report},
author={Li, Yuanzhi and Bubeck, S{\'e}bastien and Eldan, Ronen and Del Giorno, Allie and Gunasekar, Suriya and Lee, Yin Tat},
journal={arXiv preprint arXiv:2309.05463},
year={2023}
}
```
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| {} | RichardErkhov/microsoft_-_phi-1_5-gguf | null | [
"gguf",
"arxiv:2309.05463",
"region:us"
] | null | 2024-04-20T17:31:50+00:00 | [
"2309.05463"
] | [] | TAGS
#gguf #arxiv-2309.05463 #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
phi-1\_5 - GGUF
* Model creator: URL
* Original model: URL
Name: phi-1\_5.Q2\_K.gguf, Quant method: Q2\_K, Size: 0.54GB
Name: phi-1\_5.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 0.59GB
Name: phi-1\_5.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 0.61GB
Name: phi-1\_5.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 0.61GB
Name: phi-1\_5.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 0.64GB
Name: phi-1\_5.Q3\_K.gguf, Quant method: Q3\_K, Size: 0.69GB
Name: phi-1\_5.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 0.69GB
Name: phi-1\_5.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 0.75GB
Name: phi-1\_5.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 0.74GB
Name: phi-1\_5.Q4\_0.gguf, Quant method: Q4\_0, Size: 0.77GB
Name: phi-1\_5.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 0.78GB
Name: phi-1\_5.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 0.78GB
Name: phi-1\_5.Q4\_K.gguf, Quant method: Q4\_K, Size: 0.83GB
Name: phi-1\_5.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 0.83GB
Name: phi-1\_5.Q4\_1.gguf, Quant method: Q4\_1, Size: 0.85GB
Name: phi-1\_5.Q5\_0.gguf, Quant method: Q5\_0, Size: 0.92GB
Name: phi-1\_5.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 0.92GB
Name: phi-1\_5.Q5\_K.gguf, Quant method: Q5\_K, Size: 0.96GB
Name: phi-1\_5.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 0.96GB
Name: phi-1\_5.Q5\_1.gguf, Quant method: Q5\_1, Size: 1.0GB
Name: phi-1\_5.Q6\_K.gguf, Quant method: Q6\_K, Size: 1.09GB
Original model description:
---------------------------
license: mit
license\_link: URL
language:
* en
pipeline\_tag: text-generation
tags:
* nlp
* code
---
Model Summary
-------------
The language model Phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as phi-1, augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.
We did not fine-tune Phi-1.5 either for instruction following or through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
For a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.
Phi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.
How to Use
----------
Phi-1.5 has been integrated in the 'transformers' version 4.37.0. If you are using a lower version, ensure that you are doing the following:
* When loading the model, ensure that 'trust\_remote\_code=True' is passed as an argument of the 'from\_pretrained()' function.
The current 'transformers' version can be verified with: 'pip list | grep transformers'.
Intended Uses
-------------
Given the nature of the training data, Phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. Note that Phi-1.5, being a base model, often produces irrelevant text following the main answer. In the following example, we've truncated the answer for illustrative purposes only.
### QA Format:
where the model generates the text after "Answer:".
### Chat Format:
where the model generates the text after the first "Bob:".
### Code Format:
where the model generates the text after the comments.
Notes:
* Phi-1.5-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using 'transformers<4.37.0', always load the model with 'trust\_remote\_code=True' to prevent side-effects.
Sample Code
-----------
Limitations of Phi-1.5
----------------------
* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite that the model is trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- We hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
Training
--------
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Dataset size: 30B tokens
* Training tokens: 150B tokens
* Precision: fp16
* GPUs: 32xA100-40G
* Training time: 8 days
### Software
* PyTorch
* DeepSpeed
* Flash-Attention
### License
The model is licensed under the MIT license.
You can find the paper at URL Please cite as:
Trademarks
----------
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| [
"### QA Format:\n\n\nwhere the model generates the text after \"Answer:\".",
"### Chat Format:\n\n\nwhere the model generates the text after the first \"Bob:\".",
"### Code Format:\n\n\nwhere the model generates the text after the comments.\n\n\nNotes:\n\n\n* Phi-1.5-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.\n* Phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.\n* If you are using 'transformers<4.37.0', always load the model with 'trust\\_remote\\_code=True' to prevent side-effects.\n\n\nSample Code\n-----------\n\n\nLimitations of Phi-1.5\n----------------------\n\n\n* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.\n* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.\n* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.\n* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.\n* Toxicity: Despite that the model is trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- We hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: a Transformer-based model with next-word prediction objective\n* Dataset size: 30B tokens\n* Training tokens: 150B tokens\n* Precision: fp16\n* GPUs: 32xA100-40G\n* Training time: 8 days",
"### Software\n\n\n* PyTorch\n* DeepSpeed\n* Flash-Attention",
"### License\n\n\nThe model is licensed under the MIT license.\n\n\nYou can find the paper at URL Please cite as:\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#gguf #arxiv-2309.05463 #region-us \n",
"### QA Format:\n\n\nwhere the model generates the text after \"Answer:\".",
"### Chat Format:\n\n\nwhere the model generates the text after the first \"Bob:\".",
"### Code Format:\n\n\nwhere the model generates the text after the comments.\n\n\nNotes:\n\n\n* Phi-1.5-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.\n* Phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.\n* If you are using 'transformers<4.37.0', always load the model with 'trust\\_remote\\_code=True' to prevent side-effects.\n\n\nSample Code\n-----------\n\n\nLimitations of Phi-1.5\n----------------------\n\n\n* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.\n* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.\n* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.\n* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.\n* Toxicity: Despite that the model is trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- We hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: a Transformer-based model with next-word prediction objective\n* Dataset size: 30B tokens\n* Training tokens: 150B tokens\n* Precision: fp16\n* GPUs: 32xA100-40G\n* Training time: 8 days",
"### Software\n\n\n* PyTorch\n* DeepSpeed\n* Flash-Attention",
"### License\n\n\nThe model is licensed under the MIT license.\n\n\nYou can find the paper at URL Please cite as:\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stories_model_bert_v2
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4479
- Rouge1: 0.1642
- Rouge2: 0.0794
- Rougel: 0.1413
- Rougelsum: 0.141
- Gen Len: 20.0
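Since the card ships no usage code, here is a minimal, untested sketch of running the checkpoint with the `transformers` pipeline; the task and generation length are assumptions based on the ROUGE metrics and Gen Len above:

```python
from transformers import pipeline

# BART seq2seq checkpoint; the ROUGE metrics suggest summarization-style generation.
generator = pipeline("text2text-generation", model="elzinga/stories_model_bert_v2")

story = "Once upon a time, a lighthouse keeper found a map hidden inside a bottle..."
print(generator(story, max_length=20)[0]["generated_text"])
```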
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
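For reference, the settings above correspond roughly to the following `Seq2SeqTrainingArguments`; this is a reconstruction, not the exact training script, `output_dir` is a placeholder, and the listed Adam betas and epsilon are the optimizer defaults:

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="stories_model_bert_v2",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",   # assumption: the results table reports per-epoch eval
    predict_with_generate=True,    # required to compute ROUGE on generated text
)
```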
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 45 | 2.6304 | 0.1398 | 0.0686 | 0.1234 | 0.1231 | 20.0 |
| No log | 2.0 | 90 | 2.4761 | 0.1467 | 0.0724 | 0.1245 | 0.1253 | 20.0 |
| No log | 3.0 | 135 | 2.4479 | 0.1642 | 0.0794 | 0.1413 | 0.141 | 20.0 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "facebook/bart-large", "model-index": [{"name": "stories_model_bert_v2", "results": []}]} | elzinga/stories_model_bert_v2 | null | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:31:54+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| stories\_model\_bert\_v2
========================
This model is a fine-tuned version of facebook/bart-large on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4479
* Rouge1: 0.1642
* Rouge2: 0.0794
* Rougel: 0.1413
* Rougelsum: 0.141
* Gen Len: 20.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
rho-math-1b-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/rho-math-1b-v0.1/
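
As a quick orientation, here is a minimal sketch of loading this pre-quantized 8-bit checkpoint with `transformers`; it assumes `bitsandbytes` is installed and a CUDA GPU is available, with the repo id taken from this page and an illustrative prompt:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/microsoft_-_rho-math-1b-v0.1-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The quantization config is stored in the checkpoint, so a plain load suffices.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Question: What is 12 * 17?\nAnswer:"  # illustrative prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```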
Original model description:
---
license: mit
tags:
- nlp
- math
language:
- en
pipeline_tag: text-generation
---
<h1 align="center">
Rho-1: Not All Tokens Are What You Need
</h1>
<p align="center">
<a href="https://arxiv.org/abs/2404.07965"><b>[📜 Arxiv]</b></a> •
<a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> •
<a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> •
<a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/acc_vs_tokens_1b_7b.png?raw=true" width="1000">
<br>
<em>Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.</em>
</p>
## 🔥 News
- [2024/04/12] 🔥🔥🔥 Rho-Math-v0.1 models released at 🤗 HuggingFace!
- [Rho-Math-1B](https://huggingface.co/microsoft/rho-math-1b-v0.1) and [Rho-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively — matching DeepSeekMath with only 3% of the pretraining tokens.
- [Rho-Math-1B-Interpreter](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) is the first 1B LLM that achieves over 40% accuracy on MATH.
- [Rho-Math-7B-Interpreter](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
- [2024/04/11] Rho-1 paper and repo released.
## 💡 Introduction
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that align with the desired distribution.
### Selective Language Modeling (SLM)
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/example.png?raw=true" width="1000">
<br>
<em>Figure 2:
<b>Upper:</b> Even an extensively filtered pretraining corpus contains token-level noise.
<b>Left:</b> Previous Causal Language Modeling (CLM) trains on all tokens.
<b>Right:</b> Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.</em>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/pipeline.png?raw=true" width="1000">
<br>
<em>Figure 3: <b>The pipeline of Selective Language Modeling.</b>
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.</em>
</p>
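To make the three steps concrete, the following is a conceptual sketch (not the authors' released code) of the selective loss in Step 3: each token is scored by its excess loss relative to the reference model, and only the highest-scoring fraction contributes to training. The `keep_ratio` value and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def slm_loss(logits, ref_logits, labels, keep_ratio=0.6):
    """Selective Language Modeling loss sketch: train only on the tokens
    whose loss most exceeds the frozen reference model's loss."""
    vocab = logits.size(-1)
    # Per-token cross-entropy under the model being trained.
    loss = F.cross_entropy(logits.view(-1, vocab), labels.view(-1), reduction="none")
    with torch.no_grad():
        # Per-token cross-entropy under the reference model (Step 2).
        ref_loss = F.cross_entropy(ref_logits.view(-1, vocab), labels.view(-1), reduction="none")
        excess = loss - ref_loss                      # higher = more worth learning
        k = max(1, int(keep_ratio * excess.numel()))
        mask = torch.zeros_like(excess)
        mask[torch.topk(excess, k).indices] = 1.0     # keep only top-k tokens (Step 3)
    return (loss * mask).sum() / mask.sum()           # loss over selected tokens only
```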
<!-- results: -->
### Evaluation Results
Base models (Few-shot CoT):
| **Model** | **Size** | **Data** | **Uniq. Token** | **Train Token** | **GSM8K** | **MATH** | **MMLU STEM** | **SAT** |
|:-----------------:|:--------:|:--------:|:---------------:|:---------------:|:---------:|:--------:|:-------------:|:--------:|
| 1-2B Base Models | | | | | | | | |
| Qwen1.5 | 1.8B | - | - | - | 36.1 | 6.8 | 31.3 | 40.6 |
| Gemma | 2.0B | - | - | - | 18.8 | 11.4 | **34.4** | 50.0 |
| DeepSeekMath | 1.3B | - | 120B | 150B | 23.8 | 13.6 | 33.1 | **56.3** |
| [Rho-Math-1B-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1) | 1.1B | OWM | 14B | 30B | **36.2** | **15.6** | 23.3 | 28.1 |
| >= 7B Base Models | | | | | | | | |
| Mistral | 7B | | - | - | 41.2 | 11.6 | 49.5 | 59.4 |
| Minerva | 540B | - | 39B | 26B | 58.8 | 33.6 | **63.9** | - |
| LLemma | 34B | PPile | 55B | 50B | 54.2 | 23.0 | 54.7 | 68.8 |
| InternLM2-Math | 20B | - | 31B | 125B | 65.4 | 30.0 | 53.1 | 71.9 |
| DeepSeekMath | 7B | - | 120B | 500B | 64.1 | **34.2** | 56.4 | **84.4** |
| [Rho-Math-7B-v0.1](https://huggingface.co/microsoft/rho-math-7b-v0.1) | 7B | OWM | 14B | 10.5B | **66.9** | 31.0 | 54.6 | **84.4** |
[Tool-integrated reasoning](https://github.com/microsoft/ToRA) (Code Interpreter):
| **Model** | **Size** | **SFT Data** | **GSM8k** | **MATH** | **SVAMP** | **ASDiv** | **MAWPS** | **TabMWP** | **GSM-Hard** | **AVG** |
|------------------------------|----------|--------------|-----------|----------|-----------|-----------|-----------|------------|--------------|----------|
| gpt4-early (pal) | - | - | 94.2 | 51.8 | 94.8 | 92.6 | 97.7 | 95.9 | 77.6 | 86.4 |
| gpt-4-turbo-2024-04-09 (cot) | - | - | - | 73.4 | - | - | - | - | - | - |
| Open-Source Small Models | | | | | | | | | | |
| MAmmoTH | 70B | MI-260k | 76.9 | 41.8 | 82.4 | - | - | - | - | - |
| ToRA | 7B | ToRA-69k | 68.8 | 40.1 | 68.2 | 73.9 | 88.8 | 42.4 | 54.6 | 62.4 |
| ToRA | 70B | ToRA-69k | 84.3 | 49.7 | **82.7** | 86.8 | 93.8 | 74.0 | **67.2** | **76.9** |
| DeepSeekMath | 7B | ToRA-69k | 79.8 | **52.0** | 80.1 | **87.1** | 93.8 | **85.8** | 63.1 | 77.4 |
| [Rho-Math-1B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) | 1B | ToRA-69k | 59.4 | 40.6 | 60.7 | 74.2 | 88.6 | 26.7 | 48.1 | 56.9 |
| [Rho-Math-7B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) | 7B | ToRA-69k | 81.3 | **51.8** | 80.8 | 85.5 | **94.5** | 70.1 | 63.1 | 75.3 |
## 🚀 Quick Start
### Evaluation
```sh
git clone [email protected]:microsoft/rho.git
cd rho-1/math-evaluation-harness
```
Base model few-shot evaluation:
```sh
bash scripts/run_eval.sh cot microsoft/rho-math-7b-v0.1
```
SFT model (code-interpreter) evaluation:
```sh
bash scripts/run_eval.sh tora microsoft/rho-math-7b-interpreter-v0.1
```
Our reproduced outputs are provided in `rho-1/outputs.zip`.
## ☕️ Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{lin2024rho1,
title={Rho-1: Not All Tokens Are What You Need},
author={Zhenghao Lin and Zhibin Gou and Yeyun Gong and Xiao Liu and Yelong Shen and Ruochen Xu and Chen Lin and Yujiu Yang and Jian Jiao and Nan Duan and Weizhu Chen},
year={2024},
eprint={2404.07965},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | RichardErkhov/microsoft_-_rho-math-1b-v0.1-8bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2404.07965",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-20T17:31:55+00:00 | [
"2404.07965"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-2404.07965 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
rho-math-1b-v0.1 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
license: mit
tags:
* nlp
* math
language:
* en
pipeline\_tag: text-generation
---
Rho-1: Not All Tokens Are What You Need
========================================

*Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.*
News
----
* [2024/04/12] Rho-Math-v0.1 models released at HuggingFace!
+ Rho-Math-1B and Rho-Math-7B achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively — matching DeepSeekMath with only 3% of the pretraining tokens.
+ Rho-Math-1B-Interpreter is the first 1B LLM that achieves over 40% accuracy on MATH.
+ Rho-Math-7B-Interpreter achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
* [2024/04/11] Rho-1 paper and repo released.
Introduction
------------
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that align with the desired distribution.
### Selective Language Modeling (SLM)

*Figure 2:
**Upper:** Even an extensively filtered pretraining corpus contains token-level noise.
**Left:** Previous Causal Language Modeling (CLM) trains on all tokens.
**Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*

*Figure 3: **The pipeline of Selective Language Modeling.**
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*
### Evaluation Results
Base models (Few-shot CoT):
Tool-integrated reasoning (Code Interpreter):
Quick Start
-----------
### Evaluation
Base model few-shot evaluation:
SFT model (code-interpreter) evaluation:
Our reproduced outputs are provided in 'rho-1/URL'.
Citation
----------
If you find this repository helpful, please consider citing our paper:
| [
"### Selective Lanugage Modeling (SLM)\n\n\n\n\n \n\n*Figure 2:\n **Upper:** Even an extensively filtered pretraining corpus contains token-level noise.\n **Left:** Previous Causal Language Modeling (CLM) trains on all tokens.\n **Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*\n\n\n\n\n\n \n\n*Figure 3: **The pipeline of Selective Language Modeling.**\n SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.\n It involves three steps:\n (Step 1) Initially, train a reference model on high-quality data.\n (Step 2) Then, score each token's loss in a corpus using the reference model.\n (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*",
"### Evaluation Results\n\n\nBase models (Few-shot CoT):\n\n\n\nTool-integrated reasoning (Code Interpreter):\n\n\n\nQuick Start\n-----------",
"### Evaluation\n\n\nBase model few-shot evaluation:\n\n\nSFT model (code-interpreter) evaluation:\n\n\nOur reproduced outputs are provided in 'rho-1/URL'.\n\n\n️ Citation\n----------\n\n\nIf you find this repository helpful, please consider citing our paper:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-2404.07965 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Selective Lanugage Modeling (SLM)\n\n\n\n\n \n\n*Figure 2:\n **Upper:** Even an extensively filtered pretraining corpus contains token-level noise.\n **Left:** Previous Causal Language Modeling (CLM) trains on all tokens.\n **Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*\n\n\n\n\n\n \n\n*Figure 3: **The pipeline of Selective Language Modeling.**\n SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.\n It involves three steps:\n (Step 1) Initially, train a reference model on high-quality data.\n (Step 2) Then, score each token's loss in a corpus using the reference model.\n (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*",
"### Evaluation Results\n\n\nBase models (Few-shot CoT):\n\n\n\nTool-integrated reasoning (Code Interpreter):\n\n\n\nQuick Start\n-----------",
"### Evaluation\n\n\nBase model few-shot evaluation:\n\n\nSFT model (code-interpreter) evaluation:\n\n\nOur reproduced outputs are provided in 'rho-1/URL'.\n\n\n️ Citation\n----------\n\n\nIf you find this repository helpful, please consider citing our paper:"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
rho-math-1b-v0.1 - GGUF
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/rho-math-1b-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [rho-math-1b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q2_K.gguf) | Q2_K | 0.4GB |
| [rho-math-1b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [rho-math-1b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [rho-math-1b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [rho-math-1b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [rho-math-1b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q3_K.gguf) | Q3_K | 0.51GB |
| [rho-math-1b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [rho-math-1b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [rho-math-1b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [rho-math-1b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q4_0.gguf) | Q4_0 | 0.59GB |
| [rho-math-1b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [rho-math-1b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [rho-math-1b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q4_K.gguf) | Q4_K | 0.62GB |
| [rho-math-1b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [rho-math-1b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q4_1.gguf) | Q4_1 | 0.65GB |
| [rho-math-1b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q5_0.gguf) | Q5_0 | 0.71GB |
| [rho-math-1b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [rho-math-1b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q5_K.gguf) | Q5_K | 0.73GB |
| [rho-math-1b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [rho-math-1b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q5_1.gguf) | Q5_1 | 0.77GB |
| [rho-math-1b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf/blob/main/rho-math-1b-v0.1.Q6_K.gguf) | Q6_K | 0.84GB |
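
These GGUF files can be run with llama.cpp-compatible tooling. As a hedged example, here is a minimal sketch using the `llama-cpp-python` bindings; the package must be installed separately, and the file name, context size, and prompt are illustrative assumptions:

```python
from llama_cpp import Llama

# Point model_path at whichever quant you downloaded from this repo.
llm = Llama(model_path="rho-math-1b-v0.1.Q4_K_M.gguf", n_ctx=2048)
out = llm("Question: What is 15% of 240?\nAnswer:", max_tokens=64)
print(out["choices"][0]["text"])
```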
Original model description:
---
license: mit
tags:
- nlp
- math
language:
- en
pipeline_tag: text-generation
---
<h1 align="center">
Rho-1: Not All Tokens Are What You Need
</h1>
<p align="center">
<a href="https://arxiv.org/abs/2404.07965"><b>[📜 Arxiv]</b></a> •
<a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> •
<a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> •
<a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/acc_vs_tokens_1b_7b.png?raw=true" width="1000">
<br>
<em>Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.</em>
</p>
## 🔥 News
- [2024/04/12] 🔥🔥🔥 Rho-Math-v0.1 models released at 🤗 HuggingFace!
- [Rho-Math-1B](https://huggingface.co/microsoft/rho-math-1b-v0.1) and [Rho-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively — matching DeepSeekMath with only 3% of the pretraining tokens.
- [Rho-Math-1B-Interpreter](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) is the first 1B LLM that achieves over 40% accuracy on MATH.
- [Rho-Math-7B-Interpreter](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
- [2024/04/11] Rho-1 paper and repo released.
## 💡 Introduction
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that align with the desired distribution.
### Selective Language Modeling (SLM)
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/example.png?raw=true" width="1000">
<br>
<em>Figure 2:
<b>Upper:</b> Even an extensively filtered pretraining corpus contains token-level noise.
<b>Left:</b> Previous Causal Language Modeling (CLM) trains on all tokens.
<b>Right:</b> Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.</em>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/pipeline.png?raw=true" width="1000">
<br>
<em>Figure 3: <b>The pipeline of Selective Language Modeling.</b>
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.</em>
</p>
<!-- results: -->
### Evaluation Results
Base models (Few-shot CoT):
| **Model** | **Size** | **Data** | **Uniq. Token** | **Train Token** | **GSM8K** | **MATH** | **MMLU STEM** | **SAT** |
|:-----------------:|:--------:|:--------:|:---------------:|:---------------:|:---------:|:--------:|:-------------:|:--------:|
| 1-2B Base Models | | | | | | | | |
| Qwen1.5 | 1.8B | - | - | - | 36.1 | 6.8 | 31.3 | 40.6 |
| Gemma | 2.0B | - | - | - | 18.8 | 11.4 | **34.4** | 50.0 |
| DeepSeekMath | 1.3B | - | 120B | 150B | 23.8 | 13.6 | 33.1 | **56.3** |
| [Rho-Math-1B-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1) | 1.1B | OWM | 14B | 30B | **36.2** | **15.6** | 23.3 | 28.1 |
| >= 7B Base Models | | | | | | | | |
| Mistral | 7B | | - | - | 41.2 | 11.6 | 49.5 | 59.4 |
| Minerva | 540B | - | 39B | 26B | 58.8 | 33.6 | **63.9** | - |
| LLemma | 34B | PPile | 55B | 50B | 54.2 | 23.0 | 54.7 | 68.8 |
| InternLM2-Math | 20B | - | 31B | 125B | 65.4 | 30.0 | 53.1 | 71.9 |
| DeepSeekMath | 7B | - | 120B | 500B | 64.1 | **34.2** | 56.4 | **84.4** |
| [Rho-Math-7B-v0.1](https://huggingface.co/microsoft/rho-math-7b-v0.1) | 7B | OWM | 14B | 10.5B | **66.9** | 31.0 | 54.6 | **84.4** |
[Tool-integrated reasoning](https://github.com/microsoft/ToRA) (Code Interpreter):
| **Model** | **Size** | **SFT Data** | **GSM8k** | **MATH** | **SVAMP** | **ASDiv** | **MAWPS** | **TabMWP** | **GSM-Hard** | **AVG** |
|------------------------------|----------|--------------|-----------|----------|-----------|-----------|-----------|------------|--------------|----------|
| gpt4-early (pal) | - | - | 94.2 | 51.8 | 94.8 | 92.6 | 97.7 | 95.9 | 77.6 | 86.4 |
| gpt-4-turbo-2024-04-09 (cot) | - | - | - | 73.4 | - | - | - | - | - | - |
| Open-Source Small Models | | | | | | | | | | |
| MAmmoTH | 70B | MI-260k | 76.9 | 41.8 | 82.4 | - | - | - | - | - |
| ToRA | 7B | ToRA-69k | 68.8 | 40.1 | 68.2 | 73.9 | 88.8 | 42.4 | 54.6 | 62.4 |
| ToRA | 70B | ToRA-69k | 84.3 | 49.7 | **82.7** | 86.8 | 93.8 | 74.0 | **67.2** | **76.9** |
| DeepSeekMath | 7B | ToRA-69k | 79.8 | **52.0** | 80.1 | **87.1** | 93.8 | **85.8** | 63.1 | 77.4 |
| [Rho-Math-1B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) | 1B | ToRA-69k | 59.4 | 40.6 | 60.7 | 74.2 | 88.6 | 26.7 | 48.1 | 56.9 |
| [Rho-Math-7B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) | 7B | ToRA-69k | 81.3 | **51.8** | 80.8 | 85.5 | **94.5** | 70.1 | 63.1 | 75.3 |
## 🚀 Quick Start
### Evaluation
```sh
git clone [email protected]:microsoft/rho.git
cd rho-1/math-evaluation-harness
```
Base model few-shot evaluation:
```sh
bash scripts/run_eval.sh cot microsoft/rho-math-7b-v0.1
```
SFT model (code-interpreter) evaluation:
```sh
bash scripts/run_eval.sh tora microsoft/rho-math-7b-interpreter-v0.1
```
Our reproduced outputs are provided in `rho-1/outputs.zip`.
## ☕️ Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{lin2024rho1,
title={Rho-1: Not All Tokens Are What You Need},
author={Zhenghao Lin and Zhibin Gou and Yeyun Gong and Xiao Liu and Yelong Shen and Ruochen Xu and Chen Lin and Yujiu Yang and Jian Jiao and Nan Duan and Weizhu Chen},
year={2024},
eprint={2404.07965},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | RichardErkhov/microsoft_-_rho-math-1b-v0.1-gguf | null | [
"gguf",
"arxiv:2404.07965",
"region:us"
] | null | 2024-04-20T17:34:00+00:00 | [
"2404.07965"
] | [] | TAGS
#gguf #arxiv-2404.07965 #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
rho-math-1b-v0.1 - GGUF
* Model creator: URL
* Original model: URL
| Name | Quant method | Size |
| ---- | ---- | ---- |
| rho-math-1b-v0.1.Q2\_K.gguf | Q2\_K | 0.4GB |
| rho-math-1b-v0.1.IQ3\_XS.gguf | IQ3\_XS | 0.44GB |
| rho-math-1b-v0.1.IQ3\_S.gguf | IQ3\_S | 0.47GB |
| rho-math-1b-v0.1.Q3\_K\_S.gguf | Q3\_K\_S | 0.47GB |
| rho-math-1b-v0.1.IQ3\_M.gguf | IQ3\_M | 0.48GB |
| rho-math-1b-v0.1.Q3\_K.gguf | Q3\_K | 0.51GB |
| rho-math-1b-v0.1.Q3\_K\_M.gguf | Q3\_K\_M | 0.51GB |
| rho-math-1b-v0.1.Q3\_K\_L.gguf | Q3\_K\_L | 0.55GB |
| rho-math-1b-v0.1.IQ4\_XS.gguf | IQ4\_XS | 0.57GB |
| rho-math-1b-v0.1.Q4\_0.gguf | Q4\_0 | 0.59GB |
| rho-math-1b-v0.1.IQ4\_NL.gguf | IQ4\_NL | 0.6GB |
| rho-math-1b-v0.1.Q4\_K\_S.gguf | Q4\_K\_S | 0.6GB |
| rho-math-1b-v0.1.Q4\_K.gguf | Q4\_K | 0.62GB |
| rho-math-1b-v0.1.Q4\_K\_M.gguf | Q4\_K\_M | 0.62GB |
| rho-math-1b-v0.1.Q4\_1.gguf | Q4\_1 | 0.65GB |
| rho-math-1b-v0.1.Q5\_0.gguf | Q5\_0 | 0.71GB |
| rho-math-1b-v0.1.Q5\_K\_S.gguf | Q5\_K\_S | 0.71GB |
| rho-math-1b-v0.1.Q5\_K.gguf | Q5\_K | 0.73GB |
| rho-math-1b-v0.1.Q5\_K\_M.gguf | Q5\_K\_M | 0.73GB |
| rho-math-1b-v0.1.Q5\_1.gguf | Q5\_1 | 0.77GB |
| rho-math-1b-v0.1.Q6\_K.gguf | Q6\_K | 0.84GB |
Original model description:
---------------------------
license: mit
tags:
* nlp
* math
language:
* en
pipeline\_tag: text-generation
---
Rho-1: Not All Tokens Are What You Need
========================================

*Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.*
News
----
* [2024/04/12] Rho-Math-v0.1 models released at HuggingFace!
+ Rho-Math-1B and Rho-Math-7B achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively — matching DeepSeekMath with only 3% of the pretraining tokens.
+ Rho-Math-1B-Interpreter is the first 1B LLM that achieves over 40% accuracy on MATH.
+ Rho-Math-7B-Interpreter achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
* [2024/04/11] Rho-1 paper and repo released.
Introduction
------------
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that align with the desired distribution.
### Selective Language Modeling (SLM)

*Figure 2:
**Upper:** Even an extensively filtered pretraining corpus contains token-level noise.
**Left:** Previous Causal Language Modeling (CLM) trains on all tokens.
**Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*

*Figure 3: **The pipeline of Selective Language Modeling.**
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*
### Evaluation Results
Base models (Few-shot CoT):
Tool-integrated reasoning (Code Interpreter):
Quick Start
-----------
### Evaluation
Base model few-shot evaluation:
SFT model (code-interpreter) evaluation:
Our reproduced outputs are provided in 'rho-1/URL'.
Citation
----------
If you find this repository helpful, please consider citing our paper:
| [
"### Selective Lanugage Modeling (SLM)\n\n\n\n\n \n\n*Figure 2:\n **Upper:** Even an extensively filtered pretraining corpus contains token-level noise.\n **Left:** Previous Causal Language Modeling (CLM) trains on all tokens.\n **Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*\n\n\n\n\n\n \n\n*Figure 3: **The pipeline of Selective Language Modeling.**\n SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.\n It involves three steps:\n (Step 1) Initially, train a reference model on high-quality data.\n (Step 2) Then, score each token's loss in a corpus using the reference model.\n (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*",
"### Evaluation Results\n\n\nBase models (Few-shot CoT):\n\n\n\nTool-integrated reasoning (Code Interpreter):\n\n\n\nQuick Start\n-----------",
"### Evaluation\n\n\nBase model few-shot evaluation:\n\n\nSFT model (code-interpreter) evaluation:\n\n\nOur reproduced outputs are provided in 'rho-1/URL'.\n\n\n️ Citation\n----------\n\n\nIf you find this repository helpful, please consider citing our paper:"
] | [
"TAGS\n#gguf #arxiv-2404.07965 #region-us \n",
"### Selective Lanugage Modeling (SLM)\n\n\n\n\n \n\n*Figure 2:\n **Upper:** Even an extensively filtered pretraining corpus contains token-level noise.\n **Left:** Previous Causal Language Modeling (CLM) trains on all tokens.\n **Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*\n\n\n\n\n\n \n\n*Figure 3: **The pipeline of Selective Language Modeling.**\n SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.\n It involves three steps:\n (Step 1) Initially, train a reference model on high-quality data.\n (Step 2) Then, score each token's loss in a corpus using the reference model.\n (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*",
"### Evaluation Results\n\n\nBase models (Few-shot CoT):\n\n\n\nTool-integrated reasoning (Code Interpreter):\n\n\n\nQuick Start\n-----------",
"### Evaluation\n\n\nBase model few-shot evaluation:\n\n\nSFT model (code-interpreter) evaluation:\n\n\nOur reproduced outputs are provided in 'rho-1/URL'.\n\n\n️ Citation\n----------\n\n\nIf you find this repository helpful, please consider citing our paper:"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/dumbo-stable6 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:35:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi-2 - bnb 8bits
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/phi-2/
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased nearly state-of-the-art performance among models with fewer than 13 billion parameters.
Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
## How to Use
Phi-2 has been integrated into `transformers` version 4.37.0; please ensure that you are using a version equal to or higher than that.
Phi-2 is known for having an attention overflow issue (with FP16). If you are facing this issue, please enable/disable autocast on the [PhiAttention.forward()](https://github.com/huggingface/transformers/blob/main/src/transformers/models/phi/modeling_phi.py#L306) function.
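
As one hedged illustration of that workaround (not an official fix), generation can be wrapped in a context that disables CUDA autocast; the prompt here is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float16, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
inputs = tokenizer("Instruct: Write a haiku about the sea.\nOutput:", return_tensors="pt").to("cuda")

# Disable autocast around generation to sidestep the FP16 attention overflow.
with torch.autocast("cuda", enabled=False):
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```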
## Intended Uses
Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
### QA Format:
You can provide the prompt as a standalone question as follows:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
```
where the model generates the text after ".".
To encourage the model to write more concise answers, you can also try the following QA format using "Instruct: \<prompt\>\nOutput:"
```markdown
Instruct: Write a detailed analogy between mathematics and a lighthouse.
Output: Mathematics is like a lighthouse. Just as a lighthouse guides ships safely to shore, mathematics provides a guiding light in the world of numbers and logic. It helps us navigate through complex problems and find solutions. Just as a lighthouse emits a steady beam of light, mathematics provides a consistent framework for reasoning and problem-solving. It illuminates the path to understanding and helps us make sense of the world around us.
```
where the model generates the text after "Output:".
### Chat Format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Well, have you tried creating a study schedule and sticking to it?
Alice: Yes, I have, but it doesn't seem to help much.
Bob: Hmm, maybe you should try studying in a quiet environment, like the library.
Alice: ...
```
where the model generates the text after the first "Bob:".
### Code Format:
```python
import math

def print_prime(n):
    """
    Print all primes between 1 and n
    """
    primes = []
    for num in range(2, n+1):
        is_prime = True
        for i in range(2, int(math.sqrt(num))+1):
            if num % i == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(num)
    print(primes)
```
where the model generates the text after the comments.
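For instance, assuming the `import math` shown above, a quick sanity check of the completed function:

```python
print_prime(20)
# Expected output: [2, 3, 5, 7, 11, 13, 17, 19]
```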
**Notes:**
* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using `transformers<4.37.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

# Load Phi-2 (trust_remote_code is only required for transformers < 4.37.0).
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

# Give the model the start of a function and let it complete the body.
inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Limitations of Phi-2
* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: The majority of Phi-2's code training data is in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Context length: 2048 tokens
* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.
* Training tokens: 1.4T tokens
* GPUs: 96xA100-80G
* Training time: 14 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| {} | RichardErkhov/microsoft_-_phi-2-8bits | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-20T17:35:36+00:00 | [] | [] | TAGS
#transformers #safetensors #phi #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
phi-2 - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: mit
license_link: URL
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased nearly state-of-the-art performance among models with fewer than 13 billion parameters.
Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
## How to Use
Phi-2 has been integrated into 'transformers' version 4.37.0; please ensure that you are using a version equal to or higher than that.
Phi-2 is known for having an attention overflow issue (with FP16). If you are facing this issue, please enable/disable autocast on the PhiAttention.forward() function.
## Intended Uses
Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
### QA Format:
You can provide the prompt as a standalone question as follows:
where the model generates the text after ".".
To encourage the model to write more concise answers, you can also try the following QA format using "Instruct: \<prompt\>\nOutput:"
where the model generates the text after "Output:".
### Chat Format:
where the model generates the text after the first "Bob:".
### Code Format:
where the model generates the text after the comments.
Notes:
* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.
## Sample Code
## Limitations of Phi-2
* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: The majority of Phi-2's code training data is in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Context length: 2048 tokens
* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.
* Training tokens: 1.4T tokens
* GPUs: 96xA100-80G
* Training time: 14 days
### Software
* PyTorch
* DeepSpeed
* Flash-Attention
### License
The model is licensed under the MIT license.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| [
"## Model Summary\n\nPhi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.\n\nOur model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.",
"## How to Use\n\nPhi-2 has been integrated in the 'transformers' version 4.37.0, please ensure that you are using a version equal or higher than it.\n\nPhi-2 is known for having an attention overflow issue (with FP16). If you are facing this issue, please enable/disable autocast on the PhiAttention.forward() function.",
"## Intended Uses\n\nGiven the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.",
"### QA Format:\n\nYou can provide the prompt as a standalone question as follows:\n\n\nwhere the model generates the text after \".\" . \nTo encourage the model to write more concise answers, you can also try the following QA format using \"Instruct: \\<prompt\\>\\nOutput:\"\n\n\nwhere the model generates the text after \"Output:\".",
"### Chat Format:\n\n\n\nwhere the model generates the text after the first \"Bob:\".",
"### Code Format:\n\n\n\nwhere the model generates the text after the comments.\n\nNotes:\n\n* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.\n\n* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.\n\n* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.",
"## Sample Code",
"## Limitations of Phi-2\n\n* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.\n\n* Limited Scope for code: Majority of Phi-2 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.\n\n* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.\n\n* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.\n\n* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.\n\n* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.",
"## Training",
"### Model\n\n* Architecture: a Transformer-based model with next-word prediction objective\n\n* Context length: 2048 tokens\n\n* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.\n\n* Training tokens: 1.4T tokens\n\n* GPUs: 96xA100-80G\n\n* Training time: 14 days",
"### Software\n\n* PyTorch\n\n* DeepSpeed\n\n* Flash-Attention",
"### License\n\nThe model is licensed under the MIT license.",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"## Model Summary\n\nPhi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.\n\nOur model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.",
"## How to Use\n\nPhi-2 has been integrated in the 'transformers' version 4.37.0, please ensure that you are using a version equal or higher than it.\n\nPhi-2 is known for having an attention overflow issue (with FP16). If you are facing this issue, please enable/disable autocast on the PhiAttention.forward() function.",
"## Intended Uses\n\nGiven the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.",
"### QA Format:\n\nYou can provide the prompt as a standalone question as follows:\n\n\nwhere the model generates the text after \".\" . \nTo encourage the model to write more concise answers, you can also try the following QA format using \"Instruct: \\<prompt\\>\\nOutput:\"\n\n\nwhere the model generates the text after \"Output:\".",
"### Chat Format:\n\n\n\nwhere the model generates the text after the first \"Bob:\".",
"### Code Format:\n\n\n\nwhere the model generates the text after the comments.\n\nNotes:\n\n* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.\n\n* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.\n\n* If you are using 'transformers<4.37.0', always load the model with 'trust_remote_code=True' to prevent side-effects.",
"## Sample Code",
"## Limitations of Phi-2\n\n* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.\n\n* Limited Scope for code: Majority of Phi-2 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.\n\n* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.\n\n* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.\n\n* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.\n\n* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.",
"## Training",
"### Model\n\n* Architecture: a Transformer-based model with next-word prediction objective\n\n* Context length: 2048 tokens\n\n* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.\n\n* Training tokens: 1.4T tokens\n\n* GPUs: 96xA100-80G\n\n* Training time: 14 days",
"### Software\n\n* PyTorch\n\n* DeepSpeed\n\n* Flash-Attention",
"### License\n\nThe model is licensed under the MIT license.",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8684
## Model description
More information needed
## Intended uses & limitations
More information needed
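
Pending those details, a minimal inference sketch, assuming the fine-tuned checkpoint is published under this card's repository id (`cheesebird/detr`); the image path is a placeholder:

```python
from transformers import pipeline

# Hedged example: the repository id is taken from this card's metadata,
# and "example.jpg" stands in for any local image.
detector = pipeline("object-detection", model="cheesebird/detr")
for det in detector("example.jpg"):
    print(det["label"], round(det["score"], 3), det["box"])
```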
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
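
For reference, a sketch of how these values map onto `transformers` `TrainingArguments` (the output directory name is hypothetical; the Adam betas and epsilon listed above are the `Trainer` defaults):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; only output_dir is invented.
training_args = TrainingArguments(
    output_dir="detr-finetuned",   # hypothetical name
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,                     # "Native AMP" mixed-precision training
)
```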
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2294 | 0.1 | 250 | 1.5375 |
| 1.4499 | 0.2 | 500 | 1.3111 |
| 1.2584 | 0.3 | 750 | 1.1323 |
| 1.1306 | 0.4 | 1000 | 1.0789 |
| 1.1112 | 0.5 | 1250 | 1.0349 |
| 1.0478 | 0.6 | 1500 | 0.9710 |
| 0.9709 | 0.7 | 1750 | 0.9510 |
| 0.96 | 0.8 | 2000 | 0.9061 |
| 0.9417 | 0.9 | 2250 | 0.8761 |
| 0.939 | 1.0 | 2500 | 0.8684 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "detr", "results": []}]} | cheesebird/detr | null | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:36:59+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us
| detr
====
This model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8684
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] |
null | adapter-transformers |
# Adapter `BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_1` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_MICRO_helpfulness_dataset](https://huggingface.co/datasets/BigTMiami/amazon_MICRO_helpfulness_dataset/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_1", source="hf", set_active=True)
```
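
With the adapter active, a classification forward pass might look like this (the input text is illustrative, and the label mapping depends on the trained prediction head):

```python
import torch
from transformers import AutoTokenizer

# Hedged sketch: tokenize one review and read the predicted class index
# from the adapter's classification head.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("Worked exactly as described, very helpful review.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```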
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_MICRO_helpfulness_dataset"]} | BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_1 | null | [
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_MICRO_helpfulness_dataset",
"region:us"
] | null | 2024-04-20T17:37:43+00:00 | [] | [] | TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us
|
# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_1' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
| [
"# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_1' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us \n",
"# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_1' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
null | peft | LoRA trained in 4-bit with 8k context using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B/) as the base model for 1 epoch.
The dataset used is [a modified](https://huggingface.co/datasets/mpasila/LimaRP-augmented-8k-context) version of [grimulkan/LimaRP-augmented](https://huggingface.co/datasets/grimulkan/LimaRP-augmented).
### Prompt format: ChatML
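
For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers; the role contents below are illustrative:

```
<|im_start|>system
A chat between a curious user and a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```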
# Uploaded model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en"], "license": "apache-2.0", "library_name": "peft", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "not-for-all-audiences"], "datasets": ["grimulkan/LimaRP-augmented", "mpasila/LimaRP-augmented-8k-context"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | mpasila/Llama-3-LimaRP-LoRA-8B | null | [
"peft",
"safetensors",
"text-generation-inference",
"transformers",
"unsloth",
"llama",
"trl",
"not-for-all-audiences",
"en",
"dataset:grimulkan/LimaRP-augmented",
"dataset:mpasila/LimaRP-augmented-8k-context",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-04-20T17:38:33+00:00 | [] | [
"en"
] | TAGS
#peft #safetensors #text-generation-inference #transformers #unsloth #llama #trl #not-for-all-audiences #en #dataset-grimulkan/LimaRP-augmented #dataset-mpasila/LimaRP-augmented-8k-context #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #region-us
| LoRA trained in 4-bit with 8k context using meta-llama/Meta-Llama-3-8B as the base model for 1 epoch.
The dataset used is a modified version of grimulkan/LimaRP-augmented.
### Prompt format: ChatML
# Uploaded model
- Developed by: mpasila
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [
"### Prompt format: ChatML",
"# Uploaded model\n\n- Developed by: mpasila\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#peft #safetensors #text-generation-inference #transformers #unsloth #llama #trl #not-for-all-audiences #en #dataset-grimulkan/LimaRP-augmented #dataset-mpasila/LimaRP-augmented-8k-context #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #region-us \n",
"### Prompt format: ChatML",
"# Uploaded model\n\n- Developed by: mpasila\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
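
In the meantime, a minimal sketch, assuming the checkpoint loads through the standard `transformers` auto classes (the repository id comes from this card's metadata, the prompt is illustrative, and the 4-bit tag suggests `bitsandbytes` may also be required):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "b2bp8ip/llama_mbpp_synthetic"  # from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```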
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "sft"]} | b2bp8ip/llama_mbpp_synthetic | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-20T17:39:38+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | This is a merge of [mpasila/Llama-3-LimaRP-LoRA-8B](https://huggingface.co/mpasila/Llama-3-LimaRP-LoRA-8B).
LoRA trained in 4-bit with 8k context using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B/) as the base model for 1 epoch.
The dataset used is [a modified](https://huggingface.co/datasets/mpasila/LimaRP-augmented-8k-context) version of [grimulkan/LimaRP-augmented](https://huggingface.co/datasets/grimulkan/LimaRP-augmented).
### Prompt format: ChatML
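
A minimal generation sketch, assuming the merged weights load through the standard `transformers` auto classes (the repository id comes from this card's metadata; the hand-built ChatML prompt and its contents are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpasila/Llama-3-LimaRP-8B"  # from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "<|im_start|>system\nA roleplay between two characters.<|im_end|>\n"
    "<|im_start|>user\nWhere should we set up camp for the night?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```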
# Uploaded model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft", "not-for-all-audiences"], "datasets": ["grimulkan/LimaRP-augmented", "mpasila/LimaRP-augmented-8k-context"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | mpasila/Llama-3-LimaRP-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"not-for-all-audiences",
"conversational",
"en",
"dataset:grimulkan/LimaRP-augmented",
"dataset:mpasila/LimaRP-augmented-8k-context",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:39:57+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #not-for-all-audiences #conversational #en #dataset-grimulkan/LimaRP-augmented #dataset-mpasila/LimaRP-augmented-8k-context #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| This is a merge of mpasila/Llama-3-LimaRP-LoRA-8B.
LoRA trained in 4-bit with 8k context using meta-llama/Meta-Llama-3-8B as the base model for 1 epoch.
The dataset used is a modified version of grimulkan/LimaRP-augmented.
### Prompt format: ChatML
# Uploaded model
- Developed by: mpasila
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [
"### Prompt format: ChatML",
"# Uploaded model\n\n- Developed by: mpasila\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #not-for-all-audiences #conversational #en #dataset-grimulkan/LimaRP-augmented #dataset-mpasila/LimaRP-augmented-8k-context #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Prompt format: ChatML",
"# Uploaded model\n\n- Developed by: mpasila\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers | # Llama-3-DARE-v2-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) as a base.
### Models Merged
The following models were included in the merge:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: meta-llama/Meta-Llama-3-8B
dtype: bfloat16
merge_method: dare_ties
parameters:
int8_mask: 1.0
normalize: 0.0
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
density:
- filter: embed_token
value: 0.1312973174759711
- filter: model.norm
value: 0.9822573068643683
- filter: lm_head
value: 0.9822573068643683
- filter: layers.0.
value: 0.7585980531146381
- filter: layers.1.
value: 0.561884676687423
- filter: layers.2.
value: 0.9813332103529979
- filter: layers.3.
value: 0.21930890320643737
- filter: layers.4.
value: 0.8438595439416953
- filter: layers.5.
value: 0.7505829508553786
- filter: layers.6.
value: 0.25806652766624183
- filter: layers.7.
value: 0.09440963517101368
- filter: layers.8.
value: 0.7755855137764502
- filter: layers.9.
value: 0.980153628060705
- filter: layers.10.
value: 0.19005899180649255
- filter: layers.11.
value: 0.8607970083829785
- filter: layers.12.
value: 0.397155695730673
- filter: layers.13.
value: 0.13006504372382097
- filter: layers.14.
value: 0.09190339054787032
- filter: layers.15.
value: 0.14507208971010788
- filter: layers.16.
value: 0.8783567570968138
- filter: layers.17.
value: 0.12105889524706802
- filter: layers.18.
value: 0.8613122534393256
- filter: layers.19.
value: 0.04810799757747608
- filter: layers.20.
value: 0.5576780215726541
- filter: layers.21.
value: 0.027665563208104402
- filter: layers.22.
value: 0.7853866990645277
- filter: layers.23.
value: 0.8674754718436442
- filter: layers.24.
value: 0.915444005146838
- filter: layers.25.
value: 0.8408012482479134
- filter: layers.26.
value: 0.021388326989086454
- filter: layers.27.
value: 0.8382317625625855
- filter: layers.28.
value: 0.0418890647830244
- filter: layers.29.
value: 0.16347844552378538
- filter: layers.30.
value: 0.3152357116732385
- filter: layers.31.
value: 0.24560833074353305
weight:
- filter: embed_token
value: 0.07729925635324639
- filter: model.norm.
value: 0.8178953444711945
- filter: lm_head
value: 0.8178953444711945
- filter: layers.0.
value: 0.7744866238001816
- filter: layers.1.
value: 0.9434402042495054
- filter: layers.2.
value: 0.7277347873270702
- filter: layers.3.
value: 0.7344542377948238
- filter: layers.4.
value: 0.7789550773026275
- filter: layers.5.
value: 0.9977070050825179
- filter: layers.6.
value: 0.9790331299458723
- filter: layers.7.
value: 0.936694814017198
- filter: layers.8.
value: 0.8022245662093622
- filter: layers.9.
value: 0.9207818987772527
- filter: layers.10.
value: 0.9177346250309115
- filter: layers.11.
value: 0.9705518504378707
- filter: layers.12.
value: 0.6962192295710468
- filter: layers.13.
value: 0.6601913931204128
- filter: layers.14.
value: 0.7700302171860424
- filter: layers.15.
value: 0.8764914331693985
- filter: layers.16.
value: 0.1680474521036102
- filter: layers.17.
value: 0.7252502118522168
- filter: layers.18.
value: 0.974742254778171
- filter: layers.19.
value: 0.6195931122778885
- filter: layers.20.
value: 0.8808789662609544
- filter: layers.21.
value: 0.10257515634421632
- filter: layers.22.
value: 0.04739908873803297
- filter: layers.23.
value: 0.9609916339993614
- filter: layers.24.
value: 0.919873783591387
- filter: layers.25.
value: 0.8920927224651306
- filter: layers.26.
value: 0.3163445951176943
- filter: layers.27.
value: 0.9183620719063076
- filter: layers.28.
value: 0.7696116230696886
- filter: layers.29.
value: 0.8092141535971384
- filter: layers.30.
value: 0.9831761728133218
- filter: layers.31.
value: 0.8368550180785097
- layer_range: [0, 32]
model:
model:
path: meta-llama/Meta-Llama-3-8B
parameters:
density:
- filter: embed_token
value: 0.17966590577034688
- filter: model.norm
value: 0.21103453748756532
- filter: lm_head
value: 0.21103453748756532
- filter: layers.0.
value: 0.1258111935267289
- filter: layers.1.
value: 0.14712518700082466
- filter: layers.2.
value: 0.15119918492525058
- filter: layers.3.
value: 0.2786737466113946
- filter: layers.4.
value: 0.7941421047966343
- filter: layers.5.
value: 0.14996187005944175
- filter: layers.6.
value: 0.21464495779281623
- filter: layers.7.
value: 0.9544388237818261
- filter: layers.8.
value: 0.6823542810137563
- filter: layers.9.
value: 0.12285240879708
- filter: layers.10.
value: 0.9560979622228943
- filter: layers.11.
value: 0.18774045206190226
- filter: layers.12.
value: 0.040667440606417166
- filter: layers.13.
value: 0.06577308115912563
- filter: layers.14.
value: 0.10898066770591856
- filter: layers.15.
value: 0.7988665902205875
- filter: layers.16.
value: 0.6902648842556923
- filter: layers.17.
value: 0.21929644905849668
- filter: layers.18.
value: 0.031818815520317414
- filter: layers.19.
value: 0.09373597067550557
- filter: layers.20.
value: 0.05663841107031176
- filter: layers.21.
value: 0.573566646668988
- filter: layers.22.
value: 0.2100277505470118
- filter: layers.23.
value: 0.14741793214499152
- filter: layers.24.
value: 0.13421247708500247
- filter: layers.25.
value: 0.3923811614341299
- filter: layers.26.
value: 0.7128080834275775
- filter: layers.27.
value: 0.7173419548629656
- filter: layers.28.
value: 0.25148835909796347
- filter: layers.29.
value: 0.01041679056502079
- filter: layers.30.
value: 0.04502964289907793
- filter: layers.31.
value: 0.7933910983298705
weight:
- filter: embed_token
value: 0.1379551165348899
- filter: model.norm.
value: 0.1145495237372664
- filter: lm_head
value: 0.1145495237372664
- filter: layers.0.
value: 0.058088970620673676
- filter: layers.1.
value: 0.9812897914515345
- filter: layers.2.
value: 0.7981160133946364
- filter: layers.3.
value: 0.7634141926997863
- filter: layers.4.
value: 0.16065704042361334
- filter: layers.5.
value: 0.11985625529888592
- filter: layers.6.
value: 0.12337476562239155
- filter: layers.7.
value: 0.24780333206355964
- filter: layers.8.
value: 0.031049663247481193
- filter: layers.9.
value: 0.9399411230471199
- filter: layers.10.
value: 0.059148520666240975
- filter: layers.11.
value: 0.7231307420560733
- filter: layers.12.
value: 0.8604431238597543
- filter: layers.13.
value: 0.7874665970878372
- filter: layers.14.
value: 0.18061875009463604
- filter: layers.15.
value: 0.09518784175500823
- filter: layers.16.
value: 0.27798323894923355
- filter: layers.17.
value: 0.9413932817490506
- filter: layers.18.
value: 0.2512810899514295
- filter: layers.19.
value: 0.11709350712163674
- filter: layers.20.
value: 0.6739674929530162
- filter: layers.21.
value: 0.21208528089924455
- filter: layers.22.
value: 0.811536887881144
- filter: layers.23.
value: 0.6200821391879456
- filter: layers.24.
value: 0.7663192062414923
- filter: layers.25.
value: 0.8657672217068108
- filter: layers.26.
value: 0.10630047742154969
- filter: layers.27.
value: 0.9641883915469491
- filter: layers.28.
value: 0.9807033575417018
- filter: layers.29.
value: 0.04810198321663159
- filter: layers.30.
value: 0.8249574979326807
- filter: layers.31.
value: 0.01564957234872716
```
| {"license": "other", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["meta-llama/Meta-Llama-3-8B", "meta-llama/Meta-Llama-3-8B-Instruct"], "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT"} | rmihaylov/Llama-3-DARE-v2-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T17:40:30+00:00 | [
"2311.03099",
"2306.01708"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-meta-llama/Meta-Llama-3-8B #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Llama-3-DARE-v2-8B
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the DARE TIES merge method using meta-llama/Meta-Llama-3-8B as a base.
### Models Merged
The following models were included in the merge:
* meta-llama/Meta-Llama-3-8B-Instruct
### Configuration
The following YAML configuration was used to produce this model:
| [
"# Llama-3-DARE-v2-8B\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using meta-llama/Meta-Llama-3-8B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B-Instruct",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-meta-llama/Meta-Llama-3-8B #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Llama-3-DARE-v2-8B\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using meta-llama/Meta-Llama-3-8B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B-Instruct",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-classification | transformers | ## Metrics
- loss: 2.6228
- accuracy: 0.0782
- precision: 0.1449
- recall: 0.0782
- precision_macro: 0.0927
- recall_macro: 0.1351
- macro_fpr: 0.3725
- weighted_fpr: 0.4570
- weighted_specificity: 0.9156
- macro_specificity: 0.9329
- weighted_sensitivity: 0.0782
- macro_sensitivity: 0.1351
- f1_micro: 0.0782
- f1_macro: 0.0851
- f1_weighted: 0.0959
- runtime: 22.9054
- samples_per_second: 56.3620
- steps_per_second: 7.0730
# InLegalDIFF-InLegal-legal-merge-ties-d-053-w-050
InLegalDIFF-InLegal-legal-merge-ties-d-053-w-050 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [xshubhamx/InLegalBERT](https://huggingface.co/xshubhamx/InLegalBERT)
* [xshubhamx/legal-bert-base-uncased](https://huggingface.co/xshubhamx/legal-bert-base-uncased)
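
A hedged usage sketch (the repository id comes from this card's metadata; the label set depends on the merged classification head):

```python
from transformers import pipeline

# Illustrative input sentence; labels come from the fine-tuned head.
classifier = pipeline(
    "text-classification",
    model="xshubhamx/InLegalDIFF-InLegal-legal-merge-ties-d-053-w-050",
)
print(classifier("The appellant challenges the order of the High Court."))
```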
## 🧩 Configuration
```yaml
models:
- model: xshubhamx/InLegalBERT
parameters:
density: 0.53
weight: 0.5
- model: xshubhamx/legal-bert-base-uncased
parameters:
density: 0.53
weight: 0.5
merge_method: ties
base_model: PoptropicaSahil/InLegalBERT-finetuned
parameters:
normalize: true
int8_mask: true
dtype: float16
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "xshubhamx/InLegalBERT", "xshubhamx/legal-bert-base-uncased"]} | xshubhamx/InLegalDIFF-InLegal-legal-merge-ties-d-053-w-050 | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"merge",
"mergekit",
"lazymergekit",
"xshubhamx/InLegalBERT",
"xshubhamx/legal-bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:40:42+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #merge #mergekit #lazymergekit #xshubhamx/InLegalBERT #xshubhamx/legal-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ## Metrics
- loss: 2.6228
- accuracy: 0.0782
- precision: 0.1449
- recall: 0.0782
- precision_macro: 0.0927
- recall_macro: 0.1351
- macro_fpr: 0.3725
- weighted_fpr: 0.4570
- weighted_specificity: 0.9156
- macro_specificity: 0.9329
- weighted_sensitivity: 0.0782
- macro_sensitivity: 0.1351
- f1_micro: 0.0782
- f1_macro: 0.0851
- f1_weighted: 0.0959
- runtime: 22.9054
- samples_per_second: 56.3620
- steps_per_second: 7.0730
# InLegalDIFF-InLegal-legal-merge-ties-d-053-w-050
InLegalDIFF-InLegal-legal-merge-ties-d-053-w-050 is a merge of the following models using mergekit:
* xshubhamx/InLegalBERT
* xshubhamx/legal-bert-base-uncased
## Configuration
| [
"## Metrics\n\n- loss: 2.6228\n- accuracy: 0.0782\n- precision: 0.1449\n- recall: 0.0782\n- precision_macro: 0.0927\n- recall_macro: 0.1351\n- macro_fpr: 0.3725\n- weighted_fpr: 0.4570\n- weighted_specificity: 0.9156\n- macro_specificity: 0.9329\n- weighted_sensitivity: 0.0782\n- macro_sensitivity: 0.1351\n- f1_micro: 0.0782\n- f1_macro: 0.0851\n- f1_weighted: 0.0959\n- runtime: 22.9054\n- samples_per_second: 56.3620\n- steps_per_second: 7.0730",
"# InLegalDIFF-InLegal-legal-merge-ties-d-053-w-050\n\nInLegalDIFF-InLegal-legal-merge-ties-d-053-w-050 is a merge of the following models using mergekit:\n* xshubhamx/InLegalBERT\n* xshubhamx/legal-bert-base-uncased",
"## Configuration"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #merge #mergekit #lazymergekit #xshubhamx/InLegalBERT #xshubhamx/legal-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## Metrics\n\n- loss: 2.6228\n- accuracy: 0.0782\n- precision: 0.1449\n- recall: 0.0782\n- precision_macro: 0.0927\n- recall_macro: 0.1351\n- macro_fpr: 0.3725\n- weighted_fpr: 0.4570\n- weighted_specificity: 0.9156\n- macro_specificity: 0.9329\n- weighted_sensitivity: 0.0782\n- macro_sensitivity: 0.1351\n- f1_micro: 0.0782\n- f1_macro: 0.0851\n- f1_weighted: 0.0959\n- runtime: 22.9054\n- samples_per_second: 56.3620\n- steps_per_second: 7.0730",
"# InLegalDIFF-InLegal-legal-merge-ties-d-053-w-050\n\nInLegalDIFF-InLegal-legal-merge-ties-d-053-w-050 is a merge of the following models using mergekit:\n* xshubhamx/InLegalBERT\n* xshubhamx/legal-bert-base-uncased",
"## Configuration"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
SMM4H Task 6: Model for Reddit texts
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
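
Pending the details above, a minimal sketch, assuming the checkpoint works with the standard `text-classification` pipeline (the repository id comes from this card's metadata; the example post is invented):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="1024m/SMM4H-Task6-BartL-A20")
print(classifier("Been taking this med for a week and the headaches finally stopped."))
```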
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {} | 1024m/SMM4H-Task6-BartL-A20 | null | [
"transformers",
"safetensors",
"bart",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:42:46+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bart #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
SMM4H Task 6: Model for Reddit texts
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\nSMM4H Task 6 : Model for reddit texts",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bart #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID\n\n\n\nSMM4H Task 6 : Model for reddit texts",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
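In the absence of author-provided code, standard `transformers` usage for a GPT-2-style checkpoint should apply; a hedged sketch, untested against this repo.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hotsuyuki/gpt_10B_global_step10")
model = AutoModelForCausalLM.from_pretrained("hotsuyuki/gpt_10B_global_step10")

# Illustrative prompt and generation length
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```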
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | hotsuyuki/gpt_10B_global_step10 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T17:43:56+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # assumed import; `gym.make` is used below (newer setups may use `gymnasium`)
# `load_from_hub` is the helper utility from the Hugging Face Deep RL course
model = load_from_hub(repo_id="ProgrammierAdri/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
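A hedged sketch of rolling out the greedy policy from the loaded Q-table. This assumes the Deep RL course convention of storing the table under the `"qtable"` key and the gymnasium-style `reset`/`step` API; the step cap is illustrative.

```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
for _ in range(100):  # step cap, in case the greedy policy cycles
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        break
env.close()
```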
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | ProgrammierAdri/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-20T17:45:15+00:00 | [] | [] | TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
| [
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] | [
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7binstruct_summarize
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 50
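A hedged reconstruction of how these logged values might map onto `transformers.TrainingArguments`; the `output_dir` and the warmup interpretation are assumptions, not taken from this card.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral7binstruct_summarize",  # hypothetical output directory
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,  # the card logs "warmup_steps: 0.03", which reads like a ratio
    max_steps=50,
)
```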
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7562 | 0.2193 | 25 | 1.5712 |
| 1.5697 | 0.4386 | 50 | 1.4884 |
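Beyond the run summarized above, the adapter can be loaded on top of the base model for inference; a minimal PEFT sketch, untested against this exact checkpoint.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "Nithin29/mistral7binstruct_summarize")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```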
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral7binstruct_summarize", "results": []}]} | Nithin29/mistral7binstruct_summarize | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-20T17:45:31+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
| mistral7binstruct\_summarize
============================
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4884
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 1
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: constant
* lr\_scheduler\_warmup\_steps: 0.03
* training\_steps: 50
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ProgrammierAdri/Taxler", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
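To reproduce a mean-reward style evaluation like the one reported in this card's metadata, something like the following should work; this again assumes the Deep RL course convention of a `"qtable"` key and the gymnasium-style `reset`/`step` API.

```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"])
rewards = []
for _ in range(100):  # number of evaluation episodes (illustrative)
    state, _ = env.reset()
    total, done = 0.0, False
    while not done:
        action = int(np.argmax(model["qtable"][state]))  # greedy action
        state, reward, terminated, truncated, _ = env.step(action)
        total += reward
        done = terminated or truncated
    rewards.append(total)
print(f"mean reward: {np.mean(rewards):.2f} +/- {np.std(rewards):.2f}")
```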
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxler", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.54 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]} | ProgrammierAdri/Taxler | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-20T17:48:27+00:00 | [] | [] | TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3.
## Usage
| [
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] | [
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi-2 - GGUF
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/phi-2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi-2.Q2_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q2_K.gguf) | Q2_K | 1.03GB |
| [phi-2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.IQ3_XS.gguf) | IQ3_XS | 1.12GB |
| [phi-2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.IQ3_S.gguf) | IQ3_S | 1.16GB |
| [phi-2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q3_K_S.gguf) | Q3_K_S | 1.16GB |
| [phi-2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.IQ3_M.gguf) | IQ3_M | 1.23GB |
| [phi-2.Q3_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q3_K.gguf) | Q3_K | 1.33GB |
| [phi-2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q3_K_M.gguf) | Q3_K_M | 1.33GB |
| [phi-2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q3_K_L.gguf) | Q3_K_L | 1.47GB |
| [phi-2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.IQ4_XS.gguf) | IQ4_XS | 1.43GB |
| [phi-2.Q4_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q4_0.gguf) | Q4_0 | 1.49GB |
| [phi-2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.IQ4_NL.gguf) | IQ4_NL | 1.5GB |
| [phi-2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q4_K_S.gguf) | Q4_K_S | 1.51GB |
| [phi-2.Q4_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q4_K.gguf) | Q4_K | 1.62GB |
| [phi-2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q4_K_M.gguf) | Q4_K_M | 1.62GB |
| [phi-2.Q4_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q4_1.gguf) | Q4_1 | 1.65GB |
| [phi-2.Q5_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q5_0.gguf) | Q5_0 | 1.8GB |
| [phi-2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q5_K_S.gguf) | Q5_K_S | 1.8GB |
| [phi-2.Q5_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q5_K.gguf) | Q5_K | 1.87GB |
| [phi-2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q5_K_M.gguf) | Q5_K_M | 1.87GB |
| [phi-2.Q5_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q5_1.gguf) | Q5_1 | 1.95GB |
| [phi-2.Q6_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_phi-2-gguf/blob/main/phi-2.Q6_K.gguf) | Q6_K | 2.13GB |
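Any llama.cpp-compatible runtime can load these files. A minimal sketch with the `llama-cpp-python` bindings; the file choice, context size, and prompt are illustrative.

```python
from llama_cpp import Llama

# Load one of the quantized files listed above
llm = Llama(model_path="phi-2.Q4_K_M.gguf", n_ctx=2048)
out = llm("Instruct: Write a detailed analogy between mathematics and a lighthouse.\nOutput:",
          max_tokens=128)
print(out["choices"][0]["text"])
```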
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.
Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
## How to Use
Phi-2 was integrated in `transformers` version 4.37. If you need to use an earlier version, you need to pass `trust_remote_code=True` to the `from_pretrained()` function.
Phi-2 is known for having an attention overflow issue (with FP16). If you are facing this issue, please enable/disable autocast on the [PhiAttention.forward()](https://huggingface.co/microsoft/phi-2/blob/main/modeling_phi.py#L306) function.
## Intended Uses
Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
### QA Format:
You can provide the prompt as a standalone question as follows:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
```
where the model generates the text after ".".
To encourage the model to write more concise answers, you can also try the following QA format using "Instruct: \<prompt\>\nOutput:"
```markdown
Instruct: Write a detailed analogy between mathematics and a lighthouse.
Output: Mathematics is like a lighthouse. Just as a lighthouse guides ships safely to shore, mathematics provides a guiding light in the world of numbers and logic. It helps us navigate through complex problems and find solutions. Just as a lighthouse emits a steady beam of light, mathematics provides a consistent framework for reasoning and problem-solving. It illuminates the path to understanding and helps us make sense of the world around us.
```
where the model generates the text after "Output:".
### Chat Format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Well, have you tried creating a study schedule and sticking to it?
Alice: Yes, I have, but it doesn't seem to help much.
Bob: Hmm, maybe you should try studying in a quiet environment, like the library.
Alice: ...
```
where the model generates the text after the first "Bob:".
### Code Format:
```python
import math  # assumed import; math.sqrt is used below

def print_prime(n):
"""
Print all primes between 1 and n
"""
primes = []
for num in range(2, n+1):
is_prime = True
for i in range(2, int(math.sqrt(num))+1):
if num % i == 0:
is_prime = False
break
if is_prime:
primes.append(num)
print(primes)
```
where the model generates the text after the comments.
**Notes:**
* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using `transformers<4.37.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Limitations of Phi-2
* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: The majority of Phi-2's training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Context length: 2048 tokens
* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.
* Training tokens: 1.4T tokens
* GPUs: 96xA100-80G
* Training time: 14 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| {} | RichardErkhov/microsoft_-_phi-2-gguf | null | [
"gguf",
"region:us"
] | null | 2024-04-20T17:50:04+00:00 | [] | [] | TAGS
#gguf #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
phi-2 - GGUF
* Model creator: URL
* Original model: URL
Name: phi-2.Q2\_K.gguf, Quant method: Q2\_K, Size: 1.03GB
Name: phi-2.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 1.12GB
Name: phi-2.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 1.16GB
Name: phi-2.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 1.16GB
Name: phi-2.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 1.23GB
Name: phi-2.Q3\_K.gguf, Quant method: Q3\_K, Size: 1.33GB
Name: phi-2.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 1.33GB
Name: phi-2.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 1.47GB
Name: phi-2.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 1.43GB
Name: phi-2.Q4\_0.gguf, Quant method: Q4\_0, Size: 1.49GB
Name: phi-2.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 1.5GB
Name: phi-2.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 1.51GB
Name: phi-2.Q4\_K.gguf, Quant method: Q4\_K, Size: 1.62GB
Name: phi-2.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 1.62GB
Name: phi-2.Q4\_1.gguf, Quant method: Q4\_1, Size: 1.65GB
Name: phi-2.Q5\_0.gguf, Quant method: Q5\_0, Size: 1.8GB
Name: phi-2.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 1.8GB
Name: phi-2.Q5\_K.gguf, Quant method: Q5\_K, Size: 1.87GB
Name: phi-2.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 1.87GB
Name: phi-2.Q5\_1.gguf, Quant method: Q5\_1, Size: 1.95GB
Name: phi-2.Q6\_K.gguf, Quant method: Q6\_K, Size: 2.13GB
Original model description:
---------------------------
license: mit
license\_link: URL
language:
* en
pipeline\_tag: text-generation
tags:
* nlp
* code
---
Model Summary
-------------
Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.
Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
How to Use
----------
Phi-2 was integrated in 'transformers' version 4.37. If you need to use an earlier version, you need to pass 'trust\_remote\_code=True' to the 'from\_pretrained()' function.
Phi-2 is known for having an attention overflow issue (with FP16). If you are facing this issue, please enable/disable autocast on the PhiAttention.forward() function.
Intended Uses
-------------
Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
### QA Format:
You can provide the prompt as a standalone question as follows:
where the model generates the text after ".".
To encourage the model to write more concise answers, you can also try the following QA format using "Instruct: <prompt>\nOutput:"
where the model generates the text after "Output:".
### Chat Format:
where the model generates the text after the first "Bob:".
### Code Format:
where the model generates the text after the comments.
Notes:
* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using 'transformers<4.37.0', always load the model with 'trust\_remote\_code=True' to prevent side-effects.
Sample Code
-----------
Limitations of Phi-2
--------------------
* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: The majority of Phi-2's training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.
Training
--------
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Context length: 2048 tokens
* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.
* Training tokens: 1.4T tokens
* GPUs: 96xA100-80G
* Training time: 14 days
### Software
* PyTorch
* DeepSpeed
* Flash-Attention
### License
The model is licensed under the MIT license.
Trademarks
----------
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| [
"### QA Format:\n\n\nYou can provide the prompt as a standalone question as follows:\n\n\nwhere the model generates the text after \".\" .\nTo encourage the model to write more concise answers, you can also try the following QA format using \"Instruct: <prompt>\\nOutput:\"\n\n\nwhere the model generates the text after \"Output:\".",
"### Chat Format:\n\n\nwhere the model generates the text after the first \"Bob:\".",
"### Code Format:\n\n\nwhere the model generates the text after the comments.\n\n\nNotes:\n\n\n* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.\n* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.\n* If you are using 'transformers<4.37.0', always load the model with 'trust\\_remote\\_code=True' to prevent side-effects.\n\n\nSample Code\n-----------\n\n\nLimitations of Phi-2\n--------------------\n\n\n* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.\n* Limited Scope for code: Majority of Phi-2 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.\n* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.\n* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.\n* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.\n* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: a Transformer-based model with next-word prediction objective\n* Context length: 2048 tokens\n* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.\n* Training tokens: 1.4T tokens\n* GPUs: 96xA100-80G\n* Training time: 14 days",
"### Software\n\n\n* PyTorch\n* DeepSpeed\n* Flash-Attention",
"### License\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#gguf #region-us \n",
"### QA Format:\n\n\nYou can provide the prompt as a standalone question as follows:\n\n\nwhere the model generates the text after \".\" .\nTo encourage the model to write more concise answers, you can also try the following QA format using \"Instruct: <prompt>\\nOutput:\"\n\n\nwhere the model generates the text after \"Output:\".",
"### Chat Format:\n\n\nwhere the model generates the text after the first \"Bob:\".",
"### Code Format:\n\n\nwhere the model generates the text after the comments.\n\n\nNotes:\n\n\n* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.\n* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.\n* If you are using 'transformers<4.37.0', always load the model with 'trust\\_remote\\_code=True' to prevent side-effects.\n\n\nSample Code\n-----------\n\n\nLimitations of Phi-2\n--------------------\n\n\n* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.\n* Limited Scope for code: Majority of Phi-2 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.\n* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.\n* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.\n* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.\n* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: a Transformer-based model with next-word prediction objective\n* Context length: 2048 tokens\n* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.\n* Training tokens: 1.4T tokens\n* GPUs: 96xA100-80G\n* Training time: 14 days",
"### Software\n\n\n* PyTorch\n* DeepSpeed\n* Flash-Attention",
"### License\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
text-to-image | diffusers |
# AutoTrain SDXL LoRA DreamBooth - leonickson1/urec_building_uncc
<Gallery />
## Model description
These are leonickson1/urec_building_uncc LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use photo of a sks building to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](leonickson1/urec_building_uncc/tree/main) them in the Files & versions tab.
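A hedged sketch of loading these weights with `diffusers`; device and dtype settings are illustrative, and the repo id is taken from this card's text.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("leonickson1/urec_building_uncc")

image = pipe("photo of a sks building").images[0]  # trigger phrase from this card
image.save("building.png")
```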
| {"license": "openrail++", "tags": ["autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "photo of a sks building"} | leonickson1/duke_building_uncc | null | [
"diffusers",
"autotrain",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-04-20T17:52:02+00:00 | [] | [] | TAGS
#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# AutoTrain SDXL LoRA DreamBooth - leonickson1/urec_building_uncc
<Gallery />
## Model description
These are leonickson1/urec_building_uncc LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use photo of a sks building to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
| [
"# AutoTrain SDXL LoRA DreamBooth - leonickson1/urec_building_uncc\n\n<Gallery />",
"## Model description\n\nThese are leonickson1/urec_building_uncc LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use photo of a sks building to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] | [
"TAGS\n#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# AutoTrain SDXL LoRA DreamBooth - leonickson1/urec_building_uncc\n\n<Gallery />",
"## Model description\n\nThese are leonickson1/urec_building_uncc LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use photo of a sks building to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# english_2_spanish_model_2
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2029
- Bleu: 0.8105
- Gen Len: 18.1384
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.5019 | 1.0 | 4674 | 2.2298 | 0.7656 | 18.1612 |
| 2.4834 | 2.0 | 9348 | 2.2029 | 0.8105 | 18.1384 |
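For inference, something like the following should work, assuming the standard T5 `translate English to Spanish:` task prefix was used during fine-tuning (the card does not state the prefix, so treat this as an assumption):

```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="TigerUppercut77/english_2_spanish_model_2")
result = translator("translate English to Spanish: The weather is nice today.")
print(result[0]["generated_text"])
```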
### Framework versions
- Transformers 4.31.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "google-t5/t5-small", "model-index": [{"name": "english_2_spanish_model_2", "results": []}]} | TigerUppercut77/english_2_spanish_model_2 | null | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T17:52:39+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| english\_2\_spanish\_model\_2
=============================
This model is a fine-tuned version of google-t5/t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.2029
* Bleu: 0.8105
* Gen Len: 18.1384
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.31.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.31.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.31.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
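Pending author-provided instructions, the generic `transformers` text-generation pipeline should apply to this Llama-architecture checkpoint; a hedged sketch, untested against this repo.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="deepnet/SN6-30L1")
print(generator("The quick brown fox", max_new_tokens=32)[0]["generated_text"])
```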
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | deepnet/SN6-30L1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T17:54:22+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
rho-math-7b-v0.1 - bnb 4bits
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/rho-math-7b-v0.1/
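
A minimal loading sketch for this 4-bit checkpoint (an assumption-laden example, not the quantizer's official instructions: it presumes the repo ships its bitsandbytes quantization config, so `bitsandbytes` and `accelerate` need to be installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/microsoft_-_rho-math-7b-v0.1-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the already-quantized weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```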
Original model description:
---
license: mit
tags:
- nlp
- math
language:
- en
pipeline_tag: text-generation
---
<h1 align="center">
Rho-1: Not All Tokens Are What You Need
</h1>
<p align="center">
<a href="https://arxiv.org/abs/2404.07965"><b>[📜 Arxiv]</b></a> •
<a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> •
<a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> •
<a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/acc_vs_tokens_1b_7b.png?raw=true" width="1000">
<br>
<em>Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.</em>
</p>
## 🔥 News
- [2024/04/12] 🔥🔥🔥 Rho-Math-v0.1 models released at 🤗 HuggingFace!
  - [Rho-Math-1B](https://huggingface.co/microsoft/rho-math-1b-v0.1) and [Rho-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively, matching DeepSeekMath with only 3% of the pretraining tokens.
- [Rho-Math-1B-Interpreter](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) is the first 1B LLM that achieves over 40% accuracy on MATH.
  - [Rho-Math-7B-Interpreter](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
- [2024/04/11] Rho-1 paper and repo released.
## 💡 Introduction
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
### Selective Language Modeling (SLM)
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/example.png?raw=true" width="1000">
<br>
<em>Figure 2:
<b>Upper:</b> Even an extensively filtered pretraining corpus contains token-level noise.
<b>Left:</b> Previous Causal Language Modeling (CLM) trains on all tokens.
<b>Right:</b> Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.</em>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/pipeline.png?raw=true" width="1000">
<br>
<em>Figure 3: <b>The pipeline of Selective Language Modeling.</b>
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.</em>
</p>
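
To make the selection step concrete, below is a minimal, illustrative sketch of the SLM objective (not the authors' code; tensor names, the keep ratio, and the omission of label shifting and padding are all simplifying assumptions):

```python
import torch.nn.functional as F

def slm_loss(student_logits, ref_logits, labels, keep_ratio=0.6):
    """Average cross-entropy over only the tokens with the highest excess loss."""
    # Per-token cross-entropy under the current (student) model and the reference model.
    def token_ce(logits):
        return F.cross_entropy(
            logits.view(-1, logits.size(-1)), labels.view(-1), reduction="none"
        ).view(labels.shape)

    student_ce, ref_ce = token_ce(student_logits), token_ce(ref_logits)

    # Score each token by how much harder it is for the student than for the reference.
    excess = student_ce - ref_ce

    # Keep the top keep_ratio fraction of tokens and mask the rest out of the loss.
    k = max(1, int(keep_ratio * excess.numel()))
    threshold = excess.view(-1).topk(k).values.min()
    mask = (excess >= threshold).float()
    return (student_ce * mask).sum() / mask.sum()
```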
<!-- results: -->
### Evaluation Results
Base models (Few-shot CoT):
| **Model** | **Size** | **Data** | **Uniq. Token** | **Train Token** | **GSM8K** | **MATH** | **MMLU STEM** | **SAT** |
|:-----------------:|:--------:|:--------:|:---------------:|:---------------:|:---------:|:--------:|:-------------:|:--------:|
| 1-2B Base Models | | | | | | | | |
| Qwen1.5 | 1.8B | - | - | - | 36.1 | 6.8 | 31.3 | 40.6 |
| Gemma | 2.0B | - | - | - | 18.8 | 11.4 | **34.4** | 50.0 |
| DeepSeekMath | 1.3B | - | 120B | 150B | 23.8 | 13.6 | 33.1 | **56.3** |
| [Rho-Math-1B-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1) | 1.1B | OWM | 14B | 30B | **36.2** | **15.6** | 23.3 | 28.1 |
| >= 7B Base Models | | | | | | | | |
| Mistral | 7B | | - | - | 41.2 | 11.6 | 49.5 | 59.4 |
| Minerva | 540B | - | 39B | 26B | 58.8 | 33.6 | **63.9** | - |
| LLemma | 34B | PPile | 55B | 50B | 54.2 | 23.0 | 54.7 | 68.8 |
| InternLM2-Math | 20B | - | 31B | 125B | 65.4 | 30.0 | 53.1 | 71.9 |
| DeepSeekMath | 7B | - | 120B | 500B | 64.1 | **34.2** | 56.4 | **84.4** |
| [Rho-Math-7B-v0.1](https://huggingface.co/microsoft/rho-math-7b-v0.1) | 7B | OWM | 14B | 10.5B | **66.9** | 31.0 | 54.6 | **84.4** |
[Tool-integrated reasoning](https://github.com/microsoft/ToRA) (Code Interpreter):
| **Model** | **Size** | **SFT Data** | **GSM8k** | **MATH** | **SVAMP** | **ASDiv** | **MAWPS** | **TabMWP** | **GSM-Hard** | **AVG** |
|------------------------------|----------|--------------|-----------|----------|-----------|-----------|-----------|------------|--------------|----------|
| gpt4-early (pal) | - | - | 94.2 | 51.8 | 94.8 | 92.6 | 97.7 | 95.9 | 77.6 | 86.4 |
| gpt-4-turbo-2024-04-09 (cot) | - | - | - | 73.4 | - | - | - | - | - | - |
| Open-Source Small Models | | | | | | | | | | |
| MAmmoTH | 70B | MI-260k | 76.9 | 41.8 | 82.4 | - | - | - | - | - |
| ToRA | 7B | ToRA-69k | 68.8 | 40.1 | 68.2 | 73.9 | 88.8 | 42.4 | 54.6 | 62.4 |
| ToRA | 70B | ToRA-69k | 84.3 | 49.7 | **82.7** | 86.8 | 93.8 | 74.0 | **67.2** | **76.9** |
| DeepSeekMath | 7B | ToRA-69k | 79.8 | **52.0** | 80.1 | **87.1** | 93.8 | **85.8** | 63.1 | 77.4 |
| [Rho-Math-1B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) | 1B | ToRA-69k | 59.4 | 40.6 | 60.7 | 74.2 | 88.6 | 26.7 | 48.1 | 56.9 |
| [Rho-Math-7B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) | 7B | ToRA-69k | 81.3 | **51.8** | 80.8 | 85.5 | **94.5** | 70.1 | 63.1 | 75.3 |
## 🚀 Quick Start
### Evaluation
```sh
git clone [email protected]:microsoft/rho.git
cd rho-1/math-evaluation-harness
```
Base model few-shot evaluation:
```sh
bash scripts/run_eval.sh cot microsoft/rho-math-7b-v0.1
```
SFT model (code-interpreter) evaluation:
```sh
bash scripts/run_eval.sh tora microsoft/rho-math-7b-interpreter-v0.1
```
Our reproduced outputs are provided in `rho-1/outputs.zip`.
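
Beyond the evaluation harness, here is a quick local-inference sketch using standard 🤗 Transformers APIs (the few-shot prompt is purely illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/rho-math-7b-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/rho-math-7b-v0.1", device_map="auto"
)

prompt = "Question: What is 15% of 240?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```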
## ☕️ Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{lin2024rho1,
title={Rho-1: Not All Tokens Are What You Need},
author={Zhenghao Lin and Zhibin Gou and Yeyun Gong and Xiao Liu and Yelong Shen and Ruochen Xu and Chen Lin and Yujiu Yang and Jian Jiao and Nan Duan and Weizhu Chen},
year={2024},
eprint={2404.07965},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | RichardErkhov/microsoft_-_rho-math-7b-v0.1-4bits | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:2404.07965",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-20T17:54:27+00:00 | [
"2404.07965"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-2404.07965 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
rho-math-7b-v0.1 - bnb 4bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
license: mit
tags:
* nlp
* math
language:
* en
pipeline\_tag: text-generation
---
Rho-1: Not All Tokens Are What You Need
========================================
•

*Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.*
News
----
* [2024/04/12] Rho-Math-v0.1 models released at HuggingFace!
+ Rho-Math-1B and Rho-Math-7B achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively, matching DeepSeekMath with only 3% of the pretraining tokens.
+ Rho-Math-1B-Interpreter is the first 1B LLM that achieves over 40% accuracy on MATH.
+ Rho-Math-7B-Interpreter achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
* [2024/04/11] Rho-1 paper and repo released.
Introduction
------------
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
### Selective Language Modeling (SLM)

*Figure 2:
**Upper:** Even an extensively filtered pretraining corpus contains token-level noise.
**Left:** Previous Causal Language Modeling (CLM) trains on all tokens.
**Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*

*Figure 3: **The pipeline of Selective Language Modeling.**
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*
### Evaluation Results
Base models (Few-shot CoT):
Tool-integrated reasoning (Code Interpreter):
Quick Start
-----------
### Evaluation
Base model few-shot evaluation:
SFT model (code-interpreter) evaluation:
Our reproduced outputs are provided in 'rho-1/URL'.
️ Citation
----------
If you find this repository helpful, please consider citing our paper:
| [
"### Selective Lanugage Modeling (SLM)\n\n\n\n\n \n\n*Figure 2:\n **Upper:** Even an extensively filtered pretraining corpus contains token-level noise.\n **Left:** Previous Causal Language Modeling (CLM) trains on all tokens.\n **Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*\n\n\n\n\n\n \n\n*Figure 3: **The pipeline of Selective Language Modeling.**\n SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.\n It involves three steps:\n (Step 1) Initially, train a reference model on high-quality data.\n (Step 2) Then, score each token's loss in a corpus using the reference model.\n (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*",
"### Evaluation Results\n\n\nBase models (Few-shot CoT):\n\n\n\nTool-integrated reasoning (Code Interpreter):\n\n\n\nQuick Start\n-----------",
"### Evaluation\n\n\nBase model few-shot evaluation:\n\n\nSFT model (code-interpreter) evaluation:\n\n\nOur reproduced outputs are provided in 'rho-1/URL'.\n\n\n️ Citation\n----------\n\n\nIf you find this repository helpful, please consider citing our paper:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-2404.07965 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Selective Lanugage Modeling (SLM)\n\n\n\n\n \n\n*Figure 2:\n **Upper:** Even an extensively filtered pretraining corpus contains token-level noise.\n **Left:** Previous Causal Language Modeling (CLM) trains on all tokens.\n **Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*\n\n\n\n\n\n \n\n*Figure 3: **The pipeline of Selective Language Modeling.**\n SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.\n It involves three steps:\n (Step 1) Initially, train a reference model on high-quality data.\n (Step 2) Then, score each token's loss in a corpus using the reference model.\n (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*",
"### Evaluation Results\n\n\nBase models (Few-shot CoT):\n\n\n\nTool-integrated reasoning (Code Interpreter):\n\n\n\nQuick Start\n-----------",
"### Evaluation\n\n\nBase model few-shot evaluation:\n\n\nSFT model (code-interpreter) evaluation:\n\n\nOur reproduced outputs are provided in 'rho-1/URL'.\n\n\n️ Citation\n----------\n\n\nIf you find this repository helpful, please consider citing our paper:"
] |
null | adapter-transformers |
# Adapter `BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_2` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_MICRO_helpfulness_dataset](https://huggingface.co/datasets/BigTMiami/amazon_MICRO_helpfulness_dataset/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_2", source="hf", set_active=True)
```
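
Continuing from the snippet above, the classification head can then be queried directly; a hedged sketch (the example text is illustrative, and the meaning of the class indices is an assumption, the head was trained on helpfulness labels):

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This review was really helpful to me!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted helpfulness class index
```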
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_MICRO_helpfulness_dataset"]} | BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_2 | null | [
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_MICRO_helpfulness_dataset",
"region:us"
] | null | 2024-04-20T17:55:57+00:00 | [] | [] | TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us
|
# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_2' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
| [
"# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us \n",
"# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
null | peft |
# gemma-2b-lahacks 💻
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it).
It achieves the following results on the evaluation set:
- Loss: 2.3061
## Model description 📝
This model was fine-tuned during LAHacks 2024. The intention of this model is to be able to diagnose a patient appropriately
based on the information in their previous medical records, current symptoms, age, sex, and more.
## Intended uses & limitations ⁉️
Code inference sample:
```py
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the adapter config, then stack the fine-tuned adapter on top of the base gemma-2b-it model.
config = PeftConfig.from_pretrained("LaZeAsh/gemma-2b-lahacks")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
model = PeftModel.from_pretrained(model, "LaZeAsh/gemma-2b-lahacks")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")

# Encode a prompt, generate a continuation, and decode it back to text.
prompt = "I feel cold I most likely have a "
input_ids = tokenizer.encode(prompt, return_tensors = 'pt')
output = model.generate(input_ids, max_length=50, num_return_sequences=1)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
Uses: To use Artificial Intelligence technology to diagnose a patient based on multiple parameters, ranging from their age to their
medical record.

Limitation: There's a high likelihood that the model will NOT be great at diagnosing its users; the amount of time it took to fine-tune
this model limited how much data we could train it on. With more time, a more accurate model would be expected.
## Training and evaluation data 📈
The model was trained on data from the research paper 'A New Dataset For Automatic Medical Diagnosis' by Arsène Fansi Tchango, Rishab Goel,
Zhi Wen, Julien Martel, Joumana Ghosn. The 'release_train_patients.csv' dataset was reduced from its original 1.3 million rows of data to a
mere 500-1000 rows. This was due to the time it took to fine-tune a model, which depended on how big the provided dataset was.
## Training procedure 🏋️
The fine-tuning took MULTIPLE, and I mean MULTIPLE, tries. Sometimes the dataset provided was very big, so the kernel had to be restarted multiple times.
Additionally, the model was tuned on the default data that Intel offers in their guide to fine-tuning a gemma model.
### Training hyperparameters 🔍
The following hyperparameters were used during training (a hedged trainer sketch follows this list):
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 140
- mixed_precision_training: Native AMP
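
As a rough illustration, these settings map onto a `TrainingArguments` object like the one below (a hedged reconstruction, not the exact training script; the output directory name is an assumption):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gemma-2b-lahacks",      # assumed name
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,      # gives the effective batch size of 16
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    max_steps=140,
    fp16=True,                          # native AMP mixed precision
    seed=42,
)
```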
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.3089 | 3.5714 | 100 | 2.3061 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.0.1a0+cxx11.abi
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "google/gemma-2b-it", "model-index": [{"name": "gemma-2b-lahacks", "results": []}]} | LaZeAsh/gemma-2b-lahacks | null | [
"peft",
"safetensors",
"gemma",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b-it",
"license:gemma",
"region:us"
] | null | 2024-04-20T17:56:27+00:00 | [] | [] | TAGS
#peft #safetensors #gemma #trl #sft #generated_from_trainer #base_model-google/gemma-2b-it #license-gemma #region-us
| gemma-2b-lahacks
================
This model is a fine-tuned version of google/gemma-2b-it.
It achieves the following results on the evaluation set:
* Loss: 2.3061
Model description
-----------------
This model was fine-tuned during LAHacks 2024. The intention of this model is to be able to diagnose a patient appropriately
based on the information in their previous medical records, current symptoms, age, sex, and more.
Intended uses & limitations ⁉️
------------------------------
Code inference sample:
Uses: To use Artificial Intelligence technology to diagnose a patient based on multiple parameters, ranging from their age to their
medical record.

Limitation: There's a high likelihood that the model will NOT be great at diagnosing its users; the amount of time it took to fine-tune
this model limited how much data we could train it on. With more time, a more accurate model would be expected.
Training and evaluation data
----------------------------
The model was trained on data from the research paper 'A New Dataset For Automatic Medical Diagnosis' by Arsène Fansi Tchango, Rishab Goel,
Zhi Wen, Julien Martel, Joumana Ghosn. The 'release\_train\_patients.csv' dataset was reduced from its original 1.3 million rows of data to a
mere 500-1000 rows. This was due to the time it took to fine-tune a model, which depended on how big the provided dataset was.
Training procedure ️
--------------------
The fine-tuning took MULTIPLE, and I mean MULTIPLE, tries. Sometimes the dataset provided was very big, so the kernel had to be restarted multiple times.
Additionally, the model was tuned on the default data that Intel offers in their guide to fine-tuning a gemma model.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.05
* training\_steps: 140
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0
* Pytorch 2.0.1a0+URL
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 140\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.0.1a0+URL\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #gemma #trl #sft #generated_from_trainer #base_model-google/gemma-2b-it #license-gemma #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 140\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.0.1a0+URL\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| {"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-Pixelcopter-v2", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "8.40 +/- 10.80", "name": "mean_reward", "verified": false}]}]}]} | tarpalsus/Reinforce-Pixelcopter-v2 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-04-20T17:56:33+00:00 | [] | [] | TAGS
#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing Pixelcopter-PLE-v0
This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan_sum_04-20-17-50-40
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
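
A hedged usage sketch, assuming (from the repository name) that this checkpoint was fine-tuned for summarization and that the fine-tuned weights are hosted in this repo:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="reeddg/flan_sum_04-20-17-50-40")
article = "Long article text goes here ..."  # placeholder input
print(summarizer(article, max_length=60)[0]["summary_text"])
```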
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/flan-t5-base", "model-index": [{"name": "flan_sum_04-20-17-50-40", "results": []}]} | reeddg/flan_sum_04-20-17-50-40 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"license:apache-2.0",
"region:us"
] | null | 2024-04-20T17:57:00+00:00 | [] | [] | TAGS
#tensorboard #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #region-us
|
# flan_sum_04-20-17-50-40
This model is a fine-tuned version of google/flan-t5-base on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
| [
"# flan_sum_04-20-17-50-40\n\nThis model is a fine-tuned version of google/flan-t5-base on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.31.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.13.3"
] | [
"TAGS\n#tensorboard #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #region-us \n",
"# flan_sum_04-20-17-50-40\n\nThis model is a fine-tuned version of google/flan-t5-base on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.31.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.13.3"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_16_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6983
- Wer: 0.4948
## Model description
More information needed
## Intended uses & limitations
More information needed
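
A minimal inference sketch using the standard ASR pipeline (the audio path is a placeholder; any common audio format that ffmpeg can decode should work):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="SharmilaAnanthasayanam/wav2vec2-large-xls-r-300m-hi-colab",
)
print(asr("sample_hindi_clip.wav")["text"])
```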
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 5.491 | 1.8059 | 400 | 1.3703 | 0.9679 |
| 0.6981 | 3.6117 | 800 | 0.7041 | 0.6607 |
| 0.3758 | 5.4176 | 1200 | 0.6709 | 0.6185 |
| 0.2736 | 7.2235 | 1600 | 0.7170 | 0.5925 |
| 0.2089 | 9.0293 | 2000 | 0.6445 | 0.5722 |
| 0.1686 | 10.8352 | 2400 | 0.7004 | 0.5770 |
| 0.1408 | 12.6411 | 2800 | 0.7097 | 0.5735 |
| 0.1227 | 14.4470 | 3200 | 0.6763 | 0.5533 |
| 0.1056 | 16.2528 | 3600 | 0.7245 | 0.5484 |
| 0.0923 | 18.0587 | 4000 | 0.7198 | 0.5480 |
| 0.083 | 19.8646 | 4400 | 0.6568 | 0.5251 |
| 0.0742 | 21.6704 | 4800 | 0.7183 | 0.5252 |
| 0.0647 | 23.4763 | 5200 | 0.7306 | 0.5141 |
| 0.0574 | 25.2822 | 5600 | 0.7236 | 0.5063 |
| 0.052 | 27.0880 | 6000 | 0.7234 | 0.4969 |
| 0.0478 | 28.8939 | 6400 | 0.6983 | 0.4948 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_16_0"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-large-xls-r-300m-hi-colab", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "common_voice_16_0", "type": "common_voice_16_0", "config": "hi", "split": "test", "args": "hi"}, "metrics": [{"type": "wer", "value": 0.4948465637275874, "name": "Wer"}]}]}]} | SharmilaAnanthasayanam/wav2vec2-large-xls-r-300m-hi-colab | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_16_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:57:06+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_16_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-large-xls-r-300m-hi-colab
==================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice\_16\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6983
* Wer: 0.4948
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_16_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | fastai |
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
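
Until this section is filled in, a hedged loading sketch (assuming the repo was pushed with `push_to_hub_fastai`; the input below is a placeholder whose type depends on the learner):

```python
from huggingface_hub import from_pretrained_fastai

learn = from_pretrained_fastai("osrojo/futurama")
pred, pred_idx, probs = learn.predict("path/to/image.jpg")
print(pred)
```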
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
| {"tags": ["fastai"]} | osrojo/futurama | null | [
"fastai",
"has_space",
"region:us"
] | null | 2024-04-20T17:57:07+00:00 | [] | [] | TAGS
#fastai #has_space #region-us
|
# Amazing!
Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the documentation here)!
2. Create a demo in Gradio or Streamlit using Spaces (documentation here).
3. Join the fastai community on the Fastai Discord!
Greetings fellow fastlearner ! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
| [
"# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!",
"# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---",
"# Model card",
"## Model description\nMore information needed",
"## Intended uses & limitations\nMore information needed",
"## Training and evaluation data\nMore information needed"
] | [
"TAGS\n#fastai #has_space #region-us \n",
"# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!",
"# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---",
"# Model card",
"## Model description\nMore information needed",
"## Intended uses & limitations\nMore information needed",
"## Training and evaluation data\nMore information needed"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
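
In the meantime, a minimal getting-started sketch based only on the repo tags (BERT, text classification); the example sentence is illustrative and the label names are unknown, so only the predicted class index is printed:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "xshubhamx/REAL-InLegalBERT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The appellant filed a petition before the High Court.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```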
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | xshubhamx/REAL-InLegalBERT | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:58:14+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | tdooms/TinyStories-1-1024 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T17:59:37+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# queer-bert
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3661
- Accuracy: 0.8438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3555 | 1.0 | 6240 | 0.3386 | 0.8393 |
| 0.2497 | 2.0 | 12480 | 0.3661 | 0.8438 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google-bert/bert-base-cased", "model-index": [{"name": "queer-bert", "results": []}]} | savinda99/queer-bert | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:00:01+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| queer-bert
==========
This model is a fine-tuned version of google-bert/bert-base-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3661
* Accuracy: 0.8438
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
Thanks to mradermacher, GGUF quants (Q2_K-Q8_K and IQ3_XS-IQ4_XS) of this model are also available here: https://huggingface.co/mradermacher/Llama3-OrpoSmaug-Slerp-8B-GGUF
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Llama3 Orpo Smaug - Slerp
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
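For intuition, SLERP interpolates along the arc between two weight tensors rather than the straight line used by plain averaging. Below is a minimal sketch, not mergekit's actual implementation; the flattening of tensors and the fallback to linear interpolation for near-parallel tensors are illustrative choices:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors of equal shape."""
    a_flat, b_flat = a.ravel(), b.ravel()
    dot = np.clip(
        (a_flat @ b_flat) / (np.linalg.norm(a_flat) * np.linalg.norm(b_flat) + eps),
        -1.0, 1.0,
    )
    omega = np.arccos(dot)          # angle between the two weight directions
    if np.sin(omega) < eps:         # nearly parallel: fall back to plain lerp
        return (1 - t) * a + t * b
    coeff_a = np.sin((1 - t) * omega) / np.sin(omega)
    coeff_b = np.sin(t * omega) / np.sin(omega)
    return (coeff_a * a_flat + coeff_b * b_flat).reshape(a.shape)

# t follows the per-filter schedule in the YAML below; t=0.5 weights both parents equally
merged = slerp(0.5, np.random.randn(4, 4), np.random.randn(4, 4))
```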
### Models Merged
The following models were included in the merge:
* H:\merge\Llama-3-Smaug-8B
* H:\merge\OrpoLlama-3-8b
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: H:\merge\Llama-3-Smaug-8B
layer_range: [0, 32]
- model: H:\merge\OrpoLlama-3-8b
layer_range: [0, 32]
merge_method: slerp
base_model: H:\merge\Llama-3-Smaug-8B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
| {} | WesPro/Llama3-OrpoSmaug-Slerp-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:00:24+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Thanks to mradermacher, there are also GGUF quants (Q2_K-Q8_K & IQ3_XS-IQ4_XS) for this model available here: URL
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Llama3 Orpo Smaug - Slerp
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* H:\merge\Llama-3-Smaug-8B
* H:\merge\OrpoLlama-3-8b
### Configuration
The following YAML configuration was used to produce this model:
| [
"# Llama3 Orpo Smaug - Slerp\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* H:\\merge\\Llama-3-Smaug-8B\n* H:\\merge\\OrpoLlama-3-8b",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Llama3 Orpo Smaug - Slerp\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* H:\\merge\\Llama-3-Smaug-8B\n* H:\\merge\\OrpoLlama-3-8b",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
feature-extraction | sentence-transformers |
**News**
**[2024-04-06]** Released the [puff](https://huggingface.co/infgrad/puff-base-v1) series of models, **built specifically for retrieval and semantic matching, with extra attention to generalization and performance on private general-purpose test sets; variable embedding dimensions; bilingual Chinese/English**.
**[2024-02-27]** Released stella-mrl-large-zh-v3.5-1792d, which supports **variable embedding dimensions**.
**[2024-02-17]** Released the stella v3 series, a dialogue encoding model, and the related training data.
**[2023-10-19]** Released stella-base-en-v2. Easy to use, **no prefix text required**.
**[2023-10-12]** Released stella-base-zh-v2 and stella-large-zh-v2: better results, easy to use, **no prefix text required**.
**[2023-09-11]** Released stella-base-zh and stella-large-zh.
Visit [my profile](https://huggingface.co/infgrad) for the latest models; feedback is very welcome!
## stella model
stella is a general-purpose text-encoding model family, consisting mainly of the following models:
| Model Name | Model Size (GB) | Dimension | Sequence Length | Language | Need instruction for retrieval? |
|:------------------:|:---------------:|:---------:|:---------------:|:--------:|:-------------------------------:|
| stella-base-en-v2 | 0.2 | 768 | 512 | English | No |
| stella-large-zh-v2 | 0.65 | 1024 | 1024 | Chinese | No |
| stella-base-zh-v2 | 0.2 | 768 | 1024 | Chinese | No |
| stella-large-zh | 0.65 | 1024 | 1024 | Chinese | Yes |
| stella-base-zh | 0.2 | 768 | 1024 | Chinese | Yes |
The full training approach and process are documented (in Chinese) in [blog post 1](https://zhuanlan.zhihu.com/p/655322183) and [blog post 2](https://zhuanlan.zhihu.com/p/662209559); you are welcome to read and discuss them.
**Training data:**
1. Open-source data (wudao_base_200GB [1], m3e [2], and simclue [3]), with an emphasis on texts longer than 512
2. A batch of (question, paragraph) and (sentence, paragraph) pairs constructed on a general corpus with an LLM
**Training methods** (a minimal sketch of the core losses follows this list):
1. Contrastive learning loss
2. Contrastive learning loss with hard negatives (mined separately with bm25 and with vector retrieval)
3. EWC (Elastic Weight Consolidation) [4]
4. cosent loss [5]
5. One data iterator per data type, with each loss computed and applied separately
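For illustration, here is a minimal sketch of objectives 1, 2, and 4 above, assuming L2-normalized embeddings; EWC, the per-type iterators, and all engineering details are omitted, and this is not the project's released training code:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q, p, hard_neg, temperature=0.05):
    """InfoNCE with in-batch negatives plus mined hard negatives.
    q, p, hard_neg: (B, D) L2-normalized query/passage/hard-negative embeddings."""
    candidates = torch.cat([p, hard_neg], dim=0)        # (2B, D) candidate pool
    logits = q @ candidates.T / temperature             # (B, 2B) similarity logits
    labels = torch.arange(q.size(0), device=q.device)   # positive for query i is passage i
    return F.cross_entropy(logits, labels)

def cosent_loss(a, b, labels, scale=20.0):
    """CoSENT: every positive pair's cosine should exceed every negative pair's.
    a, b: (N, D) pair embeddings; labels: (N,) tensor, 1 = similar, 0 = dissimilar."""
    cos = F.cosine_similarity(a, b)                     # (N,) pair similarities
    diff = scale * (cos[None, :] - cos[:, None])        # diff[i, j] = scale * (cos_j - cos_i)
    diff = diff[labels[:, None] > labels[None, :]]      # keep pairs: i positive, j negative
    zero = torch.zeros(1, device=a.device)
    return torch.logsumexp(torch.cat([zero, diff]), dim=0)  # log(1 + sum exp(...))
```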
Building on the stella models, stella-v2 uses more training data and removes the leading instruction (such as piccolo's `查询:` / `结果:` and e5's `query:` / `passage:`) through knowledge distillation and related methods.
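The card does not spell out the distillation objective, so here is a hedged sketch of one plausible setup; the `teacher_encode`/`student_encode` callables and the MSE loss are illustrative assumptions, not the released method:

```python
import torch
import torch.nn.functional as F

def prefix_distill_loss(student_encode, teacher_encode, texts, prefix="query: "):
    """The teacher embeds the instruction-prefixed text; the student learns to
    produce the same embedding from the raw text, making the prefix unnecessary."""
    with torch.no_grad():
        target = teacher_encode([prefix + t for t in texts])  # (B, D) teacher targets
    pred = student_encode(texts)                              # (B, D) student outputs
    return F.mse_loss(pred, target)
```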
**Initial weights:**\
stella-base-zh and stella-large-zh use piccolo-base-zh [6] and piccolo-large-zh as their base models, respectively; positions 512-1024 of the position embedding are initialized with hierarchically decomposed position encoding [7].\
Thanks to SenseTime Research for open-sourcing the [piccolo series of models](https://huggingface.co/sensenova).
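Hierarchically decomposed position encoding [7] expresses every extended position as a convex mix of two trained positions. A minimal sketch under the usual formulation (the default `alpha = 0.4` comes from the reference; this is not the project's released code):

```python
import torch

def extend_position_embeddings(pe: torch.Tensor, new_len: int, alpha: float = 0.4) -> torch.Tensor:
    """Extend (old_len, D) position embeddings to (new_len, D), new_len <= old_len ** 2.
    Position k is rebuilt as alpha * u[k // old_len] + (1 - alpha) * u[k % old_len],
    where u is solved so that the first old_len positions are reproduced exactly."""
    old_len = pe.size(0)
    u = (pe - alpha * pe[:1]) / (1 - alpha)   # u[0] == pe[0], so low positions are unchanged
    idx = torch.arange(new_len)
    return alpha * u[idx // old_len] + (1 - alpha) * u[idx % old_len]
```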
## Metric
#### C-MTEB leaderboard (Chinese)
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (35) | Classification (9) | Clustering (4) | Pair Classification (2) | Reranking (4) | Retrieval (8) | STS (8) |
|:------------------:|:---------------:|:---------:|:---------------:|:------------:|:------------------:|:--------------:|:-----------------------:|:-------------:|:-------------:|:-------:|
| stella-large-zh-v2 | 0.65 | 1024 | 1024 | 65.13 | 69.05 | 49.16 | 82.68 | 66.41 | 70.14 | 58.66 |
| stella-base-zh-v2 | 0.2 | 768 | 1024 | 64.36 | 68.29 | 49.4 | 79.95 | 66.1 | 70.08 | 56.92 |
| stella-large-zh | 0.65 | 1024 | 1024 | 64.54 | 67.62 | 48.65 | 78.72 | 65.98 | 71.02 | 58.3 |
| stella-base-zh | 0.2 | 768 | 1024 | 64.16 | 67.77 | 48.7 | 76.09 | 66.95 | 71.07 | 56.54 |
#### MTEB leaderboard (English)
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Classification (12) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) |
|:-----------------:|:---------------:|:---------:|:---------------:|:------------:|:-------------------:|:---------------:|:-----------------------:|:-------------:|:--------------:|:--------:|:------------------:|
| stella-base-en-v2 | 0.2 | 768 | 512 | 62.61 | 75.28 | 44.9 | 86.45 | 58.77 | 50.1 | 83.02 | 32.52 |
#### Reproduce our results
**C-MTEB:**
```python
import torch
import numpy as np
from typing import List
from mteb import MTEB
from sentence_transformers import SentenceTransformer


class FastTextEncoder():
    def __init__(self, model_name):
        self.model = SentenceTransformer(model_name).cuda().half().eval()
        self.model.max_seq_length = 512

    def encode(
            self,
            input_texts: List[str],
            *args,
            **kwargs
    ):
        # encode each unique sentence once, longest first for efficient batching
        new_sens = list(set(input_texts))
        new_sens.sort(key=lambda x: len(x), reverse=True)
        vecs = self.model.encode(
            new_sens, normalize_embeddings=True, convert_to_numpy=True, batch_size=256
        ).astype(np.float32)
        # map the deduplicated vectors back to the original input order
        sen2arrid = {sen: idx for idx, sen in enumerate(new_sens)}
        vecs = vecs[[sen2arrid[sen] for sen in input_texts]]
        torch.cuda.empty_cache()
        return vecs


if __name__ == '__main__':
    model_name = "infgrad/stella-base-zh-v2"
    output_folder = "zh_mteb_results/stella-base-zh-v2"
    task_names = [t.description["name"] for t in MTEB(task_langs=['zh', 'zh-CN']).tasks]
    model = FastTextEncoder(model_name)
    for task in task_names:
        MTEB(tasks=[task], task_langs=['zh', 'zh-CN']).run(model, output_folder=output_folder)
```
**MTEB:**
You can use the official script to reproduce our results: [scripts/run_mteb_english.py](https://github.com/embeddings-benchmark/mteb/blob/main/scripts/run_mteb_english.py)
#### Evaluation for long text
In practice we found that almost all C-MTEB evaluation texts are shorter than 512, and, more critically, the texts that do exceed 512 keep their key information in the first half.
The following CMRC2018 example illustrates the problem:
```
question: 《无双大蛇z》是谁旗下ω-force开发的动作游戏?
passage:《无双大蛇z》是光荣旗下ω-force开发的动作游戏,于2009年3月12日登陆索尼playstation3,并于2009年11月27日推......
```
The passage is over 800 characters long, more than 512, yet for this question the first 40 characters are already enough for retrieval; the rest is noise to the model and actually hurts performance.\
In short, existing datasets have two problems:\
1) too few texts longer than 512, and\
2) even for texts longer than 512, only the first 512 characters matter for retrieval,\
which makes it **impossible to accurately evaluate a model's long-text encoding ability.**
To address this, we collected relevant open-source data, filtered it with rules, and assembled six long-text test sets:
- CMRC2018, general encyclopedia
- CAIL, legal reading comprehension
- DRCD, Traditional-Chinese encyclopedia, converted to Simplified
- Military, military-industry Q&A
- Squad, English reading comprehension, translated to Chinese
- Multifieldqa_zh, Tsinghua's benchmark for long-text understanding in large models [9]
The filtering rule keeps texts whose answer lies beyond position 512; shorter test samples are undersampled so that the short-to-long ratio is about 1:2, so a model must understand both short and long texts.
Except for the Military set, the other five test sets can be downloaded here: https://drive.google.com/file/d/1WC6EWaCbVgz-vPMDFH4TwAMkLyh5WNcN/view?usp=sharing
The evaluation metric is Recall@5 (a minimal sketch of its computation appears below, after the results table and note):
| Dataset | piccolo-base-zh | piccolo-large-zh | bge-base-zh | bge-large-zh | stella-base-zh | stella-large-zh |
|:---------------:|:---------------:|:----------------:|:-----------:|:------------:|:--------------:|:---------------:|
| CMRC2018 | 94.34 | 93.82 | 91.56 | 93.12 | 96.08 | 95.56 |
| CAIL | 28.04 | 33.64 | 31.22 | 33.94 | 34.62 | 37.18 |
| DRCD | 78.25 | 77.9 | 78.34 | 80.26 | 86.14 | 84.58 |
| Military | 76.61 | 73.06 | 75.65 | 75.81 | 83.71 | 80.48 |
| Squad | 91.21 | 86.61 | 87.87 | 90.38 | 93.31 | 91.21 |
| Multifieldqa_zh | 81.41 | 83.92 | 83.92 | 83.42 | 79.9 | 80.4 |
| **Average** | 74.98 | 74.83 | 74.76 | 76.15 | **78.96** | **78.24** |
**Note:** because long-text evaluation data is scarce, the train splits were also used during construction; if you evaluate on these sets yourself, check your model's training data to avoid leakage.
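For reference, a minimal Recall@5 computation under the usual definition (the share of queries whose gold passage appears among the top 5 retrieved); it assumes L2-normalized embeddings have already been produced:

```python
import numpy as np

def recall_at_k(query_vecs, passage_vecs, gold_ids, k=5):
    """query_vecs: (Q, D), passage_vecs: (P, D), L2-normalized;
    gold_ids[i] is the index of query i's gold passage."""
    scores = query_vecs @ passage_vecs.T               # (Q, P) cosine similarities
    topk = np.argsort(-scores, axis=1)[:, :k]          # top-k passage ids per query
    hits = (topk == np.asarray(gold_ids)[:, None]).any(axis=1)
    return float(hits.mean())
```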
## Usage
#### stella Chinese models
stella-base-zh and stella-large-zh: these models were trained on top of piccolo, so their **usage is exactly the same as piccolo's**: on retrieval and reranking tasks, prefix the query with `查询: ` and the passage with `结果: `. Short-to-short matching needs no prefix at all.
stella-base-zh-v2 and stella-large-zh-v2: these models are simple to use and **need no prefix text in any scenario**.
All stella Chinese models use mean pooling to produce the text embedding.
Usage with the sentence-transformers library:
```python
from sentence_transformers import SentenceTransformer
sentences = ["数据1", "数据2"]
model = SentenceTransformer('infgrad/stella-base-zh-v2')
print(model.max_seq_length)
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)
# embeddings are L2-normalized, so the dot product equals cosine similarity
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
Using the transformers library directly:
```python
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.preprocessing import normalize

model = AutoModel.from_pretrained('infgrad/stella-base-zh-v2')
tokenizer = AutoTokenizer.from_pretrained('infgrad/stella-base-zh-v2')
sentences = ["数据1", "数据ABCDEFGH"]
batch_data = tokenizer(
    batch_text_or_text_pairs=sentences,
    padding="longest",
    return_tensors="pt",
    max_length=1024,
    truncation=True,
)
attention_mask = batch_data["attention_mask"]
with torch.no_grad():  # inference only; also allows the numpy conversion below
    model_output = model(**batch_data)
# mean pooling: zero out padded positions, then average over the valid tokens
last_hidden = model_output.last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
vectors = normalize(vectors.numpy(), norm="l2", axis=1)
print(vectors.shape)  # (2, 768)
```
#### stella models for English
**Using Sentence-Transformers:**
```python
from sentence_transformers import SentenceTransformer
sentences = ["one car come", "one car go"]
model = SentenceTransformer('infgrad/stella-base-en-v2')
print(model.max_seq_length)
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)
# embeddings are L2-normalized, so the dot product equals cosine similarity
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
**Using HuggingFace Transformers:**
```python
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.preprocessing import normalize

model = AutoModel.from_pretrained('infgrad/stella-base-en-v2')
tokenizer = AutoTokenizer.from_pretrained('infgrad/stella-base-en-v2')
sentences = ["one car come", "one car go"]
batch_data = tokenizer(
    batch_text_or_text_pairs=sentences,
    padding="longest",
    return_tensors="pt",
    max_length=512,
    truncation=True,
)
attention_mask = batch_data["attention_mask"]
with torch.no_grad():  # inference only; also allows the numpy conversion below
    model_output = model(**batch_data)
# mean pooling: zero out padded positions, then average over the valid tokens
last_hidden = model_output.last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
vectors = normalize(vectors.numpy(), norm="l2", axis=1)
print(vectors.shape)  # (2, 768)
```
## Training Detail
**Hardware:** a single A100-80GB
**Environment:** torch 1.13.*; transformers Trainer + deepspeed + gradient checkpointing
**Learning rate:** 1e-6
**batch_size:** 1024 for the base model and 768 for the large model, each with an extra 20% hard negatives
**Data volume:** about 1M samples for the first-version models, of which roughly 200K were constructed with an LLM (13b in size). The v2 series scales to 20M training samples.
## ToDoList
**Evaluation stability:** during evaluation we found that Clustering-task scores differ from the official results by roughly ±0.0x because the clustering code does not set a random_seed; the gap is negligible and does not affect the conclusions.
**Higher-quality long-text training and test data:** most of the training data was constructed with a 13b model, so some noise is unavoidable. The test data is largely derived from MRC datasets, so all questions are factoid-style and do not match the real-world distribution.
**OOD performance:** although many embedding models have appeared recently, on less general domains none of them (stella, openai, and cohere included) beats BM25.
## Reference
1. https://www.scidb.cn/en/detail?dataSetId=c6a3fe684227415a9db8e21bac4a15ab
2. https://github.com/wangyuxinwhy/uniem
3. https://github.com/CLUEbenchmark/SimCLUE
4. https://arxiv.org/abs/1612.00796
5. https://kexue.fm/archives/8847
6. https://huggingface.co/sensenova/piccolo-base-zh
7. https://kexue.fm/archives/7947
8. https://github.com/FlagOpen/FlagEmbedding
9. https://github.com/THUDM/LongBench
| {"language": ["en"], "license": "mit", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"], "model-index": [{"name": "stella-base-en-v2", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 77.19402985074628}, {"type": "ap", "value": 40.43267503017359}, {"type": "f1", "value": 71.15585210518594}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 93.256675}, {"type": "ap", "value": 90.00824833079179}, {"type": "f1", "value": 93.2473146151734}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 49.612}, {"type": "f1", "value": 48.530785631574304}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 37.411}, {"type": "map_at_10", "value": 52.673}, {"type": "map_at_100", "value": 53.410999999999994}, {"type": "map_at_1000", "value": 53.415}, {"type": "map_at_3", "value": 48.495}, {"type": "map_at_5", "value": 51.183}, {"type": "mrr_at_1", "value": 37.838}, {"type": "mrr_at_10", "value": 52.844}, {"type": "mrr_at_100", "value": 53.581999999999994}, {"type": "mrr_at_1000", "value": 53.586}, {"type": "mrr_at_3", "value": 48.672}, {"type": "mrr_at_5", "value": 51.272}, {"type": "ndcg_at_1", "value": 37.411}, {"type": "ndcg_at_10", "value": 60.626999999999995}, {"type": "ndcg_at_100", "value": 63.675000000000004}, {"type": "ndcg_at_1000", "value": 63.776999999999994}, {"type": "ndcg_at_3", "value": 52.148}, {"type": "ndcg_at_5", "value": 57.001999999999995}, {"type": "precision_at_1", "value": 37.411}, {"type": "precision_at_10", "value": 8.578}, {"type": "precision_at_100", "value": 0.989}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 20.91}, {"type": "precision_at_5", "value": 14.908}, {"type": "recall_at_1", "value": 37.411}, {"type": "recall_at_10", "value": 85.775}, {"type": "recall_at_100", "value": 98.86200000000001}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_3", "value": 62.731}, {"type": "recall_at_5", "value": 74.53800000000001}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 47.24219029437865}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 40.474604844291726}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, 
"metrics": [{"type": "map", "value": 62.720542706366054}, {"type": "mrr", "value": 75.59633733456448}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.31345008397868}, {"type": "cos_sim_spearman", "value": 85.94292212320399}, {"type": "euclidean_pearson", "value": 85.03974302774525}, {"type": "euclidean_spearman", "value": 85.88087251659051}, {"type": "manhattan_pearson", "value": 84.91900996712951}, {"type": "manhattan_spearman", "value": 85.96701905781116}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 84.72727272727273}, {"type": "f1", "value": 84.29572512364581}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 39.55532460397536}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 35.91195973591251}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 32.822}, {"type": "map_at_10", "value": 44.139}, {"type": "map_at_100", "value": 45.786}, {"type": "map_at_1000", "value": 45.906000000000006}, {"type": "map_at_3", "value": 40.637}, {"type": "map_at_5", "value": 42.575}, {"type": "mrr_at_1", "value": 41.059}, {"type": "mrr_at_10", "value": 50.751000000000005}, {"type": "mrr_at_100", "value": 51.548}, {"type": "mrr_at_1000", "value": 51.583999999999996}, {"type": "mrr_at_3", "value": 48.236000000000004}, {"type": "mrr_at_5", "value": 49.838}, {"type": "ndcg_at_1", "value": 41.059}, {"type": "ndcg_at_10", "value": 50.573}, {"type": "ndcg_at_100", "value": 56.25}, {"type": "ndcg_at_1000", "value": 58.004}, {"type": "ndcg_at_3", "value": 45.995000000000005}, {"type": "ndcg_at_5", "value": 48.18}, {"type": "precision_at_1", "value": 41.059}, {"type": "precision_at_10", "value": 9.757}, {"type": "precision_at_100", "value": 1.609}, {"type": "precision_at_1000", "value": 0.20600000000000002}, {"type": "precision_at_3", "value": 22.222}, {"type": "precision_at_5", "value": 16.023}, {"type": "recall_at_1", "value": 32.822}, {"type": "recall_at_10", "value": 61.794000000000004}, {"type": "recall_at_100", "value": 85.64699999999999}, {"type": "recall_at_1000", "value": 96.836}, {"type": "recall_at_3", "value": 47.999}, {"type": "recall_at_5", "value": 54.376999999999995}, {"type": "map_at_1", "value": 29.579}, {"type": "map_at_10", "value": 39.787}, {"type": "map_at_100", "value": 40.976}, {"type": "map_at_1000", "value": 41.108}, {"type": "map_at_3", "value": 36.819}, {"type": "map_at_5", "value": 38.437}, {"type": "mrr_at_1", "value": 37.516}, {"type": "mrr_at_10", "value": 45.822}, {"type": "mrr_at_100", "value": 46.454}, {"type": "mrr_at_1000", "value": 46.495999999999995}, {"type": "mrr_at_3", "value": 43.556}, {"type": 
"mrr_at_5", "value": 44.814}, {"type": "ndcg_at_1", "value": 37.516}, {"type": "ndcg_at_10", "value": 45.5}, {"type": "ndcg_at_100", "value": 49.707}, {"type": "ndcg_at_1000", "value": 51.842}, {"type": "ndcg_at_3", "value": 41.369}, {"type": "ndcg_at_5", "value": 43.161}, {"type": "precision_at_1", "value": 37.516}, {"type": "precision_at_10", "value": 8.713}, {"type": "precision_at_100", "value": 1.38}, {"type": "precision_at_1000", "value": 0.188}, {"type": "precision_at_3", "value": 20.233999999999998}, {"type": "precision_at_5", "value": 14.280000000000001}, {"type": "recall_at_1", "value": 29.579}, {"type": "recall_at_10", "value": 55.458}, {"type": "recall_at_100", "value": 73.49799999999999}, {"type": "recall_at_1000", "value": 87.08200000000001}, {"type": "recall_at_3", "value": 42.858000000000004}, {"type": "recall_at_5", "value": 48.215}, {"type": "map_at_1", "value": 40.489999999999995}, {"type": "map_at_10", "value": 53.313}, {"type": "map_at_100", "value": 54.290000000000006}, {"type": "map_at_1000", "value": 54.346000000000004}, {"type": "map_at_3", "value": 49.983}, {"type": "map_at_5", "value": 51.867}, {"type": "mrr_at_1", "value": 46.27}, {"type": "mrr_at_10", "value": 56.660999999999994}, {"type": "mrr_at_100", "value": 57.274}, {"type": "mrr_at_1000", "value": 57.301}, {"type": "mrr_at_3", "value": 54.138}, {"type": "mrr_at_5", "value": 55.623999999999995}, {"type": "ndcg_at_1", "value": 46.27}, {"type": "ndcg_at_10", "value": 59.192}, {"type": "ndcg_at_100", "value": 63.026}, {"type": "ndcg_at_1000", "value": 64.079}, {"type": "ndcg_at_3", "value": 53.656000000000006}, {"type": "ndcg_at_5", "value": 56.387}, {"type": "precision_at_1", "value": 46.27}, {"type": "precision_at_10", "value": 9.511}, {"type": "precision_at_100", "value": 1.23}, {"type": "precision_at_1000", "value": 0.136}, {"type": "precision_at_3", "value": 24.096}, {"type": "precision_at_5", "value": 16.476}, {"type": "recall_at_1", "value": 40.489999999999995}, {"type": "recall_at_10", "value": 73.148}, {"type": "recall_at_100", "value": 89.723}, {"type": "recall_at_1000", "value": 97.073}, {"type": "recall_at_3", "value": 58.363}, {"type": "recall_at_5", "value": 65.083}, {"type": "map_at_1", "value": 26.197}, {"type": "map_at_10", "value": 35.135}, {"type": "map_at_100", "value": 36.14}, {"type": "map_at_1000", "value": 36.216}, {"type": "map_at_3", "value": 32.358}, {"type": "map_at_5", "value": 33.814}, {"type": "mrr_at_1", "value": 28.475}, {"type": "mrr_at_10", "value": 37.096000000000004}, {"type": "mrr_at_100", "value": 38.006}, {"type": "mrr_at_1000", "value": 38.06}, {"type": "mrr_at_3", "value": 34.52}, {"type": "mrr_at_5", "value": 35.994}, {"type": "ndcg_at_1", "value": 28.475}, {"type": "ndcg_at_10", "value": 40.263}, {"type": "ndcg_at_100", "value": 45.327}, {"type": "ndcg_at_1000", "value": 47.225}, {"type": "ndcg_at_3", "value": 34.882000000000005}, {"type": "ndcg_at_5", "value": 37.347}, {"type": "precision_at_1", "value": 28.475}, {"type": "precision_at_10", "value": 6.249}, {"type": "precision_at_100", "value": 0.919}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 14.689}, {"type": "precision_at_5", "value": 10.237}, {"type": "recall_at_1", "value": 26.197}, {"type": "recall_at_10", "value": 54.17999999999999}, {"type": "recall_at_100", "value": 77.768}, {"type": "recall_at_1000", "value": 91.932}, {"type": "recall_at_3", "value": 39.804}, {"type": "recall_at_5", "value": 45.660000000000004}, {"type": "map_at_1", "value": 16.683}, 
{"type": "map_at_10", "value": 25.013999999999996}, {"type": "map_at_100", "value": 26.411}, {"type": "map_at_1000", "value": 26.531}, {"type": "map_at_3", "value": 22.357}, {"type": "map_at_5", "value": 23.982999999999997}, {"type": "mrr_at_1", "value": 20.896}, {"type": "mrr_at_10", "value": 29.758000000000003}, {"type": "mrr_at_100", "value": 30.895}, {"type": "mrr_at_1000", "value": 30.964999999999996}, {"type": "mrr_at_3", "value": 27.177}, {"type": "mrr_at_5", "value": 28.799999999999997}, {"type": "ndcg_at_1", "value": 20.896}, {"type": "ndcg_at_10", "value": 30.294999999999998}, {"type": "ndcg_at_100", "value": 36.68}, {"type": "ndcg_at_1000", "value": 39.519}, {"type": "ndcg_at_3", "value": 25.480999999999998}, {"type": "ndcg_at_5", "value": 28.027}, {"type": "precision_at_1", "value": 20.896}, {"type": "precision_at_10", "value": 5.56}, {"type": "precision_at_100", "value": 1.006}, {"type": "precision_at_1000", "value": 0.13899999999999998}, {"type": "precision_at_3", "value": 12.231}, {"type": "precision_at_5", "value": 9.104}, {"type": "recall_at_1", "value": 16.683}, {"type": "recall_at_10", "value": 41.807}, {"type": "recall_at_100", "value": 69.219}, {"type": "recall_at_1000", "value": 89.178}, {"type": "recall_at_3", "value": 28.772}, {"type": "recall_at_5", "value": 35.167}, {"type": "map_at_1", "value": 30.653000000000002}, {"type": "map_at_10", "value": 41.21}, {"type": "map_at_100", "value": 42.543}, {"type": "map_at_1000", "value": 42.657000000000004}, {"type": "map_at_3", "value": 38.094}, {"type": "map_at_5", "value": 39.966}, {"type": "mrr_at_1", "value": 37.824999999999996}, {"type": "mrr_at_10", "value": 47.087}, {"type": "mrr_at_100", "value": 47.959}, {"type": "mrr_at_1000", "value": 48.003}, {"type": "mrr_at_3", "value": 45.043}, {"type": "mrr_at_5", "value": 46.352}, {"type": "ndcg_at_1", "value": 37.824999999999996}, {"type": "ndcg_at_10", "value": 47.158}, {"type": "ndcg_at_100", "value": 52.65}, {"type": "ndcg_at_1000", "value": 54.644999999999996}, {"type": "ndcg_at_3", "value": 42.632999999999996}, {"type": "ndcg_at_5", "value": 44.994}, {"type": "precision_at_1", "value": 37.824999999999996}, {"type": "precision_at_10", "value": 8.498999999999999}, {"type": "precision_at_100", "value": 1.308}, {"type": "precision_at_1000", "value": 0.166}, {"type": "precision_at_3", "value": 20.308}, {"type": "precision_at_5", "value": 14.283000000000001}, {"type": "recall_at_1", "value": 30.653000000000002}, {"type": "recall_at_10", "value": 58.826}, {"type": "recall_at_100", "value": 81.94}, {"type": "recall_at_1000", "value": 94.71000000000001}, {"type": "recall_at_3", "value": 45.965}, {"type": "recall_at_5", "value": 52.294}, {"type": "map_at_1", "value": 26.71}, {"type": "map_at_10", "value": 36.001}, {"type": "map_at_100", "value": 37.416}, {"type": "map_at_1000", "value": 37.522}, {"type": "map_at_3", "value": 32.841}, {"type": "map_at_5", "value": 34.515}, {"type": "mrr_at_1", "value": 32.647999999999996}, {"type": "mrr_at_10", "value": 41.43}, {"type": "mrr_at_100", "value": 42.433}, {"type": "mrr_at_1000", "value": 42.482}, {"type": "mrr_at_3", "value": 39.117000000000004}, {"type": "mrr_at_5", "value": 40.35}, {"type": "ndcg_at_1", "value": 32.647999999999996}, {"type": "ndcg_at_10", "value": 41.629}, {"type": "ndcg_at_100", "value": 47.707}, {"type": "ndcg_at_1000", "value": 49.913000000000004}, {"type": "ndcg_at_3", "value": 36.598000000000006}, {"type": "ndcg_at_5", "value": 38.696000000000005}, {"type": "precision_at_1", "value": 32.647999999999996}, 
{"type": "precision_at_10", "value": 7.704999999999999}, {"type": "precision_at_100", "value": 1.242}, {"type": "precision_at_1000", "value": 0.16}, {"type": "precision_at_3", "value": 17.314}, {"type": "precision_at_5", "value": 12.374}, {"type": "recall_at_1", "value": 26.71}, {"type": "recall_at_10", "value": 52.898}, {"type": "recall_at_100", "value": 79.08}, {"type": "recall_at_1000", "value": 93.94}, {"type": "recall_at_3", "value": 38.731}, {"type": "recall_at_5", "value": 44.433}, {"type": "map_at_1", "value": 26.510999999999996}, {"type": "map_at_10", "value": 35.755333333333326}, {"type": "map_at_100", "value": 36.97525}, {"type": "map_at_1000", "value": 37.08741666666667}, {"type": "map_at_3", "value": 32.921}, {"type": "map_at_5", "value": 34.45041666666667}, {"type": "mrr_at_1", "value": 31.578416666666666}, {"type": "mrr_at_10", "value": 40.06066666666667}, {"type": "mrr_at_100", "value": 40.93350000000001}, {"type": "mrr_at_1000", "value": 40.98716666666667}, {"type": "mrr_at_3", "value": 37.710499999999996}, {"type": "mrr_at_5", "value": 39.033249999999995}, {"type": "ndcg_at_1", "value": 31.578416666666666}, {"type": "ndcg_at_10", "value": 41.138666666666666}, {"type": "ndcg_at_100", "value": 46.37291666666666}, {"type": "ndcg_at_1000", "value": 48.587500000000006}, {"type": "ndcg_at_3", "value": 36.397083333333335}, {"type": "ndcg_at_5", "value": 38.539}, {"type": "precision_at_1", "value": 31.578416666666666}, {"type": "precision_at_10", "value": 7.221583333333332}, {"type": "precision_at_100", "value": 1.1581666666666668}, {"type": "precision_at_1000", "value": 0.15416666666666667}, {"type": "precision_at_3", "value": 16.758}, {"type": "precision_at_5", "value": 11.830916666666665}, {"type": "recall_at_1", "value": 26.510999999999996}, {"type": "recall_at_10", "value": 52.7825}, {"type": "recall_at_100", "value": 75.79675}, {"type": "recall_at_1000", "value": 91.10483333333335}, {"type": "recall_at_3", "value": 39.48233333333334}, {"type": "recall_at_5", "value": 45.07116666666667}, {"type": "map_at_1", "value": 24.564}, {"type": "map_at_10", "value": 31.235000000000003}, {"type": "map_at_100", "value": 32.124}, {"type": "map_at_1000", "value": 32.216}, {"type": "map_at_3", "value": 29.330000000000002}, {"type": "map_at_5", "value": 30.379}, {"type": "mrr_at_1", "value": 27.761000000000003}, {"type": "mrr_at_10", "value": 34.093}, {"type": "mrr_at_100", "value": 34.885}, {"type": "mrr_at_1000", "value": 34.957}, {"type": "mrr_at_3", "value": 32.388}, {"type": "mrr_at_5", "value": 33.269}, {"type": "ndcg_at_1", "value": 27.761000000000003}, {"type": "ndcg_at_10", "value": 35.146}, {"type": "ndcg_at_100", "value": 39.597}, {"type": "ndcg_at_1000", "value": 42.163000000000004}, {"type": "ndcg_at_3", "value": 31.674000000000003}, {"type": "ndcg_at_5", "value": 33.224}, {"type": "precision_at_1", "value": 27.761000000000003}, {"type": "precision_at_10", "value": 5.383}, {"type": "precision_at_100", "value": 0.836}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 13.599}, {"type": "precision_at_5", "value": 9.202}, {"type": "recall_at_1", "value": 24.564}, {"type": "recall_at_10", "value": 44.36}, {"type": "recall_at_100", "value": 64.408}, {"type": "recall_at_1000", "value": 83.892}, {"type": "recall_at_3", "value": 34.653}, {"type": "recall_at_5", "value": 38.589}, {"type": "map_at_1", "value": 17.01}, {"type": "map_at_10", "value": 24.485}, {"type": "map_at_100", "value": 25.573}, {"type": "map_at_1000", "value": 25.703}, 
{"type": "map_at_3", "value": 21.953}, {"type": "map_at_5", "value": 23.294999999999998}, {"type": "mrr_at_1", "value": 20.544}, {"type": "mrr_at_10", "value": 28.238000000000003}, {"type": "mrr_at_100", "value": 29.142000000000003}, {"type": "mrr_at_1000", "value": 29.219}, {"type": "mrr_at_3", "value": 25.802999999999997}, {"type": "mrr_at_5", "value": 27.105}, {"type": "ndcg_at_1", "value": 20.544}, {"type": "ndcg_at_10", "value": 29.387999999999998}, {"type": "ndcg_at_100", "value": 34.603}, {"type": "ndcg_at_1000", "value": 37.564}, {"type": "ndcg_at_3", "value": 24.731}, {"type": "ndcg_at_5", "value": 26.773000000000003}, {"type": "precision_at_1", "value": 20.544}, {"type": "precision_at_10", "value": 5.509}, {"type": "precision_at_100", "value": 0.9450000000000001}, {"type": "precision_at_1000", "value": 0.13799999999999998}, {"type": "precision_at_3", "value": 11.757}, {"type": "precision_at_5", "value": 8.596}, {"type": "recall_at_1", "value": 17.01}, {"type": "recall_at_10", "value": 40.392}, {"type": "recall_at_100", "value": 64.043}, {"type": "recall_at_1000", "value": 85.031}, {"type": "recall_at_3", "value": 27.293}, {"type": "recall_at_5", "value": 32.586999999999996}, {"type": "map_at_1", "value": 27.155}, {"type": "map_at_10", "value": 35.92}, {"type": "map_at_100", "value": 37.034}, {"type": "map_at_1000", "value": 37.139}, {"type": "map_at_3", "value": 33.263999999999996}, {"type": "map_at_5", "value": 34.61}, {"type": "mrr_at_1", "value": 32.183}, {"type": "mrr_at_10", "value": 40.099000000000004}, {"type": "mrr_at_100", "value": 41.001}, {"type": "mrr_at_1000", "value": 41.059}, {"type": "mrr_at_3", "value": 37.889}, {"type": "mrr_at_5", "value": 39.007999999999996}, {"type": "ndcg_at_1", "value": 32.183}, {"type": "ndcg_at_10", "value": 41.127}, {"type": "ndcg_at_100", "value": 46.464}, {"type": "ndcg_at_1000", "value": 48.67}, {"type": "ndcg_at_3", "value": 36.396}, {"type": "ndcg_at_5", "value": 38.313}, {"type": "precision_at_1", "value": 32.183}, {"type": "precision_at_10", "value": 6.847}, {"type": "precision_at_100", "value": 1.0739999999999998}, {"type": "precision_at_1000", "value": 0.13699999999999998}, {"type": "precision_at_3", "value": 16.356}, {"type": "precision_at_5", "value": 11.362}, {"type": "recall_at_1", "value": 27.155}, {"type": "recall_at_10", "value": 52.922000000000004}, {"type": "recall_at_100", "value": 76.39}, {"type": "recall_at_1000", "value": 91.553}, {"type": "recall_at_3", "value": 39.745999999999995}, {"type": "recall_at_5", "value": 44.637}, {"type": "map_at_1", "value": 25.523}, {"type": "map_at_10", "value": 34.268}, {"type": "map_at_100", "value": 35.835}, {"type": "map_at_1000", "value": 36.046}, {"type": "map_at_3", "value": 31.662000000000003}, {"type": "map_at_5", "value": 32.71}, {"type": "mrr_at_1", "value": 31.028}, {"type": "mrr_at_10", "value": 38.924}, {"type": "mrr_at_100", "value": 39.95}, {"type": "mrr_at_1000", "value": 40.003}, {"type": "mrr_at_3", "value": 36.594}, {"type": "mrr_at_5", "value": 37.701}, {"type": "ndcg_at_1", "value": 31.028}, {"type": "ndcg_at_10", "value": 39.848}, {"type": "ndcg_at_100", "value": 45.721000000000004}, {"type": "ndcg_at_1000", "value": 48.424}, {"type": "ndcg_at_3", "value": 35.329}, {"type": "ndcg_at_5", "value": 36.779}, {"type": "precision_at_1", "value": 31.028}, {"type": "precision_at_10", "value": 7.51}, {"type": "precision_at_100", "value": 1.478}, {"type": "precision_at_1000", "value": 0.24}, {"type": "precision_at_3", "value": 16.337}, {"type": "precision_at_5", "value": 
11.383000000000001}, {"type": "recall_at_1", "value": 25.523}, {"type": "recall_at_10", "value": 50.735}, {"type": "recall_at_100", "value": 76.593}, {"type": "recall_at_1000", "value": 93.771}, {"type": "recall_at_3", "value": 37.574000000000005}, {"type": "recall_at_5", "value": 41.602}, {"type": "map_at_1", "value": 20.746000000000002}, {"type": "map_at_10", "value": 28.557}, {"type": "map_at_100", "value": 29.575000000000003}, {"type": "map_at_1000", "value": 29.659000000000002}, {"type": "map_at_3", "value": 25.753999999999998}, {"type": "map_at_5", "value": 27.254}, {"type": "mrr_at_1", "value": 22.736}, {"type": "mrr_at_10", "value": 30.769000000000002}, {"type": "mrr_at_100", "value": 31.655}, {"type": "mrr_at_1000", "value": 31.717000000000002}, {"type": "mrr_at_3", "value": 28.065}, {"type": "mrr_at_5", "value": 29.543999999999997}, {"type": "ndcg_at_1", "value": 22.736}, {"type": "ndcg_at_10", "value": 33.545}, {"type": "ndcg_at_100", "value": 38.743}, {"type": "ndcg_at_1000", "value": 41.002}, {"type": "ndcg_at_3", "value": 28.021}, {"type": "ndcg_at_5", "value": 30.586999999999996}, {"type": "precision_at_1", "value": 22.736}, {"type": "precision_at_10", "value": 5.416}, {"type": "precision_at_100", "value": 0.8710000000000001}, {"type": "precision_at_1000", "value": 0.116}, {"type": "precision_at_3", "value": 11.953}, {"type": "precision_at_5", "value": 8.651}, {"type": "recall_at_1", "value": 20.746000000000002}, {"type": "recall_at_10", "value": 46.87}, {"type": "recall_at_100", "value": 71.25200000000001}, {"type": "recall_at_1000", "value": 88.26}, {"type": "recall_at_3", "value": 32.029999999999994}, {"type": "recall_at_5", "value": 38.21}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 12.105}, {"type": "map_at_10", "value": 20.577}, {"type": "map_at_100", "value": 22.686999999999998}, {"type": "map_at_1000", "value": 22.889}, {"type": "map_at_3", "value": 17.174}, {"type": "map_at_5", "value": 18.807}, {"type": "mrr_at_1", "value": 27.101}, {"type": "mrr_at_10", "value": 38.475}, {"type": "mrr_at_100", "value": 39.491}, {"type": "mrr_at_1000", "value": 39.525}, {"type": "mrr_at_3", "value": 34.886}, {"type": "mrr_at_5", "value": 36.922}, {"type": "ndcg_at_1", "value": 27.101}, {"type": "ndcg_at_10", "value": 29.002}, {"type": "ndcg_at_100", "value": 37.218}, {"type": "ndcg_at_1000", "value": 40.644000000000005}, {"type": "ndcg_at_3", "value": 23.464}, {"type": "ndcg_at_5", "value": 25.262}, {"type": "precision_at_1", "value": 27.101}, {"type": "precision_at_10", "value": 9.179}, {"type": "precision_at_100", "value": 1.806}, {"type": "precision_at_1000", "value": 0.244}, {"type": "precision_at_3", "value": 17.394000000000002}, {"type": "precision_at_5", "value": 13.342}, {"type": "recall_at_1", "value": 12.105}, {"type": "recall_at_10", "value": 35.143}, {"type": "recall_at_100", "value": 63.44499999999999}, {"type": "recall_at_1000", "value": 82.49499999999999}, {"type": "recall_at_3", "value": 21.489}, {"type": "recall_at_5", "value": 26.82}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 8.769}, {"type": "map_at_10", "value": 18.619}, {"type": "map_at_100", "value": 26.3}, {"type": "map_at_1000", "value": 28.063}, {"type": "map_at_3", "value": 13.746}, {"type": 
"map_at_5", "value": 16.035}, {"type": "mrr_at_1", "value": 65.25}, {"type": "mrr_at_10", "value": 73.678}, {"type": "mrr_at_100", "value": 73.993}, {"type": "mrr_at_1000", "value": 74.003}, {"type": "mrr_at_3", "value": 72.042}, {"type": "mrr_at_5", "value": 72.992}, {"type": "ndcg_at_1", "value": 53.625}, {"type": "ndcg_at_10", "value": 39.638}, {"type": "ndcg_at_100", "value": 44.601}, {"type": "ndcg_at_1000", "value": 52.80200000000001}, {"type": "ndcg_at_3", "value": 44.727}, {"type": "ndcg_at_5", "value": 42.199}, {"type": "precision_at_1", "value": 65.25}, {"type": "precision_at_10", "value": 31.025000000000002}, {"type": "precision_at_100", "value": 10.174999999999999}, {"type": "precision_at_1000", "value": 2.0740000000000003}, {"type": "precision_at_3", "value": 48.083}, {"type": "precision_at_5", "value": 40.6}, {"type": "recall_at_1", "value": 8.769}, {"type": "recall_at_10", "value": 23.910999999999998}, {"type": "recall_at_100", "value": 51.202999999999996}, {"type": "recall_at_1000", "value": 77.031}, {"type": "recall_at_3", "value": 15.387999999999998}, {"type": "recall_at_5", "value": 18.919}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 54.47}, {"type": "f1", "value": 48.21839043361556}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 63.564}, {"type": "map_at_10", "value": 74.236}, {"type": "map_at_100", "value": 74.53699999999999}, {"type": "map_at_1000", "value": 74.557}, {"type": "map_at_3", "value": 72.556}, {"type": "map_at_5", "value": 73.656}, {"type": "mrr_at_1", "value": 68.497}, {"type": "mrr_at_10", "value": 78.373}, {"type": "mrr_at_100", "value": 78.54299999999999}, {"type": "mrr_at_1000", "value": 78.549}, {"type": "mrr_at_3", "value": 77.03}, {"type": "mrr_at_5", "value": 77.938}, {"type": "ndcg_at_1", "value": 68.497}, {"type": "ndcg_at_10", "value": 79.12599999999999}, {"type": "ndcg_at_100", "value": 80.319}, {"type": "ndcg_at_1000", "value": 80.71199999999999}, {"type": "ndcg_at_3", "value": 76.209}, {"type": "ndcg_at_5", "value": 77.90700000000001}, {"type": "precision_at_1", "value": 68.497}, {"type": "precision_at_10", "value": 9.958}, {"type": "precision_at_100", "value": 1.077}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 29.908}, {"type": "precision_at_5", "value": 18.971}, {"type": "recall_at_1", "value": 63.564}, {"type": "recall_at_10", "value": 90.05199999999999}, {"type": "recall_at_100", "value": 95.028}, {"type": "recall_at_1000", "value": 97.667}, {"type": "recall_at_3", "value": 82.17999999999999}, {"type": "recall_at_5", "value": 86.388}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 19.042}, {"type": "map_at_10", "value": 30.764999999999997}, {"type": "map_at_100", "value": 32.678000000000004}, {"type": "map_at_1000", "value": 32.881}, {"type": "map_at_3", "value": 26.525}, {"type": "map_at_5", "value": 28.932000000000002}, {"type": "mrr_at_1", "value": 37.653999999999996}, {"type": "mrr_at_10", "value": 46.597}, {"type": "mrr_at_100", "value": 47.413}, {"type": "mrr_at_1000", "value": 47.453}, {"type": 
"mrr_at_3", "value": 43.775999999999996}, {"type": "mrr_at_5", "value": 45.489000000000004}, {"type": "ndcg_at_1", "value": 37.653999999999996}, {"type": "ndcg_at_10", "value": 38.615}, {"type": "ndcg_at_100", "value": 45.513999999999996}, {"type": "ndcg_at_1000", "value": 48.815999999999995}, {"type": "ndcg_at_3", "value": 34.427}, {"type": "ndcg_at_5", "value": 35.954}, {"type": "precision_at_1", "value": 37.653999999999996}, {"type": "precision_at_10", "value": 10.864}, {"type": "precision_at_100", "value": 1.7850000000000001}, {"type": "precision_at_1000", "value": 0.23800000000000002}, {"type": "precision_at_3", "value": 22.788}, {"type": "precision_at_5", "value": 17.346}, {"type": "recall_at_1", "value": 19.042}, {"type": "recall_at_10", "value": 45.707}, {"type": "recall_at_100", "value": 71.152}, {"type": "recall_at_1000", "value": 90.7}, {"type": "recall_at_3", "value": 30.814000000000004}, {"type": "recall_at_5", "value": 37.478}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 38.001000000000005}, {"type": "map_at_10", "value": 59.611000000000004}, {"type": "map_at_100", "value": 60.582}, {"type": "map_at_1000", "value": 60.646}, {"type": "map_at_3", "value": 56.031}, {"type": "map_at_5", "value": 58.243}, {"type": "mrr_at_1", "value": 76.003}, {"type": "mrr_at_10", "value": 82.15400000000001}, {"type": "mrr_at_100", "value": 82.377}, {"type": "mrr_at_1000", "value": 82.383}, {"type": "mrr_at_3", "value": 81.092}, {"type": "mrr_at_5", "value": 81.742}, {"type": "ndcg_at_1", "value": 76.003}, {"type": "ndcg_at_10", "value": 68.216}, {"type": "ndcg_at_100", "value": 71.601}, {"type": "ndcg_at_1000", "value": 72.821}, {"type": "ndcg_at_3", "value": 63.109}, {"type": "ndcg_at_5", "value": 65.902}, {"type": "precision_at_1", "value": 76.003}, {"type": "precision_at_10", "value": 14.379}, {"type": "precision_at_100", "value": 1.702}, {"type": "precision_at_1000", "value": 0.186}, {"type": "precision_at_3", "value": 40.396}, {"type": "precision_at_5", "value": 26.442}, {"type": "recall_at_1", "value": 38.001000000000005}, {"type": "recall_at_10", "value": 71.897}, {"type": "recall_at_100", "value": 85.105}, {"type": "recall_at_1000", "value": 93.133}, {"type": "recall_at_3", "value": 60.594}, {"type": "recall_at_5", "value": 66.104}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 91.31280000000001}, {"type": "ap", "value": 87.53723467501632}, {"type": "f1", "value": 91.30282906596291}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 21.917}, {"type": "map_at_10", "value": 34.117999999999995}, {"type": "map_at_100", "value": 35.283}, {"type": "map_at_1000", "value": 35.333999999999996}, {"type": "map_at_3", "value": 30.330000000000002}, {"type": "map_at_5", "value": 32.461}, {"type": "mrr_at_1", "value": 22.579}, {"type": "mrr_at_10", "value": 34.794000000000004}, {"type": "mrr_at_100", "value": 35.893}, {"type": "mrr_at_1000", "value": 35.937000000000005}, {"type": "mrr_at_3", "value": 31.091}, {"type": "mrr_at_5", "value": 33.173}, {"type": "ndcg_at_1", "value": 22.579}, {"type": "ndcg_at_10", "value": 40.951}, 
{"type": "ndcg_at_100", "value": 46.558}, {"type": "ndcg_at_1000", "value": 47.803000000000004}, {"type": "ndcg_at_3", "value": 33.262}, {"type": "ndcg_at_5", "value": 37.036}, {"type": "precision_at_1", "value": 22.579}, {"type": "precision_at_10", "value": 6.463000000000001}, {"type": "precision_at_100", "value": 0.928}, {"type": "precision_at_1000", "value": 0.104}, {"type": "precision_at_3", "value": 14.174000000000001}, {"type": "precision_at_5", "value": 10.421}, {"type": "recall_at_1", "value": 21.917}, {"type": "recall_at_10", "value": 61.885}, {"type": "recall_at_100", "value": 87.847}, {"type": "recall_at_1000", "value": 97.322}, {"type": "recall_at_3", "value": 41.010000000000005}, {"type": "recall_at_5", "value": 50.031000000000006}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 93.49521203830369}, {"type": "f1", "value": 93.30882341740241}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 71.0579115367077}, {"type": "f1", "value": 51.2368258319339}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 73.88029589778077}, {"type": "f1", "value": 72.34422048584663}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 78.2817753866846}, {"type": "f1", "value": 77.87746050004304}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 33.247341454119216}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 31.9647477166234}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 31.90698374676892}, {"type": "mrr", "value": 33.07523683771251}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 6.717}, {"type": "map_at_10", "value": 14.566}, {"type": "map_at_100", "value": 18.465999999999998}, {"type": "map_at_1000", "value": 20.033}, {"type": "map_at_3", "value": 10.863}, {"type": "map_at_5", "value": 12.589}, {"type": "mrr_at_1", "value": 49.845}, {"type": "mrr_at_10", "value": 58.385}, {"type": "mrr_at_100", "value": 58.989999999999995}, {"type": "mrr_at_1000", "value": 59.028999999999996}, {"type": "mrr_at_3", 
"value": 56.76}, {"type": "mrr_at_5", "value": 57.766}, {"type": "ndcg_at_1", "value": 47.678}, {"type": "ndcg_at_10", "value": 37.511}, {"type": "ndcg_at_100", "value": 34.537}, {"type": "ndcg_at_1000", "value": 43.612}, {"type": "ndcg_at_3", "value": 43.713}, {"type": "ndcg_at_5", "value": 41.303}, {"type": "precision_at_1", "value": 49.845}, {"type": "precision_at_10", "value": 27.307}, {"type": "precision_at_100", "value": 8.746}, {"type": "precision_at_1000", "value": 2.182}, {"type": "precision_at_3", "value": 40.764}, {"type": "precision_at_5", "value": 35.232}, {"type": "recall_at_1", "value": 6.717}, {"type": "recall_at_10", "value": 18.107}, {"type": "recall_at_100", "value": 33.759}, {"type": "recall_at_1000", "value": 67.31}, {"type": "recall_at_3", "value": 11.68}, {"type": "recall_at_5", "value": 14.557999999999998}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 27.633999999999997}, {"type": "map_at_10", "value": 42.400999999999996}, {"type": "map_at_100", "value": 43.561}, {"type": "map_at_1000", "value": 43.592}, {"type": "map_at_3", "value": 37.865}, {"type": "map_at_5", "value": 40.650999999999996}, {"type": "mrr_at_1", "value": 31.286}, {"type": "mrr_at_10", "value": 44.996}, {"type": "mrr_at_100", "value": 45.889}, {"type": "mrr_at_1000", "value": 45.911}, {"type": "mrr_at_3", "value": 41.126000000000005}, {"type": "mrr_at_5", "value": 43.536}, {"type": "ndcg_at_1", "value": 31.257}, {"type": "ndcg_at_10", "value": 50.197}, {"type": "ndcg_at_100", "value": 55.062}, {"type": "ndcg_at_1000", "value": 55.81700000000001}, {"type": "ndcg_at_3", "value": 41.650999999999996}, {"type": "ndcg_at_5", "value": 46.324}, {"type": "precision_at_1", "value": 31.257}, {"type": "precision_at_10", "value": 8.508000000000001}, {"type": "precision_at_100", "value": 1.121}, {"type": "precision_at_1000", "value": 0.11900000000000001}, {"type": "precision_at_3", "value": 19.1}, {"type": "precision_at_5", "value": 14.16}, {"type": "recall_at_1", "value": 27.633999999999997}, {"type": "recall_at_10", "value": 71.40100000000001}, {"type": "recall_at_100", "value": 92.463}, {"type": "recall_at_1000", "value": 98.13199999999999}, {"type": "recall_at_3", "value": 49.382}, {"type": "recall_at_5", "value": 60.144}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 71.17099999999999}, {"type": "map_at_10", "value": 85.036}, {"type": "map_at_100", "value": 85.67099999999999}, {"type": "map_at_1000", "value": 85.68599999999999}, {"type": "map_at_3", "value": 82.086}, {"type": "map_at_5", "value": 83.956}, {"type": "mrr_at_1", "value": 82.04}, {"type": "mrr_at_10", "value": 88.018}, {"type": "mrr_at_100", "value": 88.114}, {"type": "mrr_at_1000", "value": 88.115}, {"type": "mrr_at_3", "value": 87.047}, {"type": "mrr_at_5", "value": 87.73100000000001}, {"type": "ndcg_at_1", "value": 82.03}, {"type": "ndcg_at_10", "value": 88.717}, {"type": "ndcg_at_100", "value": 89.904}, {"type": "ndcg_at_1000", "value": 89.991}, {"type": "ndcg_at_3", "value": 85.89099999999999}, {"type": "ndcg_at_5", "value": 87.485}, {"type": "precision_at_1", "value": 82.03}, {"type": "precision_at_10", "value": 13.444999999999999}, {"type": "precision_at_100", "value": 1.533}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 
37.537}, {"type": "precision_at_5", "value": 24.692}, {"type": "recall_at_1", "value": 71.17099999999999}, {"type": "recall_at_10", "value": 95.634}, {"type": "recall_at_100", "value": 99.614}, {"type": "recall_at_1000", "value": 99.99}, {"type": "recall_at_3", "value": 87.48}, {"type": "recall_at_5", "value": 91.996}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 55.067219624685315}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 62.121822992300444}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.153}, {"type": "map_at_10", "value": 11.024000000000001}, {"type": "map_at_100", "value": 13.233}, {"type": "map_at_1000", "value": 13.62}, {"type": "map_at_3", "value": 7.779999999999999}, {"type": "map_at_5", "value": 9.529}, {"type": "mrr_at_1", "value": 20.599999999999998}, {"type": "mrr_at_10", "value": 31.361}, {"type": "mrr_at_100", "value": 32.738}, {"type": "mrr_at_1000", "value": 32.792}, {"type": "mrr_at_3", "value": 28.15}, {"type": "mrr_at_5", "value": 30.085}, {"type": "ndcg_at_1", "value": 20.599999999999998}, {"type": "ndcg_at_10", "value": 18.583}, {"type": "ndcg_at_100", "value": 27.590999999999998}, {"type": "ndcg_at_1000", "value": 34.001}, {"type": "ndcg_at_3", "value": 17.455000000000002}, {"type": "ndcg_at_5", "value": 15.588}, {"type": "precision_at_1", "value": 20.599999999999998}, {"type": "precision_at_10", "value": 9.74}, {"type": "precision_at_100", "value": 2.284}, {"type": "precision_at_1000", "value": 0.381}, {"type": "precision_at_3", "value": 16.533}, {"type": "precision_at_5", "value": 14.02}, {"type": "recall_at_1", "value": 4.153}, {"type": "recall_at_10", "value": 19.738}, {"type": "recall_at_100", "value": 46.322}, {"type": "recall_at_1000", "value": 77.378}, {"type": "recall_at_3", "value": 10.048}, {"type": "recall_at_5", "value": 14.233}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.07097501003639}, {"type": "cos_sim_spearman", "value": 81.05827848407056}, {"type": "euclidean_pearson", "value": 82.6279003372546}, {"type": "euclidean_spearman", "value": 81.00031515279802}, {"type": "manhattan_pearson", "value": 82.59338284959495}, {"type": "manhattan_spearman", "value": 80.97432711064945}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.28991993621685}, {"type": "cos_sim_spearman", "value": 78.71828082424351}, {"type": "euclidean_pearson", "value": 83.4881331520832}, {"type": "euclidean_spearman", "value": 78.51746826842316}, {"type": "manhattan_pearson", "value": 83.4109223774324}, {"type": "manhattan_spearman", "value": 78.431544382179}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", 
"config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.16651661072123}, {"type": "cos_sim_spearman", "value": 84.88094386637867}, {"type": "euclidean_pearson", "value": 84.3547603585416}, {"type": "euclidean_spearman", "value": 84.85148665860193}, {"type": "manhattan_pearson", "value": 84.29648369879266}, {"type": "manhattan_spearman", "value": 84.76074870571124}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.40596254292149}, {"type": "cos_sim_spearman", "value": 83.10699573133829}, {"type": "euclidean_pearson", "value": 83.22794776876958}, {"type": "euclidean_spearman", "value": 83.22583316084712}, {"type": "manhattan_pearson", "value": 83.15899233935681}, {"type": "manhattan_spearman", "value": 83.17668293648019}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.27977121352563}, {"type": "cos_sim_spearman", "value": 88.73903130248591}, {"type": "euclidean_pearson", "value": 88.30685958438735}, {"type": "euclidean_spearman", "value": 88.79755484280406}, {"type": "manhattan_pearson", "value": 88.30305607758652}, {"type": "manhattan_spearman", "value": 88.80096577072784}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.08819031430218}, {"type": "cos_sim_spearman", "value": 86.35414445951125}, {"type": "euclidean_pearson", "value": 85.4683192388315}, {"type": "euclidean_spearman", "value": 86.2079674669473}, {"type": "manhattan_pearson", "value": 85.35835702257341}, {"type": "manhattan_spearman", "value": 86.08483380002187}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.36149449801478}, {"type": "cos_sim_spearman", "value": 87.7102980757725}, {"type": "euclidean_pearson", "value": 88.16457177837161}, {"type": "euclidean_spearman", "value": 87.6598652482716}, {"type": "manhattan_pearson", "value": 88.23894728971618}, {"type": "manhattan_spearman", "value": 87.74470156709361}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 64.54023758394433}, {"type": "cos_sim_spearman", "value": 66.28491960187773}, {"type": "euclidean_pearson", "value": 67.0853128483472}, {"type": "euclidean_spearman", "value": 66.10307543766307}, {"type": "manhattan_pearson", "value": 66.7635365592556}, {"type": "manhattan_spearman", "value": 65.76408004780167}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.15858398195317}, {"type": "cos_sim_spearman", "value": 87.44850004752102}, 
{"type": "euclidean_pearson", "value": 86.60737082550408}, {"type": "euclidean_spearman", "value": 87.31591549824242}, {"type": "manhattan_pearson", "value": 86.56187011429977}, {"type": "manhattan_spearman", "value": 87.23854795795319}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 86.66210488769109}, {"type": "mrr", "value": 96.23100664767331}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 56.094}, {"type": "map_at_10", "value": 67.486}, {"type": "map_at_100", "value": 67.925}, {"type": "map_at_1000", "value": 67.949}, {"type": "map_at_3", "value": 64.857}, {"type": "map_at_5", "value": 66.31}, {"type": "mrr_at_1", "value": 58.667}, {"type": "mrr_at_10", "value": 68.438}, {"type": "mrr_at_100", "value": 68.733}, {"type": "mrr_at_1000", "value": 68.757}, {"type": "mrr_at_3", "value": 66.389}, {"type": "mrr_at_5", "value": 67.456}, {"type": "ndcg_at_1", "value": 58.667}, {"type": "ndcg_at_10", "value": 72.506}, {"type": "ndcg_at_100", "value": 74.27}, {"type": "ndcg_at_1000", "value": 74.94800000000001}, {"type": "ndcg_at_3", "value": 67.977}, {"type": "ndcg_at_5", "value": 70.028}, {"type": "precision_at_1", "value": 58.667}, {"type": "precision_at_10", "value": 9.767000000000001}, {"type": "precision_at_100", "value": 1.073}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 27.0}, {"type": "precision_at_5", "value": 17.666999999999998}, {"type": "recall_at_1", "value": 56.094}, {"type": "recall_at_10", "value": 86.68900000000001}, {"type": "recall_at_100", "value": 94.333}, {"type": "recall_at_1000", "value": 99.667}, {"type": "recall_at_3", "value": 74.522}, {"type": "recall_at_5", "value": 79.611}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.83069306930693}, {"type": "cos_sim_ap", "value": 95.69184662911199}, {"type": "cos_sim_f1", "value": 91.4027149321267}, {"type": "cos_sim_precision", "value": 91.91102123356926}, {"type": "cos_sim_recall", "value": 90.9}, {"type": "dot_accuracy", "value": 99.69405940594059}, {"type": "dot_ap", "value": 90.21674151456216}, {"type": "dot_f1", "value": 84.4489179667841}, {"type": "dot_precision", "value": 85.00506585612969}, {"type": "dot_recall", "value": 83.89999999999999}, {"type": "euclidean_accuracy", "value": 99.83069306930693}, {"type": "euclidean_ap", "value": 95.67760109671087}, {"type": "euclidean_f1", "value": 91.19754350051177}, {"type": "euclidean_precision", "value": 93.39622641509435}, {"type": "euclidean_recall", "value": 89.1}, {"type": "manhattan_accuracy", "value": 99.83267326732673}, {"type": "manhattan_ap", "value": 95.69771347732625}, {"type": "manhattan_f1", "value": 91.32420091324201}, {"type": "manhattan_precision", "value": 92.68795056642637}, {"type": "manhattan_recall", "value": 90.0}, {"type": "max_accuracy", "value": 99.83267326732673}, {"type": "max_ap", "value": 95.69771347732625}, {"type": "max_f1", "value": 91.4027149321267}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB 
StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 64.47378332953092}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 33.79602531604151}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 53.80707639107175}, {"type": "mrr", "value": 54.64886522790935}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.852448373051395}, {"type": "cos_sim_spearman", "value": 32.51821499493775}, {"type": "dot_pearson", "value": 30.390650062190456}, {"type": "dot_spearman", "value": 30.588836159667636}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.198}, {"type": "map_at_10", "value": 1.51}, {"type": "map_at_100", "value": 8.882}, {"type": "map_at_1000", "value": 22.181}, {"type": "map_at_3", "value": 0.553}, {"type": "map_at_5", "value": 0.843}, {"type": "mrr_at_1", "value": 74.0}, {"type": "mrr_at_10", "value": 84.89999999999999}, {"type": "mrr_at_100", "value": 84.89999999999999}, {"type": "mrr_at_1000", "value": 84.89999999999999}, {"type": "mrr_at_3", "value": 84.0}, {"type": "mrr_at_5", "value": 84.89999999999999}, {"type": "ndcg_at_1", "value": 68.0}, {"type": "ndcg_at_10", "value": 64.792}, {"type": "ndcg_at_100", "value": 51.37199999999999}, {"type": "ndcg_at_1000", "value": 47.392}, {"type": "ndcg_at_3", "value": 68.46900000000001}, {"type": "ndcg_at_5", "value": 67.084}, {"type": "precision_at_1", "value": 74.0}, {"type": "precision_at_10", "value": 69.39999999999999}, {"type": "precision_at_100", "value": 53.080000000000005}, {"type": "precision_at_1000", "value": 21.258}, {"type": "precision_at_3", "value": 76.0}, {"type": "precision_at_5", "value": 73.2}, {"type": "recall_at_1", "value": 0.198}, {"type": "recall_at_10", "value": 1.7950000000000002}, {"type": "recall_at_100", "value": 12.626999999999999}, {"type": "recall_at_1000", "value": 44.84}, {"type": "recall_at_3", "value": 0.611}, {"type": "recall_at_5", "value": 0.959}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 1.4949999999999999}, {"type": "map_at_10", "value": 8.797}, {"type": "map_at_100", "value": 14.889}, {"type": "map_at_1000", "value": 16.309}, {"type": "map_at_3", "value": 4.389}, {"type": "map_at_5", "value": 6.776}, {"type": "mrr_at_1", "value": 18.367}, {"type": "mrr_at_10", "value": 35.844}, {"type": "mrr_at_100", "value": 37.119}, {"type": "mrr_at_1000", "value": 37.119}, {"type": "mrr_at_3", "value": 30.612000000000002}, {"type": "mrr_at_5", "value": 33.163}, {"type": "ndcg_at_1", "value": 16.326999999999998}, {"type": "ndcg_at_10", 
"value": 21.9}, {"type": "ndcg_at_100", "value": 34.705000000000005}, {"type": "ndcg_at_1000", "value": 45.709}, {"type": "ndcg_at_3", "value": 22.7}, {"type": "ndcg_at_5", "value": 23.197000000000003}, {"type": "precision_at_1", "value": 18.367}, {"type": "precision_at_10", "value": 21.02}, {"type": "precision_at_100", "value": 7.714}, {"type": "precision_at_1000", "value": 1.504}, {"type": "precision_at_3", "value": 26.531}, {"type": "precision_at_5", "value": 26.122}, {"type": "recall_at_1", "value": 1.4949999999999999}, {"type": "recall_at_10", "value": 15.504000000000001}, {"type": "recall_at_100", "value": 47.978}, {"type": "recall_at_1000", "value": 81.56}, {"type": "recall_at_3", "value": 5.569}, {"type": "recall_at_5", "value": 9.821}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 72.99279999999999}, {"type": "ap", "value": 15.459189680101492}, {"type": "f1", "value": 56.33023271441895}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 63.070175438596486}, {"type": "f1", "value": 63.28070758709465}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 50.076231309703054}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 87.21463908922931}, {"type": "cos_sim_ap", "value": 77.67287017966282}, {"type": "cos_sim_f1", "value": 70.34412955465588}, {"type": "cos_sim_precision", "value": 67.57413709285368}, {"type": "cos_sim_recall", "value": 73.35092348284961}, {"type": "dot_accuracy", "value": 85.04500208618943}, {"type": "dot_ap", "value": 70.4075203869744}, {"type": "dot_f1", "value": 66.18172537008678}, {"type": "dot_precision", "value": 64.08798813643104}, {"type": "dot_recall", "value": 68.41688654353561}, {"type": "euclidean_accuracy", "value": 87.17887584192646}, {"type": "euclidean_ap", "value": 77.5774128274464}, {"type": "euclidean_f1", "value": 70.09307972480777}, {"type": "euclidean_precision", "value": 71.70852884349986}, {"type": "euclidean_recall", "value": 68.54881266490766}, {"type": "manhattan_accuracy", "value": 87.28020504261787}, {"type": "manhattan_ap", "value": 77.57835820297892}, {"type": "manhattan_f1", "value": 70.23063591521131}, {"type": "manhattan_precision", "value": 70.97817299919159}, {"type": "manhattan_recall", "value": 69.49868073878628}, {"type": "max_accuracy", "value": 87.28020504261787}, {"type": "max_ap", "value": 77.67287017966282}, {"type": "max_f1", "value": 70.34412955465588}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": 
"cos_sim_accuracy", "value": 88.96650754841464}, {"type": "cos_sim_ap", "value": 86.00185968965064}, {"type": "cos_sim_f1", "value": 77.95861256351718}, {"type": "cos_sim_precision", "value": 74.70712773465067}, {"type": "cos_sim_recall", "value": 81.50600554357868}, {"type": "dot_accuracy", "value": 87.36950362867233}, {"type": "dot_ap", "value": 82.22071181147555}, {"type": "dot_f1", "value": 74.85680716698488}, {"type": "dot_precision", "value": 71.54688377316114}, {"type": "dot_recall", "value": 78.48783492454572}, {"type": "euclidean_accuracy", "value": 88.99561454573679}, {"type": "euclidean_ap", "value": 86.15882097229648}, {"type": "euclidean_f1", "value": 78.18463125322332}, {"type": "euclidean_precision", "value": 74.95408956067241}, {"type": "euclidean_recall", "value": 81.70619032953496}, {"type": "manhattan_accuracy", "value": 88.96650754841464}, {"type": "manhattan_ap", "value": 86.13133111232099}, {"type": "manhattan_f1", "value": 78.10771470160115}, {"type": "manhattan_precision", "value": 74.05465084184377}, {"type": "manhattan_recall", "value": 82.63012011087157}, {"type": "max_accuracy", "value": 88.99561454573679}, {"type": "max_ap", "value": 86.15882097229648}, {"type": "max_f1", "value": 78.18463125322332}]}]}]} | jncraton/stella-base-en-v2-ct2-int8 | null | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"arxiv:1612.00796",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:01:20+00:00 | [
"1612.00796"
] | [
"en"
] | TAGS
#sentence-transformers #feature-extraction #sentence-similarity #mteb #en #arxiv-1612.00796 #license-mit #model-index #endpoints_compatible #region-us
| News
[2024-04-06] Open-sourced the puff series of models, built specifically for retrieval and semantic-matching tasks, with extra attention to generalization and performance on private general-purpose test sets; the embedding dimension is variable and the models are bilingual (Chinese and English).
[2024-02-27] Open-sourced the stella-mrl-large-zh-v3.5-1792d model, which supports variable embedding dimensions.
[2024-02-17] Open-sourced the stella v3 series, a dialogue encoding model, and the related training data.
[2023-10-19] Open-sourced stella-base-en-v2, which is simple to use and requires no prefix text.
[2023-10-12] Open-sourced stella-base-zh-v2 and stella-large-zh-v2: better results and simple to use, with no prefix text required.
[2023-09-11] Open-sourced stella-base-zh and stella-large-zh.
Feel free to visit my profile page for the latest models; your valuable feedback is welcome!
stella model
------------
The complete training approach and process are documented in blog 1 and blog 2; comments and discussion are welcome.
stella is a general-purpose text encoder, which mainly includes the following models:
The training data mainly includes:
1. Open-source training data (wudao\_base\_200GB, m3e, and simclue), with a focus on selecting texts with lengths greater
than 512.
2. A batch of (question, paragraph) and (sentence, paragraph) data constructed on a general corpus using LLM.
The loss functions mainly include:
1. Contrastive learning loss function
2. Contrastive learning loss function with hard negative examples (based on bm25 and vector hard negatives)
3. EWC (Elastic Weights Consolidation), sketched after this list
4. cosent loss
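As a rough illustration of the EWC term in item 3, the regularizer adds a quadratic penalty that anchors parameters to their values from the previous training stage, weighted by a diagonal Fisher information estimate. This is a generic sketch of the standard formulation, not the project's actual training code; `fisher` and `old_params` are assumed to be precomputed dictionaries keyed by parameter name.

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1.0):
    """Elastic Weight Consolidation penalty (generic sketch).

    fisher: dict of per-parameter diagonal Fisher estimates.
    old_params: dict of parameter snapshots from the previous stage.
    """
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            # Penalize movement away from the old weights, scaled by Fisher
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty
```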
Model weight initialization:
stella-base-zh and stella-large-zh use piccolo-base-zh and piccolo-large-zh as the base models, respectively, and the
512-1024 position embedding uses the initialization strategy of hierarchical decomposed position encoding.
Training strategy:
One iterator for each type of data, separately calculating the loss.
Based on the stella models, stella-v2 uses more training data and, via knowledge distillation, removes the need for prepended instructions (such as piccolo's '查询: ' / '结果: ' and e5's 'query: ' / 'passage: ').
Thanks to SenseTime Research for open-sourcing the piccolo series of models.
Metric
------
#### C-MTEB leaderboard (Chinese)
#### MTEB leaderboard (English)
#### Reproduce our results
C-MTEB:
MTEB:
You can use the official script to reproduce our results: scripts/run\_mteb\_english.py
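For the MTEB side, a minimal reproduction sketch might look like the following; it assumes the `mteb` package's classic `MTEB` runner API and the upstream checkpoint id `infgrad/stella-base-en-v2` (both are assumptions, not taken from this card).

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("infgrad/stella-base-en-v2")  # assumed checkpoint id
evaluation = MTEB(task_langs=["en"])                      # all English tasks
evaluation.run(model, output_folder="results/stella-base-en-v2")
```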
#### Evaluation for long text
From hands-on inspection we found that the evaluation texts in C-MTEB are almost all shorter than 512,
and, worse still, for the texts that do exceed 512 the key information sits in the first half.
Here is an example from CMRC2018 that illustrates the problem:

The passage is over 800 characters long, more than 512, but for this question the first 40 characters are already enough for retrieval; the extra content is effectively noise for the model and actually hurts performance.

In short, existing datasets have two problems:
1) Too few texts are longer than 512.
2) Even for texts longer than 512, only the first 512 characters matter for retrieval.

As a result, a model's ability to encode long text cannot be evaluated accurately.

To address this, we collected relevant open-source data, filtered it with rules, and assembled six long-text test sets:

* CMRC2018: general encyclopedia
* CAIL: legal reading comprehension
* DRCD: Traditional Chinese encyclopedia, converted to Simplified
* Military: military-industry Q&A
* Squad: English reading comprehension, translated into Chinese
* Multifieldqa\_zh: Tsinghua's benchmark for evaluating long-text understanding in large models [9]

The processing rule keeps only examples whose answer appears after position 512; shorter test data is undersampled so that the short-to-long ratio is roughly 1:2, which means the model has to understand both short and long texts.
Except for the Military dataset, download links for the other 5 test sets are available at: URL

The evaluation metric is Recall@5, with results as follows:

Note: because long-text evaluation data is scarce, the train split was also used during construction; if you run your own evaluation, mind your model's training data to avoid leakage.
Usage
-----
#### stella 中文系列模型
stella-base-zh 和 stella-large-zh: 本模型是在piccolo基础上训练的,因此用法和piccolo完全一致
,即在检索重排任务上给query和passage加上'查询: '和'结果: '。对于短短匹配不需要做任何操作。
stella-base-zh-v2 和 stella-large-zh-v2: 本模型使用简单,任何使用场景中都不需要加前缀文本。
stella中文系列模型均使用mean pooling做为文本向量。
Usage with the sentence-transformers library:
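A minimal sketch (the checkpoint id `infgrad/stella-base-zh` is an assumption); note the '查询: ' / '结果: ' prefixes that the v1 models expect for retrieval, while the v2 models take raw text:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("infgrad/stella-base-zh")  # assumed checkpoint id

# v1 models: prepend the retrieval prefixes
query_embs = model.encode(["查询: 北京的旅游景点有哪些?"])
passage_embs = model.encode(["结果: 北京的主要景点包括故宫、长城和颐和园。"])

# v2 models (stella-base-zh-v2 / stella-large-zh-v2): no prefix needed
```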
Using the transformers library directly:
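And an equivalent sketch with the transformers library, applying the mean pooling described above over non-padding tokens (again assuming the `infgrad/stella-base-zh-v2` checkpoint id):

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "infgrad/stella-base-zh-v2"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer(["北京的旅游景点有哪些?"], padding=True, truncation=True,
                   max_length=1024, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state        # (batch, seq_len, dim)

# Mean pooling over valid (non-padding) tokens only
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
```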
#### stella models for English
Using Sentence-Transformers:
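A minimal sketch (assuming the upstream `infgrad/stella-base-en-v2` checkpoint id); no prefix text is needed:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("infgrad/stella-base-en-v2")  # assumed checkpoint id
embeddings = model.encode(["What is the capital of France?",
                           "Paris is the capital of France."])
print(embeddings.shape)
```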
Using HuggingFace Transformers:
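The direct transformers pattern mirrors the mean-pooling example in the Chinese section above; only the checkpoint name changes.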
Training Detail
---------------
Hardware: a single A100-80GB GPU.
Environment: torch 1.13.\*; transformers Trainer + DeepSpeed + gradient checkpointing.
Learning rate: 1e-6.
batch\_size: 1024 for the base model and 768 for the large model, each with an extra 20% hard negatives.
Data volume: about 1 million examples for the first version, of which roughly 200K were constructed with a 13B LLM. The v2 series grew to 20 million training examples.
ToDoList
--------
Evaluation stability:
During evaluation we found that Clustering results can differ from the official ones by about ±0.0x. The cause is that the clustering code does not set a random\_seed; the gap is negligible and does not affect the evaluation conclusions.

Higher-quality long-text training and test data: most of the training data was constructed with a 13B model, so some noise is inevitable.
The test data was largely assembled from MRC datasets, so the questions are all factoid-style and do not match the real-world distribution.

OOD performance: although many embedding models have appeared recently, on less common domains all of them, stella, OpenAI, and Cohere included,
still lose to BM25.
Reference
---------
1. URL
2. URL
3. URL
4. URL
5. URL
6. URL
7. URL
8. URL
9. URL
| [
"#### C-MTEB leaderboard (Chinese)",
"#### MTEB leaderboard (English)",
"#### Reproduce our results\n\n\nC-MTEB:\n\n\nMTEB:\n\n\nYou can use official script to reproduce our result. scripts/run\\_mteb\\_english.py",
"#### Evaluation for long text\n\n\n经过实际观察发现,C-MTEB的评测数据长度基本都是小于512的,\n更致命的是那些长度大于512的文本,其重点都在前半部分\n这里以CMRC2018的数据为例说明这个问题:\n\n\npassage长度为800多,大于512,但是对于这个question而言只需要前面40个字就足以检索,多的内容对于模型而言是一种噪声,反而降低了效果。 \n\n简言之,现有数据集的2个问题: \n\n1)长度大于512的过少 \n\n2)即便大于512,对于检索而言也只需要前512的文本内容 \n\n导致无法准确评估模型的长文本编码能力。\n\n\n为了解决这个问题,搜集了相关开源数据并使用规则进行过滤,最终整理了6份长文本测试集,他们分别是:\n\n\n* CMRC2018,通用百科\n* CAIL,法律阅读理解\n* DRCD,繁体百科,已转简体\n* Military,军工问答\n* Squad,英文阅读理解,已转中文\n* Multifieldqa\\_zh,清华的大模型长文本理解能力评测数据[9]\n\n\n处理规则是选取答案在512长度之后的文本,短的测试数据会欠采样一下,长短文本占比约为1:2,所以模型既得理解短文本也得理解长文本。\n除了Military数据集,我们提供了其他5个测试数据的下载地址:URL\n\n\n评测指标为Recall@5, 结果如下:\n\n\n\n注意: 因为长文本评测数据数量稀少,所以构造时也使用了train部分,如果自行评测,请注意模型的训练数据以免数据泄露。\n\n\nUsage\n-----",
"#### stella 中文系列模型\n\n\nstella-base-zh 和 stella-large-zh: 本模型是在piccolo基础上训练的,因此用法和piccolo完全一致\n,即在检索重排任务上给query和passage加上'查询: '和'结果: '。对于短短匹配不需要做任何操作。\n\n\nstella-base-zh-v2 和 stella-large-zh-v2: 本模型使用简单,任何使用场景中都不需要加前缀文本。\n\n\nstella中文系列模型均使用mean pooling做为文本向量。\n\n\n在sentence-transformer库中的使用方法:\n\n\n直接使用transformers库:",
"#### stella models for English\n\n\nUsing Sentence-Transformers:\n\n\nUsing HuggingFace Transformers:\n\n\nTraining Detail\n---------------\n\n\n硬件: 单卡A100-80GB\n\n\n环境: torch1.13.\\*; transformers-trainer + deepspeed + gradient-checkpointing\n\n\n学习率: 1e-6\n\n\nbatch\\_size: base模型为1024,额外增加20%的难负例;large模型为768,额外增加20%的难负例\n\n\n数据量: 第一版模型约100万,其中用LLM构造的数据约有200K. LLM模型大小为13b。v2系列模型到了2000万训练数据。\n\n\nToDoList\n--------\n\n\n评测的稳定性:\n评测过程中发现Clustering任务会和官方的结果不一致,大约有±0.0x的小差距,原因是聚类代码没有设置random\\_seed,差距可以忽略不计,不影响评测结论。\n\n\n更高质量的长文本训练和测试数据: 训练数据多是用13b模型构造的,肯定会存在噪声。\n测试数据基本都是从mrc数据整理来的,所以问题都是factoid类型,不符合真实分布。\n\n\nOOD的性能: 虽然近期出现了很多向量编码模型,但是对于不是那么通用的domain,这一众模型包括stella、openai和cohere,\n它们的效果均比不上BM25。\n\n\nReference\n---------\n\n\n1. URL\n2. URL\n3. URL\n4. URL\n5. URL\n6. URL\n7. URL\n8. URL\n9. URL"
] | [
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #mteb #en #arxiv-1612.00796 #license-mit #model-index #endpoints_compatible #region-us \n",
"#### C-MTEB leaderboard (Chinese)",
"#### MTEB leaderboard (English)",
"#### Reproduce our results\n\n\nC-MTEB:\n\n\nMTEB:\n\n\nYou can use official script to reproduce our result. scripts/run\\_mteb\\_english.py",
"#### Evaluation for long text\n\n\n经过实际观察发现,C-MTEB的评测数据长度基本都是小于512的,\n更致命的是那些长度大于512的文本,其重点都在前半部分\n这里以CMRC2018的数据为例说明这个问题:\n\n\npassage长度为800多,大于512,但是对于这个question而言只需要前面40个字就足以检索,多的内容对于模型而言是一种噪声,反而降低了效果。 \n\n简言之,现有数据集的2个问题: \n\n1)长度大于512的过少 \n\n2)即便大于512,对于检索而言也只需要前512的文本内容 \n\n导致无法准确评估模型的长文本编码能力。\n\n\n为了解决这个问题,搜集了相关开源数据并使用规则进行过滤,最终整理了6份长文本测试集,他们分别是:\n\n\n* CMRC2018,通用百科\n* CAIL,法律阅读理解\n* DRCD,繁体百科,已转简体\n* Military,军工问答\n* Squad,英文阅读理解,已转中文\n* Multifieldqa\\_zh,清华的大模型长文本理解能力评测数据[9]\n\n\n处理规则是选取答案在512长度之后的文本,短的测试数据会欠采样一下,长短文本占比约为1:2,所以模型既得理解短文本也得理解长文本。\n除了Military数据集,我们提供了其他5个测试数据的下载地址:URL\n\n\n评测指标为Recall@5, 结果如下:\n\n\n\n注意: 因为长文本评测数据数量稀少,所以构造时也使用了train部分,如果自行评测,请注意模型的训练数据以免数据泄露。\n\n\nUsage\n-----",
"#### stella 中文系列模型\n\n\nstella-base-zh 和 stella-large-zh: 本模型是在piccolo基础上训练的,因此用法和piccolo完全一致\n,即在检索重排任务上给query和passage加上'查询: '和'结果: '。对于短短匹配不需要做任何操作。\n\n\nstella-base-zh-v2 和 stella-large-zh-v2: 本模型使用简单,任何使用场景中都不需要加前缀文本。\n\n\nstella中文系列模型均使用mean pooling做为文本向量。\n\n\n在sentence-transformer库中的使用方法:\n\n\n直接使用transformers库:",
"#### stella models for English\n\n\nUsing Sentence-Transformers:\n\n\nUsing HuggingFace Transformers:\n\n\nTraining Detail\n---------------\n\n\n硬件: 单卡A100-80GB\n\n\n环境: torch1.13.\\*; transformers-trainer + deepspeed + gradient-checkpointing\n\n\n学习率: 1e-6\n\n\nbatch\\_size: base模型为1024,额外增加20%的难负例;large模型为768,额外增加20%的难负例\n\n\n数据量: 第一版模型约100万,其中用LLM构造的数据约有200K. LLM模型大小为13b。v2系列模型到了2000万训练数据。\n\n\nToDoList\n--------\n\n\n评测的稳定性:\n评测过程中发现Clustering任务会和官方的结果不一致,大约有±0.0x的小差距,原因是聚类代码没有设置random\\_seed,差距可以忽略不计,不影响评测结论。\n\n\n更高质量的长文本训练和测试数据: 训练数据多是用13b模型构造的,肯定会存在噪声。\n测试数据基本都是从mrc数据整理来的,所以问题都是factoid类型,不符合真实分布。\n\n\nOOD的性能: 虽然近期出现了很多向量编码模型,但是对于不是那么通用的domain,这一众模型包括stella、openai和cohere,\n它们的效果均比不上BM25。\n\n\nReference\n---------\n\n\n1. URL\n2. URL\n3. URL\n4. URL\n5. URL\n6. URL\n7. URL\n8. URL\n9. URL"
] |
text-classification | transformers | ## Metrics
- loss: 0.6542
- accuracy: 0.8397
- precision: 0.8389
- recall: 0.8397
- precision_macro: 0.7972
- recall_macro: 0.7630
- macro_fpr: 0.0140
- weighted_fpr: 0.0135
- weighted_specificity: 0.9785
- macro_specificity: 0.9879
- weighted_sensitivity: 0.8397
- macro_sensitivity: 0.7630
- f1_micro: 0.8397
- f1_macro: 0.7711
- f1_weighted: 0.8376
- runtime: 23.8413
- samples_per_second: 54.1500
- steps_per_second: 6.7950
# UNTRAINED-InLegal-InLegal-legal-merge-ties-d-053-w-050
UNTRAINED-InLegal-InLegal-legal-merge-ties-d-053-w-050 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [xshubhamx/InLegalBERT](https://huggingface.co/xshubhamx/InLegalBERT)
* [xshubhamx/legal-bert-base-uncased](https://huggingface.co/xshubhamx/legal-bert-base-uncased)
## 🧩 Configuration
```yaml
models:
- model: xshubhamx/InLegalBERT
parameters:
density: 0.53
weight: 0.5
- model: xshubhamx/legal-bert-base-uncased
parameters:
density: 0.53
weight: 0.5
merge_method: ties
base_model: xshubhamx/REAL-InLegalBERT
parameters:
normalize: true
int8_mask: true
dtype: float16
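# A typical invocation for this config (assuming mergekit is installed via
# `pip install mergekit`) would be:
#   mergekit-yaml config.yaml ./merged-model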
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "xshubhamx/InLegalBERT", "xshubhamx/legal-bert-base-uncased"]} | xshubhamx/UNTRAINED-InLegal-InLegal-legal-merge-ties-d-053-w-050 | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"merge",
"mergekit",
"lazymergekit",
"xshubhamx/InLegalBERT",
"xshubhamx/legal-bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:01:51+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #merge #mergekit #lazymergekit #xshubhamx/InLegalBERT #xshubhamx/legal-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ## Metrics
- loss: 0.6542
- accuracy: 0.8397
- precision: 0.8389
- recall: 0.8397
- precision_macro: 0.7972
- recall_macro: 0.7630
- macro_fpr: 0.0140
- weighted_fpr: 0.0135
- weighted_specificity: 0.9785
- macro_specificity: 0.9879
- weighted_sensitivity: 0.8397
- macro_sensitivity: 0.7630
- f1_micro: 0.8397
- f1_macro: 0.7711
- f1_weighted: 0.8376
- runtime: 23.8413
- samples_per_second: 54.1500
- steps_per_second: 6.7950
# UNTRAINED-InLegal-InLegal-legal-merge-ties-d-053-w-050
UNTRAINED-InLegal-InLegal-legal-merge-ties-d-053-w-050 is a merge of the following models using mergekit:
* xshubhamx/InLegalBERT
* xshubhamx/legal-bert-base-uncased
## Configuration
| [
"## Metrics\n\n- loss: 0.6542\n- accuracy: 0.8397\n- precision: 0.8389\n- recall: 0.8397\n- precision_macro: 0.7972\n- recall_macro: 0.7630\n- macro_fpr: 0.0140\n- weighted_fpr: 0.0135\n- weighted_specificity: 0.9785\n- macro_specificity: 0.9879\n- weighted_sensitivity: 0.8397\n- macro_sensitivity: 0.7630\n- f1_micro: 0.8397\n- f1_macro: 0.7711\n- f1_weighted: 0.8376\n- runtime: 23.8413\n- samples_per_second: 54.1500\n- steps_per_second: 6.7950",
"# UNTRAINED-InLegal-InLegal-legal-merge-ties-d-053-w-050\n\nUNTRAINED-InLegal-InLegal-legal-merge-ties-d-053-w-050 is a merge of the following models using mergekit:\n* xshubhamx/InLegalBERT\n* xshubhamx/legal-bert-base-uncased",
"## Configuration"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #merge #mergekit #lazymergekit #xshubhamx/InLegalBERT #xshubhamx/legal-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## Metrics\n\n- loss: 0.6542\n- accuracy: 0.8397\n- precision: 0.8389\n- recall: 0.8397\n- precision_macro: 0.7972\n- recall_macro: 0.7630\n- macro_fpr: 0.0140\n- weighted_fpr: 0.0135\n- weighted_specificity: 0.9785\n- macro_specificity: 0.9879\n- weighted_sensitivity: 0.8397\n- macro_sensitivity: 0.7630\n- f1_micro: 0.8397\n- f1_macro: 0.7711\n- f1_weighted: 0.8376\n- runtime: 23.8413\n- samples_per_second: 54.1500\n- steps_per_second: 6.7950",
"# UNTRAINED-InLegal-InLegal-legal-merge-ties-d-053-w-050\n\nUNTRAINED-InLegal-InLegal-legal-merge-ties-d-053-w-050 is a merge of the following models using mergekit:\n* xshubhamx/InLegalBERT\n* xshubhamx/legal-bert-base-uncased",
"## Configuration"
] |
text-generation | transformers |
## Llamacpp Quantizations of OrpoLlama-3-8B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/experimental">experimental</a> for quantization.
Original model: https://huggingface.co/mlabonne/OrpoLlama-3-8B
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [OrpoLlama-3-8B-Q8_0.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [OrpoLlama-3-8B-Q6_K.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [OrpoLlama-3-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [OrpoLlama-3-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [OrpoLlama-3-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [OrpoLlama-3-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [OrpoLlama-3-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [OrpoLlama-3-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [OrpoLlama-3-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [OrpoLlama-3-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [OrpoLlama-3-8B-IQ3_M.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [OrpoLlama-3-8B-IQ3_S.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [OrpoLlama-3-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [OrpoLlama-3-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [OrpoLlama-3-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [OrpoLlama-3-8B-Q2_K.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [OrpoLlama-3-8B-IQ2_M.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [OrpoLlama-3-8B-IQ2_S.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [OrpoLlama-3-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [OrpoLlama-3-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [OrpoLlama-3-8B-IQ1_M.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [OrpoLlama-3-8B-IQ1_S.gguf](https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF/blob/main/OrpoLlama-3-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
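
To grab a single quant file rather than cloning the whole repo, one option is the `huggingface_hub` Python client (the filename below is just an example from the table above):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/OrpoLlama-3-8B-GGUF",
    filename="OrpoLlama-3-8B-Q4_K_M.gguf",  # pick any file from the table
)
print(path)  # local path to the downloaded GGUF file
```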
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which AMD cards can also use, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["orpo", "llama 3", "rlhf", "sft"], "datasets": ["mlabonne/orpo-dpo-mix-40k"], "quantized_by": "bartowski", "pipeline_tag": "text-generation"} | bartowski/OrpoLlama-3-8B-GGUF | null | [
"transformers",
"gguf",
"orpo",
"llama 3",
"rlhf",
"sft",
"text-generation",
"en",
"dataset:mlabonne/orpo-dpo-mix-40k",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:02:27+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #orpo #llama 3 #rlhf #sft #text-generation #en #dataset-mlabonne/orpo-dpo-mix-40k #license-other #endpoints_compatible #region-us
| Llamacpp Quantizations of OrpoLlama-3-8B
----------------------------------------
Using <a href="URL release <a href="URL for quantization.
Original model: URL
All quants made using imatrix option with dataset provided by Kalomaze here
Prompt format
-------------
Download a file (not the whole branch) from below:
--------------------------------------------------
Which file should I choose?
---------------------------
A great write up with charts showing various performances is provided by Artefact2 here
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX\_K\_X', like Q5\_K\_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
URL feature matrix
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX\_X, like IQ3\_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: URL
| [] | [
"TAGS\n#transformers #gguf #orpo #llama 3 #rlhf #sft #text-generation #en #dataset-mlabonne/orpo-dpo-mix-40k #license-other #endpoints_compatible #region-us \n"
] |
sentence-similarity | sentence-transformers | <h1 align="center">GIST small Embedding v0</h1>
*GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning*
The model is fine-tuned on top of the [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) using the [MEDI dataset](https://github.com/xlang-ai/instructor-embedding.git) augmented with mined triplets from the [MTEB Classification](https://huggingface.co/mteb) training dataset (excluding data from the Amazon Polarity Classification task).
The model does not require any instruction for generating embeddings. This means that queries for retrieval tasks can be directly encoded without crafting instructions.
Technical paper: [GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning](https://arxiv.org/abs/2402.16829)
# Data
The dataset used is a compilation of the MEDI and MTEB Classification training datasets. Third-party datasets may be subject to additional terms and conditions under their associated licenses. A HuggingFace Dataset version of the compiled dataset, and the specific revision used to train the model, is available:
- Dataset: [avsolatorio/medi-data-mteb_avs_triplets](https://huggingface.co/datasets/avsolatorio/medi-data-mteb_avs_triplets)
- Revision: 238a0499b6e6b690cc64ea56fde8461daa8341bb
The dataset contains a `task_type` key, which can be used to select only the mteb classification tasks (prefixed with `mteb_`).
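
For example, the MTEB-derived triplets could be selected with the `datasets` library like this (a sketch; it assumes the dataset exposes a `train` split):

```Python
from datasets import load_dataset

ds = load_dataset(
    "avsolatorio/medi-data-mteb_avs_triplets",
    revision="238a0499b6e6b690cc64ea56fde8461daa8341bb",
    split="train",  # assumed split name
)

# Keep only the MTEB classification triplets
mteb_only = ds.filter(lambda ex: ex["task_type"].startswith("mteb_"))
```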
The **MEDI Dataset** is published in the following paper: [One Embedder, Any Task: Instruction-Finetuned Text Embeddings](https://arxiv.org/abs/2212.09741).
The MTEB Benchmark results of the GIST embedding model, compared with the base model, suggest that the fine-tuning dataset has perturbed the model considerably, resulting in significant improvements on certain tasks while degrading performance on others.
The retrieval performance for the TRECCOVID task is of note. The fine-tuning dataset does not contain significant knowledge about COVID-19, which could have caused the observed performance degradation. We found some evidence, detailed in the paper, that thematic coverage of the fine-tuning data can affect downstream performance.
# Usage
The model can be easily loaded using the Sentence Transformers library.
```Python
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer
revision = None # Replace with the specific revision to ensure reproducibility if the model is updated.
model = SentenceTransformer("avsolatorio/GIST-small-Embedding-v0", revision=revision)
texts = [
"Illustration of the REaLTabFormer model. The left block shows the non-relational tabular data model using GPT-2 with a causal LM head. In contrast, the right block shows how a relational dataset's child table is modeled using a sequence-to-sequence (Seq2Seq) model. The Seq2Seq model uses the observations in the parent table to condition the generation of the observations in the child table. The trained GPT-2 model on the parent table, with weights frozen, is also used as the encoder in the Seq2Seq model.",
"Predicting human mobility holds significant practical value, with applications ranging from enhancing disaster risk planning to simulating epidemic spread. In this paper, we present the GeoFormer, a decoder-only transformer model adapted from the GPT architecture to forecast human mobility.",
"As the economies of Southeast Asia continue adopting digital technologies, policy makers increasingly ask how to prepare the workforce for emerging labor demands. However, little is known about the skills that workers need to adapt to these changes"
]
# Compute embeddings
embeddings = model.encode(texts, convert_to_tensor=True)
# Compute cosine-similarity for each pair of sentences
scores = F.cosine_similarity(embeddings.unsqueeze(1), embeddings.unsqueeze(0), dim=-1)
print(scores.cpu().numpy())
```
# Training Parameters
Below are the training parameters used to fine-tune the model:
```
Epochs = 40
Warmup ratio = 0.1
Learning rate = 5e-6
Batch size = 16
Checkpoint step = 102000
Contrastive loss temperature = 0.01
```
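
For intuition, a temperature of 0.01 sharpens the softmax over in-batch similarities considerably. A generic in-batch-negatives contrastive loss with this temperature might look like the sketch below; this illustrates the loss family, not the exact GISTEmbed training code.

```Python
import torch
import torch.nn.functional as F

def contrastive_loss(query_emb, pos_emb, temperature=0.01):
    # In-batch negatives: row i of pos_emb is the positive for query i;
    # every other row in the batch serves as a negative.
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(pos_emb, dim=-1)
    logits = q @ p.T / temperature                  # (batch, batch) cosine sims
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```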
# Evaluation
The model was evaluated using the [MTEB Evaluation](https://huggingface.co/mteb) suite.
# Citation
Please cite our work if you use GISTEmbed or the datasets we published in your projects or research. 🤗
```
@article{solatorio2024gistembed,
title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
author={Aivin V. Solatorio},
journal={arXiv preprint arXiv:2402.16829},
year={2024},
url={https://arxiv.org/abs/2402.16829},
eprint={2402.16829},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
# Acknowledgements
This work is supported by the "KCP IV - Exploring Data Use in the Development Economics Literature using Large Language Models (AI and LLMs)" project funded by the [Knowledge for Change Program (KCP)](https://www.worldbank.org/en/programs/knowledge-for-change) of the World Bank - RA-P503405-RESE-TF0C3444.
The findings, interpretations, and conclusions expressed in this material are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent. | {"language": ["en"], "license": "mit", "library_name": "sentence-transformers", "tags": ["feature-extraction", "mteb", "sentence-similarity", "sentence-transformers"], "pipeline_tag": "sentence-similarity", "model-index": [{"name": "GIST-small-Embedding-v0", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 75.26865671641791}, {"type": "ap", "value": 38.25623793370476}, {"type": "f1", "value": 69.26434651320257}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 93.232225}, {"type": "ap", "value": 89.97936072879344}, {"type": "f1", "value": 93.22122653806187}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 49.715999999999994}, {"type": "f1", "value": 49.169789920136076}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 34.922}, {"type": "map_at_10", "value": 50.524}, {"type": "map_at_100", "value": 51.247}, {"type": "map_at_1000", "value": 51.249}, {"type": "map_at_3", "value": 45.887}, {"type": "map_at_5", "value": 48.592999999999996}, {"type": "mrr_at_1", "value": 34.922}, {"type": "mrr_at_10", "value": 50.382000000000005}, {"type": "mrr_at_100", "value": 51.104000000000006}, {"type": "mrr_at_1000", "value": 51.105999999999995}, {"type": "mrr_at_3", "value": 45.733000000000004}, {"type": "mrr_at_5", "value": 48.428}, {"type": "ndcg_at_1", "value": 34.922}, {"type": "ndcg_at_10", "value": 59.12}, {"type": "ndcg_at_100", "value": 62.083999999999996}, {"type": "ndcg_at_1000", "value": 62.137}, {"type": "ndcg_at_3", "value": 49.616}, {"type": "ndcg_at_5", "value": 54.501}, {"type": "precision_at_1", "value": 34.922}, {"type": "precision_at_10", "value": 8.649}, {"type": "precision_at_100", "value": 0.991}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 20.152}, {"type": "precision_at_5", "value": 14.466999999999999}, {"type": "recall_at_1", "value": 34.922}, {"type": "recall_at_10", "value": 86.48599999999999}, {"type": "recall_at_100", "value": 99.14699999999999}, {"type": "recall_at_1000", "value": 99.57300000000001}, {"type": "recall_at_3", "value": 60.455000000000005}, {"type": "recall_at_5", "value": 72.333}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 47.623282347623714}]}, {"task": {"type": 
"Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 39.86487843524932}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 62.3290291318171}, {"type": "mrr", "value": 75.2379853141626}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.52002953574285}, {"type": "cos_sim_spearman", "value": 86.98752423842483}, {"type": "euclidean_pearson", "value": 86.89442688314197}, {"type": "euclidean_spearman", "value": 86.88631711307471}, {"type": "manhattan_pearson", "value": 87.03723618507175}, {"type": "manhattan_spearman", "value": 86.76041062975224}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 86.64935064935065}, {"type": "f1", "value": 86.61903824934998}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 39.21904455377494}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 35.43342755570654}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "BeIR/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 31.843}, {"type": "map_at_10", "value": 43.379}, {"type": "map_at_100", "value": 44.946999999999996}, {"type": "map_at_1000", "value": 45.078}, {"type": "map_at_3", "value": 39.598}, {"type": "map_at_5", "value": 41.746}, {"type": "mrr_at_1", "value": 39.199}, {"type": "mrr_at_10", "value": 49.672}, {"type": "mrr_at_100", "value": 50.321000000000005}, {"type": "mrr_at_1000", "value": 50.365}, {"type": "mrr_at_3", "value": 46.805}, {"type": "mrr_at_5", "value": 48.579}, {"type": "ndcg_at_1", "value": 39.199}, {"type": "ndcg_at_10", "value": 50.163999999999994}, {"type": "ndcg_at_100", "value": 55.418}, {"type": "ndcg_at_1000", "value": 57.353}, {"type": "ndcg_at_3", "value": 44.716}, {"type": "ndcg_at_5", "value": 47.268}, {"type": "precision_at_1", "value": 39.199}, {"type": "precision_at_10", "value": 9.757}, {"type": "precision_at_100", "value": 1.552}, {"type": "precision_at_1000", "value": 0.20500000000000002}, {"type": "precision_at_3", "value": 21.602}, {"type": "precision_at_5", "value": 15.479000000000001}, {"type": "recall_at_1", "value": 31.843}, {"type": "recall_at_10", "value": 62.743}, {"type": "recall_at_100", "value": 84.78099999999999}, {"type": "recall_at_1000", "value": 96.86099999999999}, {"type": "recall_at_3", "value": 46.927}, {"type": "recall_at_5", "value": 54.355}, {"type": 
"map_at_1", "value": 29.321}, {"type": "map_at_10", "value": 39.062999999999995}, {"type": "map_at_100", "value": 40.403}, {"type": "map_at_1000", "value": 40.534}, {"type": "map_at_3", "value": 36.367}, {"type": "map_at_5", "value": 37.756}, {"type": "mrr_at_1", "value": 35.987}, {"type": "mrr_at_10", "value": 44.708999999999996}, {"type": "mrr_at_100", "value": 45.394}, {"type": "mrr_at_1000", "value": 45.436}, {"type": "mrr_at_3", "value": 42.463}, {"type": "mrr_at_5", "value": 43.663000000000004}, {"type": "ndcg_at_1", "value": 35.987}, {"type": "ndcg_at_10", "value": 44.585}, {"type": "ndcg_at_100", "value": 49.297999999999995}, {"type": "ndcg_at_1000", "value": 51.315}, {"type": "ndcg_at_3", "value": 40.569}, {"type": "ndcg_at_5", "value": 42.197}, {"type": "precision_at_1", "value": 35.987}, {"type": "precision_at_10", "value": 8.369}, {"type": "precision_at_100", "value": 1.366}, {"type": "precision_at_1000", "value": 0.184}, {"type": "precision_at_3", "value": 19.427}, {"type": "precision_at_5", "value": 13.58}, {"type": "recall_at_1", "value": 29.321}, {"type": "recall_at_10", "value": 54.333}, {"type": "recall_at_100", "value": 74.178}, {"type": "recall_at_1000", "value": 86.732}, {"type": "recall_at_3", "value": 42.46}, {"type": "recall_at_5", "value": 47.089999999999996}, {"type": "map_at_1", "value": 38.811}, {"type": "map_at_10", "value": 51.114000000000004}, {"type": "map_at_100", "value": 52.22}, {"type": "map_at_1000", "value": 52.275000000000006}, {"type": "map_at_3", "value": 47.644999999999996}, {"type": "map_at_5", "value": 49.675000000000004}, {"type": "mrr_at_1", "value": 44.389}, {"type": "mrr_at_10", "value": 54.459}, {"type": "mrr_at_100", "value": 55.208999999999996}, {"type": "mrr_at_1000", "value": 55.239000000000004}, {"type": "mrr_at_3", "value": 51.954}, {"type": "mrr_at_5", "value": 53.571999999999996}, {"type": "ndcg_at_1", "value": 44.389}, {"type": "ndcg_at_10", "value": 56.979}, {"type": "ndcg_at_100", "value": 61.266}, {"type": "ndcg_at_1000", "value": 62.315}, {"type": "ndcg_at_3", "value": 51.342}, {"type": "ndcg_at_5", "value": 54.33}, {"type": "precision_at_1", "value": 44.389}, {"type": "precision_at_10", "value": 9.26}, {"type": "precision_at_100", "value": 1.226}, {"type": "precision_at_1000", "value": 0.136}, {"type": "precision_at_3", "value": 22.926}, {"type": "precision_at_5", "value": 15.987000000000002}, {"type": "recall_at_1", "value": 38.811}, {"type": "recall_at_10", "value": 70.841}, {"type": "recall_at_100", "value": 89.218}, {"type": "recall_at_1000", "value": 96.482}, {"type": "recall_at_3", "value": 56.123999999999995}, {"type": "recall_at_5", "value": 63.322}, {"type": "map_at_1", "value": 25.378}, {"type": "map_at_10", "value": 34.311}, {"type": "map_at_100", "value": 35.399}, {"type": "map_at_1000", "value": 35.482}, {"type": "map_at_3", "value": 31.917}, {"type": "map_at_5", "value": 33.275}, {"type": "mrr_at_1", "value": 27.683999999999997}, {"type": "mrr_at_10", "value": 36.575}, {"type": "mrr_at_100", "value": 37.492}, {"type": "mrr_at_1000", "value": 37.556}, {"type": "mrr_at_3", "value": 34.35}, {"type": "mrr_at_5", "value": 35.525}, {"type": "ndcg_at_1", "value": 27.683999999999997}, {"type": "ndcg_at_10", "value": 39.247}, {"type": "ndcg_at_100", "value": 44.424}, {"type": "ndcg_at_1000", "value": 46.478}, {"type": "ndcg_at_3", "value": 34.684}, {"type": "ndcg_at_5", "value": 36.886}, {"type": "precision_at_1", "value": 27.683999999999997}, {"type": "precision_at_10", "value": 5.989}, {"type": "precision_at_100", 
"value": 0.899}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 14.84}, {"type": "precision_at_5", "value": 10.215}, {"type": "recall_at_1", "value": 25.378}, {"type": "recall_at_10", "value": 52.195}, {"type": "recall_at_100", "value": 75.764}, {"type": "recall_at_1000", "value": 91.012}, {"type": "recall_at_3", "value": 39.885999999999996}, {"type": "recall_at_5", "value": 45.279}, {"type": "map_at_1", "value": 17.326}, {"type": "map_at_10", "value": 25.247000000000003}, {"type": "map_at_100", "value": 26.473000000000003}, {"type": "map_at_1000", "value": 26.579000000000004}, {"type": "map_at_3", "value": 22.466}, {"type": "map_at_5", "value": 24.113}, {"type": "mrr_at_1", "value": 21.393}, {"type": "mrr_at_10", "value": 30.187}, {"type": "mrr_at_100", "value": 31.089}, {"type": "mrr_at_1000", "value": 31.15}, {"type": "mrr_at_3", "value": 27.279999999999998}, {"type": "mrr_at_5", "value": 29.127}, {"type": "ndcg_at_1", "value": 21.393}, {"type": "ndcg_at_10", "value": 30.668}, {"type": "ndcg_at_100", "value": 36.543}, {"type": "ndcg_at_1000", "value": 39.181}, {"type": "ndcg_at_3", "value": 25.552000000000003}, {"type": "ndcg_at_5", "value": 28.176000000000002}, {"type": "precision_at_1", "value": 21.393}, {"type": "precision_at_10", "value": 5.784000000000001}, {"type": "precision_at_100", "value": 1.001}, {"type": "precision_at_1000", "value": 0.136}, {"type": "precision_at_3", "value": 12.231}, {"type": "precision_at_5", "value": 9.179}, {"type": "recall_at_1", "value": 17.326}, {"type": "recall_at_10", "value": 42.415000000000006}, {"type": "recall_at_100", "value": 68.605}, {"type": "recall_at_1000", "value": 87.694}, {"type": "recall_at_3", "value": 28.343}, {"type": "recall_at_5", "value": 35.086}, {"type": "map_at_1", "value": 29.069}, {"type": "map_at_10", "value": 40.027}, {"type": "map_at_100", "value": 41.308}, {"type": "map_at_1000", "value": 41.412}, {"type": "map_at_3", "value": 36.864000000000004}, {"type": "map_at_5", "value": 38.641999999999996}, {"type": "mrr_at_1", "value": 35.707}, {"type": "mrr_at_10", "value": 45.527}, {"type": "mrr_at_100", "value": 46.348}, {"type": "mrr_at_1000", "value": 46.392}, {"type": "mrr_at_3", "value": 43.086}, {"type": "mrr_at_5", "value": 44.645}, {"type": "ndcg_at_1", "value": 35.707}, {"type": "ndcg_at_10", "value": 46.117000000000004}, {"type": "ndcg_at_100", "value": 51.468}, {"type": "ndcg_at_1000", "value": 53.412000000000006}, {"type": "ndcg_at_3", "value": 41.224}, {"type": "ndcg_at_5", "value": 43.637}, {"type": "precision_at_1", "value": 35.707}, {"type": "precision_at_10", "value": 8.459999999999999}, {"type": "precision_at_100", "value": 1.2970000000000002}, {"type": "precision_at_1000", "value": 0.165}, {"type": "precision_at_3", "value": 19.731}, {"type": "precision_at_5", "value": 14.013}, {"type": "recall_at_1", "value": 29.069}, {"type": "recall_at_10", "value": 58.343999999999994}, {"type": "recall_at_100", "value": 81.296}, {"type": "recall_at_1000", "value": 93.974}, {"type": "recall_at_3", "value": 44.7}, {"type": "recall_at_5", "value": 50.88700000000001}, {"type": "map_at_1", "value": 23.905}, {"type": "map_at_10", "value": 33.983000000000004}, {"type": "map_at_100", "value": 35.372}, {"type": "map_at_1000", "value": 35.487}, {"type": "map_at_3", "value": 30.902}, {"type": "map_at_5", "value": 32.505}, {"type": "mrr_at_1", "value": 29.794999999999998}, {"type": "mrr_at_10", "value": 39.28}, {"type": "mrr_at_100", "value": 40.215}, {"type": "mrr_at_1000", "value": 
40.276}, {"type": "mrr_at_3", "value": 36.701}, {"type": "mrr_at_5", "value": 38.105}, {"type": "ndcg_at_1", "value": 29.794999999999998}, {"type": "ndcg_at_10", "value": 40.041}, {"type": "ndcg_at_100", "value": 45.884}, {"type": "ndcg_at_1000", "value": 48.271}, {"type": "ndcg_at_3", "value": 34.931}, {"type": "ndcg_at_5", "value": 37.044}, {"type": "precision_at_1", "value": 29.794999999999998}, {"type": "precision_at_10", "value": 7.546}, {"type": "precision_at_100", "value": 1.216}, {"type": "precision_at_1000", "value": 0.158}, {"type": "precision_at_3", "value": 16.933}, {"type": "precision_at_5", "value": 12.1}, {"type": "recall_at_1", "value": 23.905}, {"type": "recall_at_10", "value": 52.945}, {"type": "recall_at_100", "value": 77.551}, {"type": "recall_at_1000", "value": 93.793}, {"type": "recall_at_3", "value": 38.364}, {"type": "recall_at_5", "value": 44.044}, {"type": "map_at_1", "value": 25.24441666666667}, {"type": "map_at_10", "value": 34.4595}, {"type": "map_at_100", "value": 35.699999999999996}, {"type": "map_at_1000", "value": 35.8155}, {"type": "map_at_3", "value": 31.608333333333338}, {"type": "map_at_5", "value": 33.189416666666666}, {"type": "mrr_at_1", "value": 29.825250000000004}, {"type": "mrr_at_10", "value": 38.60875}, {"type": "mrr_at_100", "value": 39.46575}, {"type": "mrr_at_1000", "value": 39.52458333333333}, {"type": "mrr_at_3", "value": 36.145166666666675}, {"type": "mrr_at_5", "value": 37.57625}, {"type": "ndcg_at_1", "value": 29.825250000000004}, {"type": "ndcg_at_10", "value": 39.88741666666667}, {"type": "ndcg_at_100", "value": 45.17966666666667}, {"type": "ndcg_at_1000", "value": 47.440583333333336}, {"type": "ndcg_at_3", "value": 35.04591666666666}, {"type": "ndcg_at_5", "value": 37.32025}, {"type": "precision_at_1", "value": 29.825250000000004}, {"type": "precision_at_10", "value": 7.07225}, {"type": "precision_at_100", "value": 1.1462499999999998}, {"type": "precision_at_1000", "value": 0.15325}, {"type": "precision_at_3", "value": 16.18375}, {"type": "precision_at_5", "value": 11.526833333333334}, {"type": "recall_at_1", "value": 25.24441666666667}, {"type": "recall_at_10", "value": 51.744916666666676}, {"type": "recall_at_100", "value": 75.04574999999998}, {"type": "recall_at_1000", "value": 90.65558333333334}, {"type": "recall_at_3", "value": 38.28349999999999}, {"type": "recall_at_5", "value": 44.16591666666667}, {"type": "map_at_1", "value": 24.237000000000002}, {"type": "map_at_10", "value": 30.667}, {"type": "map_at_100", "value": 31.592}, {"type": "map_at_1000", "value": 31.688}, {"type": "map_at_3", "value": 28.810999999999996}, {"type": "map_at_5", "value": 29.788999999999998}, {"type": "mrr_at_1", "value": 26.840000000000003}, {"type": "mrr_at_10", "value": 33.305}, {"type": "mrr_at_100", "value": 34.089000000000006}, {"type": "mrr_at_1000", "value": 34.159}, {"type": "mrr_at_3", "value": 31.518}, {"type": "mrr_at_5", "value": 32.469}, {"type": "ndcg_at_1", "value": 26.840000000000003}, {"type": "ndcg_at_10", "value": 34.541}, {"type": "ndcg_at_100", "value": 39.206}, {"type": "ndcg_at_1000", "value": 41.592}, {"type": "ndcg_at_3", "value": 31.005}, {"type": "ndcg_at_5", "value": 32.554}, {"type": "precision_at_1", "value": 26.840000000000003}, {"type": "precision_at_10", "value": 5.3069999999999995}, {"type": "precision_at_100", "value": 0.8340000000000001}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 13.292000000000002}, {"type": "precision_at_5", "value": 9.049}, {"type": 
"recall_at_1", "value": 24.237000000000002}, {"type": "recall_at_10", "value": 43.862}, {"type": "recall_at_100", "value": 65.352}, {"type": "recall_at_1000", "value": 82.704}, {"type": "recall_at_3", "value": 34.009}, {"type": "recall_at_5", "value": 37.878}, {"type": "map_at_1", "value": 16.482}, {"type": "map_at_10", "value": 23.249}, {"type": "map_at_100", "value": 24.388}, {"type": "map_at_1000", "value": 24.519}, {"type": "map_at_3", "value": 20.971}, {"type": "map_at_5", "value": 22.192}, {"type": "mrr_at_1", "value": 19.993}, {"type": "mrr_at_10", "value": 26.985}, {"type": "mrr_at_100", "value": 27.975}, {"type": "mrr_at_1000", "value": 28.052}, {"type": "mrr_at_3", "value": 24.954}, {"type": "mrr_at_5", "value": 26.070999999999998}, {"type": "ndcg_at_1", "value": 19.993}, {"type": "ndcg_at_10", "value": 27.656}, {"type": "ndcg_at_100", "value": 33.256}, {"type": "ndcg_at_1000", "value": 36.275}, {"type": "ndcg_at_3", "value": 23.644000000000002}, {"type": "ndcg_at_5", "value": 25.466}, {"type": "precision_at_1", "value": 19.993}, {"type": "precision_at_10", "value": 5.093}, {"type": "precision_at_100", "value": 0.932}, {"type": "precision_at_1000", "value": 0.13699999999999998}, {"type": "precision_at_3", "value": 11.149000000000001}, {"type": "precision_at_5", "value": 8.149000000000001}, {"type": "recall_at_1", "value": 16.482}, {"type": "recall_at_10", "value": 37.141999999999996}, {"type": "recall_at_100", "value": 62.696}, {"type": "recall_at_1000", "value": 84.333}, {"type": "recall_at_3", "value": 26.031}, {"type": "recall_at_5", "value": 30.660999999999998}, {"type": "map_at_1", "value": 24.887999999999998}, {"type": "map_at_10", "value": 34.101}, {"type": "map_at_100", "value": 35.27}, {"type": "map_at_1000", "value": 35.370000000000005}, {"type": "map_at_3", "value": 31.283}, {"type": "map_at_5", "value": 32.72}, {"type": "mrr_at_1", "value": 29.011}, {"type": "mrr_at_10", "value": 38.004}, {"type": "mrr_at_100", "value": 38.879000000000005}, {"type": "mrr_at_1000", "value": 38.938}, {"type": "mrr_at_3", "value": 35.571999999999996}, {"type": "mrr_at_5", "value": 36.789}, {"type": "ndcg_at_1", "value": 29.011}, {"type": "ndcg_at_10", "value": 39.586}, {"type": "ndcg_at_100", "value": 44.939}, {"type": "ndcg_at_1000", "value": 47.236}, {"type": "ndcg_at_3", "value": 34.4}, {"type": "ndcg_at_5", "value": 36.519}, {"type": "precision_at_1", "value": 29.011}, {"type": "precision_at_10", "value": 6.763}, {"type": "precision_at_100", "value": 1.059}, {"type": "precision_at_1000", "value": 0.13699999999999998}, {"type": "precision_at_3", "value": 15.609}, {"type": "precision_at_5", "value": 10.896}, {"type": "recall_at_1", "value": 24.887999999999998}, {"type": "recall_at_10", "value": 52.42}, {"type": "recall_at_100", "value": 75.803}, {"type": "recall_at_1000", "value": 91.725}, {"type": "recall_at_3", "value": 38.080999999999996}, {"type": "recall_at_5", "value": 43.47}, {"type": "map_at_1", "value": 23.953}, {"type": "map_at_10", "value": 32.649}, {"type": "map_at_100", "value": 34.181}, {"type": "map_at_1000", "value": 34.398}, {"type": "map_at_3", "value": 29.567}, {"type": "map_at_5", "value": 31.263}, {"type": "mrr_at_1", "value": 29.051}, {"type": "mrr_at_10", "value": 37.419999999999995}, {"type": "mrr_at_100", "value": 38.396}, {"type": "mrr_at_1000", "value": 38.458}, {"type": "mrr_at_3", "value": 34.782999999999994}, {"type": "mrr_at_5", "value": 36.254999999999995}, {"type": "ndcg_at_1", "value": 29.051}, {"type": "ndcg_at_10", "value": 38.595}, {"type": 
"ndcg_at_100", "value": 44.6}, {"type": "ndcg_at_1000", "value": 47.158}, {"type": "ndcg_at_3", "value": 33.56}, {"type": "ndcg_at_5", "value": 35.870000000000005}, {"type": "precision_at_1", "value": 29.051}, {"type": "precision_at_10", "value": 7.53}, {"type": "precision_at_100", "value": 1.538}, {"type": "precision_at_1000", "value": 0.24}, {"type": "precision_at_3", "value": 15.744}, {"type": "precision_at_5", "value": 11.542}, {"type": "recall_at_1", "value": 23.953}, {"type": "recall_at_10", "value": 50.08200000000001}, {"type": "recall_at_100", "value": 77.364}, {"type": "recall_at_1000", "value": 93.57799999999999}, {"type": "recall_at_3", "value": 35.432}, {"type": "recall_at_5", "value": 41.875}, {"type": "map_at_1", "value": 17.72}, {"type": "map_at_10", "value": 25.724000000000004}, {"type": "map_at_100", "value": 26.846999999999998}, {"type": "map_at_1000", "value": 26.964}, {"type": "map_at_3", "value": 22.909}, {"type": "map_at_5", "value": 24.596999999999998}, {"type": "mrr_at_1", "value": 18.854000000000003}, {"type": "mrr_at_10", "value": 27.182000000000002}, {"type": "mrr_at_100", "value": 28.182000000000002}, {"type": "mrr_at_1000", "value": 28.274}, {"type": "mrr_at_3", "value": 24.276}, {"type": "mrr_at_5", "value": 26.115}, {"type": "ndcg_at_1", "value": 18.854000000000003}, {"type": "ndcg_at_10", "value": 30.470000000000002}, {"type": "ndcg_at_100", "value": 35.854}, {"type": "ndcg_at_1000", "value": 38.701}, {"type": "ndcg_at_3", "value": 24.924}, {"type": "ndcg_at_5", "value": 27.895999999999997}, {"type": "precision_at_1", "value": 18.854000000000003}, {"type": "precision_at_10", "value": 5.009}, {"type": "precision_at_100", "value": 0.835}, {"type": "precision_at_1000", "value": 0.117}, {"type": "precision_at_3", "value": 10.721}, {"type": "precision_at_5", "value": 8.133}, {"type": "recall_at_1", "value": 17.72}, {"type": "recall_at_10", "value": 43.617}, {"type": "recall_at_100", "value": 67.941}, {"type": "recall_at_1000", "value": 88.979}, {"type": "recall_at_3", "value": 29.044999999999998}, {"type": "recall_at_5", "value": 36.044}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 13.427}, {"type": "map_at_10", "value": 22.935}, {"type": "map_at_100", "value": 24.808}, {"type": "map_at_1000", "value": 24.994}, {"type": "map_at_3", "value": 19.533}, {"type": "map_at_5", "value": 21.261}, {"type": "mrr_at_1", "value": 30.945}, {"type": "mrr_at_10", "value": 43.242000000000004}, {"type": "mrr_at_100", "value": 44.013999999999996}, {"type": "mrr_at_1000", "value": 44.048}, {"type": "mrr_at_3", "value": 40.109}, {"type": "mrr_at_5", "value": 42.059999999999995}, {"type": "ndcg_at_1", "value": 30.945}, {"type": "ndcg_at_10", "value": 31.828}, {"type": "ndcg_at_100", "value": 38.801}, {"type": "ndcg_at_1000", "value": 42.126999999999995}, {"type": "ndcg_at_3", "value": 26.922}, {"type": "ndcg_at_5", "value": 28.483999999999998}, {"type": "precision_at_1", "value": 30.945}, {"type": "precision_at_10", "value": 9.844}, {"type": "precision_at_100", "value": 1.7309999999999999}, {"type": "precision_at_1000", "value": 0.23500000000000001}, {"type": "precision_at_3", "value": 20.477999999999998}, {"type": "precision_at_5", "value": 15.27}, {"type": "recall_at_1", "value": 13.427}, {"type": "recall_at_10", "value": 37.141000000000005}, {"type": "recall_at_100", "value": 61.007}, {"type": "recall_at_1000", "value": 
79.742}, {"type": "recall_at_3", "value": 24.431}, {"type": "recall_at_5", "value": 29.725}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 9.122}, {"type": "map_at_10", "value": 18.799}, {"type": "map_at_100", "value": 25.724999999999998}, {"type": "map_at_1000", "value": 27.205000000000002}, {"type": "map_at_3", "value": 14.194999999999999}, {"type": "map_at_5", "value": 16.225}, {"type": "mrr_at_1", "value": 68.0}, {"type": "mrr_at_10", "value": 76.035}, {"type": "mrr_at_100", "value": 76.292}, {"type": "mrr_at_1000", "value": 76.297}, {"type": "mrr_at_3", "value": 74.458}, {"type": "mrr_at_5", "value": 75.558}, {"type": "ndcg_at_1", "value": 56.00000000000001}, {"type": "ndcg_at_10", "value": 39.761}, {"type": "ndcg_at_100", "value": 43.736999999999995}, {"type": "ndcg_at_1000", "value": 51.146}, {"type": "ndcg_at_3", "value": 45.921}, {"type": "ndcg_at_5", "value": 42.756}, {"type": "precision_at_1", "value": 68.0}, {"type": "precision_at_10", "value": 30.275000000000002}, {"type": "precision_at_100", "value": 9.343}, {"type": "precision_at_1000", "value": 1.8270000000000002}, {"type": "precision_at_3", "value": 49.167}, {"type": "precision_at_5", "value": 40.699999999999996}, {"type": "recall_at_1", "value": 9.122}, {"type": "recall_at_10", "value": 23.669999999999998}, {"type": "recall_at_100", "value": 48.719}, {"type": "recall_at_1000", "value": 72.033}, {"type": "recall_at_3", "value": 15.498999999999999}, {"type": "recall_at_5", "value": 18.657}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 55.885000000000005}, {"type": "f1", "value": 50.70726446938571}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 75.709}, {"type": "map_at_10", "value": 83.345}, {"type": "map_at_100", "value": 83.557}, {"type": "map_at_1000", "value": 83.572}, {"type": "map_at_3", "value": 82.425}, {"type": "map_at_5", "value": 83.013}, {"type": "mrr_at_1", "value": 81.593}, {"type": "mrr_at_10", "value": 88.331}, {"type": "mrr_at_100", "value": 88.408}, {"type": "mrr_at_1000", "value": 88.41}, {"type": "mrr_at_3", "value": 87.714}, {"type": "mrr_at_5", "value": 88.122}, {"type": "ndcg_at_1", "value": 81.593}, {"type": "ndcg_at_10", "value": 86.925}, {"type": "ndcg_at_100", "value": 87.67}, {"type": "ndcg_at_1000", "value": 87.924}, {"type": "ndcg_at_3", "value": 85.5}, {"type": "ndcg_at_5", "value": 86.283}, {"type": "precision_at_1", "value": 81.593}, {"type": "precision_at_10", "value": 10.264}, {"type": "precision_at_100", "value": 1.084}, {"type": "precision_at_1000", "value": 0.11199999999999999}, {"type": "precision_at_3", "value": 32.388}, {"type": "precision_at_5", "value": 19.991}, {"type": "recall_at_1", "value": 75.709}, {"type": "recall_at_10", "value": 93.107}, {"type": "recall_at_100", "value": 96.024}, {"type": "recall_at_1000", "value": 97.603}, {"type": "recall_at_3", "value": 89.08500000000001}, {"type": "recall_at_5", "value": 91.15299999999999}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": 
[{"type": "map_at_1", "value": 19.121}, {"type": "map_at_10", "value": 31.78}, {"type": "map_at_100", "value": 33.497}, {"type": "map_at_1000", "value": 33.696}, {"type": "map_at_3", "value": 27.893}, {"type": "map_at_5", "value": 30.087000000000003}, {"type": "mrr_at_1", "value": 38.272}, {"type": "mrr_at_10", "value": 47.176}, {"type": "mrr_at_100", "value": 48.002}, {"type": "mrr_at_1000", "value": 48.044}, {"type": "mrr_at_3", "value": 45.086999999999996}, {"type": "mrr_at_5", "value": 46.337}, {"type": "ndcg_at_1", "value": 38.272}, {"type": "ndcg_at_10", "value": 39.145}, {"type": "ndcg_at_100", "value": 45.696999999999996}, {"type": "ndcg_at_1000", "value": 49.0}, {"type": "ndcg_at_3", "value": 36.148}, {"type": "ndcg_at_5", "value": 37.023}, {"type": "precision_at_1", "value": 38.272}, {"type": "precision_at_10", "value": 11.065}, {"type": "precision_at_100", "value": 1.7840000000000003}, {"type": "precision_at_1000", "value": 0.23600000000000002}, {"type": "precision_at_3", "value": 24.587999999999997}, {"type": "precision_at_5", "value": 18.056}, {"type": "recall_at_1", "value": 19.121}, {"type": "recall_at_10", "value": 44.857}, {"type": "recall_at_100", "value": 69.774}, {"type": "recall_at_1000", "value": 89.645}, {"type": "recall_at_3", "value": 32.588}, {"type": "recall_at_5", "value": 37.939}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 36.428}, {"type": "map_at_10", "value": 56.891999999999996}, {"type": "map_at_100", "value": 57.82899999999999}, {"type": "map_at_1000", "value": 57.896}, {"type": "map_at_3", "value": 53.762}, {"type": "map_at_5", "value": 55.718}, {"type": "mrr_at_1", "value": 72.856}, {"type": "mrr_at_10", "value": 79.245}, {"type": "mrr_at_100", "value": 79.515}, {"type": "mrr_at_1000", "value": 79.525}, {"type": "mrr_at_3", "value": 78.143}, {"type": "mrr_at_5", "value": 78.822}, {"type": "ndcg_at_1", "value": 72.856}, {"type": "ndcg_at_10", "value": 65.204}, {"type": "ndcg_at_100", "value": 68.552}, {"type": "ndcg_at_1000", "value": 69.902}, {"type": "ndcg_at_3", "value": 60.632}, {"type": "ndcg_at_5", "value": 63.161}, {"type": "precision_at_1", "value": 72.856}, {"type": "precision_at_10", "value": 13.65}, {"type": "precision_at_100", "value": 1.6260000000000001}, {"type": "precision_at_1000", "value": 0.181}, {"type": "precision_at_3", "value": 38.753}, {"type": "precision_at_5", "value": 25.251}, {"type": "recall_at_1", "value": 36.428}, {"type": "recall_at_10", "value": 68.25099999999999}, {"type": "recall_at_100", "value": 81.317}, {"type": "recall_at_1000", "value": 90.27}, {"type": "recall_at_3", "value": 58.13}, {"type": "recall_at_5", "value": 63.126000000000005}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 89.4868}, {"type": "ap", "value": 84.88319192880247}, {"type": "f1", "value": 89.46144458052846}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 21.282999999999998}, {"type": "map_at_10", "value": 33.045}, {"type": "map_at_100", "value": 34.238}, {"type": "map_at_1000", "value": 34.29}, {"type": "map_at_3", "value": 29.305999999999997}, {"type": "map_at_5", 
"value": 31.391000000000002}, {"type": "mrr_at_1", "value": 21.92}, {"type": "mrr_at_10", "value": 33.649}, {"type": "mrr_at_100", "value": 34.791}, {"type": "mrr_at_1000", "value": 34.837}, {"type": "mrr_at_3", "value": 30.0}, {"type": "mrr_at_5", "value": 32.039}, {"type": "ndcg_at_1", "value": 21.92}, {"type": "ndcg_at_10", "value": 39.729}, {"type": "ndcg_at_100", "value": 45.484}, {"type": "ndcg_at_1000", "value": 46.817}, {"type": "ndcg_at_3", "value": 32.084}, {"type": "ndcg_at_5", "value": 35.789}, {"type": "precision_at_1", "value": 21.92}, {"type": "precision_at_10", "value": 6.297}, {"type": "precision_at_100", "value": 0.918}, {"type": "precision_at_1000", "value": 0.10300000000000001}, {"type": "precision_at_3", "value": 13.639000000000001}, {"type": "precision_at_5", "value": 10.054}, {"type": "recall_at_1", "value": 21.282999999999998}, {"type": "recall_at_10", "value": 60.343999999999994}, {"type": "recall_at_100", "value": 86.981}, {"type": "recall_at_1000", "value": 97.205}, {"type": "recall_at_3", "value": 39.452999999999996}, {"type": "recall_at_5", "value": 48.333}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 95.47879616963064}, {"type": "f1", "value": 95.21800589958251}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 79.09256725946192}, {"type": "f1", "value": 60.554043889452515}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 75.53463349024882}, {"type": "f1", "value": 73.14418495756476}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 79.22663080026899}, {"type": "f1", "value": 79.331456217501}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 34.50316010430136}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 32.15612040042282}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 32.36227552557184}, {"type": "mrr", "value": 33.57901344209811}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 5.6610000000000005}, {"type": "map_at_10", "value": 12.992}, {"type": 
"map_at_100", "value": 16.756999999999998}, {"type": "map_at_1000", "value": 18.25}, {"type": "map_at_3", "value": 9.471}, {"type": "map_at_5", "value": 11.116}, {"type": "mrr_at_1", "value": 43.653}, {"type": "mrr_at_10", "value": 53.388999999999996}, {"type": "mrr_at_100", "value": 53.982}, {"type": "mrr_at_1000", "value": 54.033}, {"type": "mrr_at_3", "value": 51.858000000000004}, {"type": "mrr_at_5", "value": 53.019000000000005}, {"type": "ndcg_at_1", "value": 41.641}, {"type": "ndcg_at_10", "value": 34.691}, {"type": "ndcg_at_100", "value": 32.305}, {"type": "ndcg_at_1000", "value": 41.132999999999996}, {"type": "ndcg_at_3", "value": 40.614}, {"type": "ndcg_at_5", "value": 38.456}, {"type": "precision_at_1", "value": 43.344}, {"type": "precision_at_10", "value": 25.881999999999998}, {"type": "precision_at_100", "value": 8.483}, {"type": "precision_at_1000", "value": 2.131}, {"type": "precision_at_3", "value": 38.803}, {"type": "precision_at_5", "value": 33.87}, {"type": "recall_at_1", "value": 5.6610000000000005}, {"type": "recall_at_10", "value": 16.826}, {"type": "recall_at_100", "value": 32.939}, {"type": "recall_at_1000", "value": 65.161}, {"type": "recall_at_3", "value": 10.756}, {"type": "recall_at_5", "value": 13.331000000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 26.692}, {"type": "map_at_10", "value": 41.065000000000005}, {"type": "map_at_100", "value": 42.235}, {"type": "map_at_1000", "value": 42.27}, {"type": "map_at_3", "value": 36.635}, {"type": "map_at_5", "value": 39.219}, {"type": "mrr_at_1", "value": 30.214000000000002}, {"type": "mrr_at_10", "value": 43.443}, {"type": "mrr_at_100", "value": 44.326}, {"type": "mrr_at_1000", "value": 44.352000000000004}, {"type": "mrr_at_3", "value": 39.623999999999995}, {"type": "mrr_at_5", "value": 41.898}, {"type": "ndcg_at_1", "value": 30.214000000000002}, {"type": "ndcg_at_10", "value": 48.692}, {"type": "ndcg_at_100", "value": 53.671}, {"type": "ndcg_at_1000", "value": 54.522000000000006}, {"type": "ndcg_at_3", "value": 40.245}, {"type": "ndcg_at_5", "value": 44.580999999999996}, {"type": "precision_at_1", "value": 30.214000000000002}, {"type": "precision_at_10", "value": 8.3}, {"type": "precision_at_100", "value": 1.1079999999999999}, {"type": "precision_at_1000", "value": 0.11900000000000001}, {"type": "precision_at_3", "value": 18.521}, {"type": "precision_at_5", "value": 13.627}, {"type": "recall_at_1", "value": 26.692}, {"type": "recall_at_10", "value": 69.699}, {"type": "recall_at_100", "value": 91.425}, {"type": "recall_at_1000", "value": 97.78099999999999}, {"type": "recall_at_3", "value": 47.711}, {"type": "recall_at_5", "value": 57.643}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 70.962}, {"type": "map_at_10", "value": 84.772}, {"type": "map_at_100", "value": 85.402}, {"type": "map_at_1000", "value": 85.418}, {"type": "map_at_3", "value": 81.89}, {"type": "map_at_5", "value": 83.685}, {"type": "mrr_at_1", "value": 81.67}, {"type": "mrr_at_10", "value": 87.681}, {"type": "mrr_at_100", "value": 87.792}, {"type": "mrr_at_1000", "value": 87.79299999999999}, {"type": "mrr_at_3", "value": 86.803}, {"type": "mrr_at_5", "value": 87.392}, {"type": "ndcg_at_1", "value": 81.69}, {"type": "ndcg_at_10", "value": 88.429}, {"type": 
"ndcg_at_100", "value": 89.66}, {"type": "ndcg_at_1000", "value": 89.762}, {"type": "ndcg_at_3", "value": 85.75}, {"type": "ndcg_at_5", "value": 87.20700000000001}, {"type": "precision_at_1", "value": 81.69}, {"type": "precision_at_10", "value": 13.395000000000001}, {"type": "precision_at_100", "value": 1.528}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 37.507000000000005}, {"type": "precision_at_5", "value": 24.614}, {"type": "recall_at_1", "value": 70.962}, {"type": "recall_at_10", "value": 95.339}, {"type": "recall_at_100", "value": 99.543}, {"type": "recall_at_1000", "value": 99.984}, {"type": "recall_at_3", "value": 87.54899999999999}, {"type": "recall_at_5", "value": 91.726}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 55.506631779239555}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 60.63731341848479}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.852}, {"type": "map_at_10", "value": 13.175}, {"type": "map_at_100", "value": 15.623999999999999}, {"type": "map_at_1000", "value": 16.002}, {"type": "map_at_3", "value": 9.103}, {"type": "map_at_5", "value": 11.068999999999999}, {"type": "mrr_at_1", "value": 23.9}, {"type": "mrr_at_10", "value": 35.847}, {"type": "mrr_at_100", "value": 36.968}, {"type": "mrr_at_1000", "value": 37.018}, {"type": "mrr_at_3", "value": 32.300000000000004}, {"type": "mrr_at_5", "value": 34.14}, {"type": "ndcg_at_1", "value": 23.9}, {"type": "ndcg_at_10", "value": 21.889}, {"type": "ndcg_at_100", "value": 30.903000000000002}, {"type": "ndcg_at_1000", "value": 36.992000000000004}, {"type": "ndcg_at_3", "value": 20.274}, {"type": "ndcg_at_5", "value": 17.773}, {"type": "precision_at_1", "value": 23.9}, {"type": "precision_at_10", "value": 11.61}, {"type": "precision_at_100", "value": 2.4539999999999997}, {"type": "precision_at_1000", "value": 0.391}, {"type": "precision_at_3", "value": 19.133}, {"type": "precision_at_5", "value": 15.740000000000002}, {"type": "recall_at_1", "value": 4.852}, {"type": "recall_at_10", "value": 23.507}, {"type": "recall_at_100", "value": 49.775000000000006}, {"type": "recall_at_1000", "value": 79.308}, {"type": "recall_at_3", "value": 11.637}, {"type": "recall_at_5", "value": 15.947}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_pearson", "value": 86.03345827446948}, {"type": "cos_sim_spearman", "value": 80.53174518259549}, {"type": "euclidean_pearson", "value": 83.44538971660883}, {"type": "euclidean_spearman", "value": 80.57344324098692}, {"type": "manhattan_pearson", "value": 83.36528808195459}, {"type": "manhattan_spearman", "value": 80.48931287157902}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": 
"cos_sim_pearson", "value": 85.21363088257881}, {"type": "cos_sim_spearman", "value": 75.56589127055523}, {"type": "euclidean_pearson", "value": 82.32868324521908}, {"type": "euclidean_spearman", "value": 75.31928550664554}, {"type": "manhattan_pearson", "value": 82.31332875713211}, {"type": "manhattan_spearman", "value": 75.35376322099196}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.09085593258487}, {"type": "cos_sim_spearman", "value": 86.26355088415221}, {"type": "euclidean_pearson", "value": 85.49646115361156}, {"type": "euclidean_spearman", "value": 86.20652472228703}, {"type": "manhattan_pearson", "value": 85.44084081123815}, {"type": "manhattan_spearman", "value": 86.1162623448951}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_pearson", "value": 84.68250248349368}, {"type": "cos_sim_spearman", "value": 82.29883673695083}, {"type": "euclidean_pearson", "value": 84.17633035446019}, {"type": "euclidean_spearman", "value": 82.19990511264791}, {"type": "manhattan_pearson", "value": 84.17408410692279}, {"type": "manhattan_spearman", "value": 82.249873895981}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_pearson", "value": 87.31878760045024}, {"type": "cos_sim_spearman", "value": 88.7364409031183}, {"type": "euclidean_pearson", "value": 88.230537618603}, {"type": "euclidean_spearman", "value": 88.76484309646318}, {"type": "manhattan_pearson", "value": 88.17689071136469}, {"type": "manhattan_spearman", "value": 88.72809249037928}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_pearson", "value": 83.41078559110638}, {"type": "cos_sim_spearman", "value": 85.27439135411049}, {"type": "euclidean_pearson", "value": 84.5333571592088}, {"type": "euclidean_spearman", "value": 85.25645460575957}, {"type": "manhattan_pearson", "value": 84.38428921610226}, {"type": "manhattan_spearman", "value": 85.07796040798796}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_pearson", "value": 88.82374132382576}, {"type": "cos_sim_spearman", "value": 89.02101343562433}, {"type": "euclidean_pearson", "value": 89.50729765458932}, {"type": "euclidean_spearman", "value": 89.04184772869253}, {"type": "manhattan_pearson", "value": 89.51737904059856}, {"type": "manhattan_spearman", "value": 89.12925950440676}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_pearson", "value": 67.56051823873482}, {"type": "cos_sim_spearman", "value": 68.50988748185463}, {"type": "euclidean_pearson", "value": 69.16524346147456}, {"type": "euclidean_spearman", "value": 68.61859952449579}, {"type": 
"manhattan_pearson", "value": 69.10618915706995}, {"type": "manhattan_spearman", "value": 68.36401769459522}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_pearson", "value": 85.4159693872625}, {"type": "cos_sim_spearman", "value": 87.07819121764247}, {"type": "euclidean_pearson", "value": 87.03013260863153}, {"type": "euclidean_spearman", "value": 87.06547293631309}, {"type": "manhattan_pearson", "value": 86.8129744446062}, {"type": "manhattan_spearman", "value": 86.88494096335627}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 86.47758088996575}, {"type": "mrr", "value": 96.17891458577733}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 57.538999999999994}, {"type": "map_at_10", "value": 66.562}, {"type": "map_at_100", "value": 67.254}, {"type": "map_at_1000", "value": 67.284}, {"type": "map_at_3", "value": 63.722}, {"type": "map_at_5", "value": 65.422}, {"type": "mrr_at_1", "value": 60.0}, {"type": "mrr_at_10", "value": 67.354}, {"type": "mrr_at_100", "value": 67.908}, {"type": "mrr_at_1000", "value": 67.93299999999999}, {"type": "mrr_at_3", "value": 65.056}, {"type": "mrr_at_5", "value": 66.43900000000001}, {"type": "ndcg_at_1", "value": 60.0}, {"type": "ndcg_at_10", "value": 70.858}, {"type": "ndcg_at_100", "value": 73.67099999999999}, {"type": "ndcg_at_1000", "value": 74.26700000000001}, {"type": "ndcg_at_3", "value": 65.911}, {"type": "ndcg_at_5", "value": 68.42200000000001}, {"type": "precision_at_1", "value": 60.0}, {"type": "precision_at_10", "value": 9.4}, {"type": "precision_at_100", "value": 1.083}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 25.444}, {"type": "precision_at_5", "value": 17.0}, {"type": "recall_at_1", "value": 57.538999999999994}, {"type": "recall_at_10", "value": 83.233}, {"type": "recall_at_100", "value": 95.667}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 69.883}, {"type": "recall_at_5", "value": 76.19399999999999}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.82574257425742}, {"type": "cos_sim_ap", "value": 95.78722833053911}, {"type": "cos_sim_f1", "value": 90.94650205761316}, {"type": "cos_sim_precision", "value": 93.64406779661016}, {"type": "cos_sim_recall", "value": 88.4}, {"type": "dot_accuracy", "value": 99.83366336633664}, {"type": "dot_ap", "value": 95.89733601612964}, {"type": "dot_f1", "value": 91.41981613891727}, {"type": "dot_precision", "value": 93.42379958246346}, {"type": "dot_recall", "value": 89.5}, {"type": "euclidean_accuracy", "value": 99.82574257425742}, {"type": "euclidean_ap", "value": 95.75227035138846}, {"type": "euclidean_f1", "value": 90.96509240246407}, {"type": "euclidean_precision", "value": 93.45991561181435}, {"type": "euclidean_recall", "value": 88.6}, {"type": 
"manhattan_accuracy", "value": 99.82574257425742}, {"type": "manhattan_ap", "value": 95.76278266220176}, {"type": "manhattan_f1", "value": 91.08409321175279}, {"type": "manhattan_precision", "value": 92.29979466119097}, {"type": "manhattan_recall", "value": 89.9}, {"type": "max_accuracy", "value": 99.83366336633664}, {"type": "max_ap", "value": 95.89733601612964}, {"type": "max_f1", "value": 91.41981613891727}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 61.905425988638605}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 36.159589881679736}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 53.0605499476397}, {"type": "mrr", "value": 53.91594516594517}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.202718009067}, {"type": "cos_sim_spearman", "value": 31.136199912366987}, {"type": "dot_pearson", "value": 30.66329011927951}, {"type": "dot_spearman", "value": 30.107664909625107}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.209}, {"type": "map_at_10", "value": 1.712}, {"type": "map_at_100", "value": 9.464}, {"type": "map_at_1000", "value": 23.437}, {"type": "map_at_3", "value": 0.609}, {"type": "map_at_5", "value": 0.9440000000000001}, {"type": "mrr_at_1", "value": 78.0}, {"type": "mrr_at_10", "value": 86.833}, {"type": "mrr_at_100", "value": 86.833}, {"type": "mrr_at_1000", "value": 86.833}, {"type": "mrr_at_3", "value": 85.333}, {"type": "mrr_at_5", "value": 86.833}, {"type": "ndcg_at_1", "value": 74.0}, {"type": "ndcg_at_10", "value": 69.14}, {"type": "ndcg_at_100", "value": 53.047999999999995}, {"type": "ndcg_at_1000", "value": 48.577}, {"type": "ndcg_at_3", "value": 75.592}, {"type": "ndcg_at_5", "value": 72.509}, {"type": "precision_at_1", "value": 78.0}, {"type": "precision_at_10", "value": 73.0}, {"type": "precision_at_100", "value": 54.44}, {"type": "precision_at_1000", "value": 21.326}, {"type": "precision_at_3", "value": 80.667}, {"type": "precision_at_5", "value": 77.2}, {"type": "recall_at_1", "value": 0.209}, {"type": "recall_at_10", "value": 1.932}, {"type": "recall_at_100", "value": 13.211999999999998}, {"type": "recall_at_1000", "value": 45.774}, {"type": "recall_at_3", "value": 0.644}, {"type": "recall_at_5", "value": 1.0290000000000001}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 2.609}, {"type": "map_at_10", "value": 8.334999999999999}, {"type": "map_at_100", "value": 14.604000000000001}, {"type": "map_at_1000", 
"value": 16.177}, {"type": "map_at_3", "value": 4.87}, {"type": "map_at_5", "value": 6.3149999999999995}, {"type": "mrr_at_1", "value": 32.653}, {"type": "mrr_at_10", "value": 45.047}, {"type": "mrr_at_100", "value": 45.808}, {"type": "mrr_at_1000", "value": 45.808}, {"type": "mrr_at_3", "value": 41.497}, {"type": "mrr_at_5", "value": 43.231}, {"type": "ndcg_at_1", "value": 30.612000000000002}, {"type": "ndcg_at_10", "value": 21.193}, {"type": "ndcg_at_100", "value": 34.97}, {"type": "ndcg_at_1000", "value": 46.69}, {"type": "ndcg_at_3", "value": 24.823}, {"type": "ndcg_at_5", "value": 22.872999999999998}, {"type": "precision_at_1", "value": 32.653}, {"type": "precision_at_10", "value": 17.959}, {"type": "precision_at_100", "value": 7.4079999999999995}, {"type": "precision_at_1000", "value": 1.537}, {"type": "precision_at_3", "value": 25.85}, {"type": "precision_at_5", "value": 22.448999999999998}, {"type": "recall_at_1", "value": 2.609}, {"type": "recall_at_10", "value": 13.63}, {"type": "recall_at_100", "value": 47.014}, {"type": "recall_at_1000", "value": 83.176}, {"type": "recall_at_3", "value": 5.925}, {"type": "recall_at_5", "value": 8.574}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 72.80239999999999}, {"type": "ap", "value": 15.497911013214791}, {"type": "f1", "value": 56.258411577947285}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 61.00452744765139}, {"type": "f1", "value": 61.42228624410908}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 50.00516915962345}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 85.62317458425225}, {"type": "cos_sim_ap", "value": 72.95115658063823}, {"type": "cos_sim_f1", "value": 66.78976523344764}, {"type": "cos_sim_precision", "value": 66.77215189873418}, {"type": "cos_sim_recall", "value": 66.80738786279683}, {"type": "dot_accuracy", "value": 85.62317458425225}, {"type": "dot_ap", "value": 73.10385271517778}, {"type": "dot_f1", "value": 66.94853829427399}, {"type": "dot_precision", "value": 61.74242424242424}, {"type": "dot_recall", "value": 73.11345646437995}, {"type": "euclidean_accuracy", "value": 85.65893783155511}, {"type": "euclidean_ap", "value": 72.87428208473992}, {"type": "euclidean_f1", "value": 66.70919994896005}, {"type": "euclidean_precision", "value": 64.5910551025451}, {"type": "euclidean_recall", "value": 68.97097625329816}, {"type": "manhattan_accuracy", "value": 85.59933241938367}, {"type": "manhattan_ap", "value": 72.67282695064966}, {"type": "manhattan_f1", "value": 66.67537215983286}, {"type": "manhattan_precision", "value": 66.00310237849017}, {"type": "manhattan_recall", "value": 67.36147757255937}, 
{"type": "max_accuracy", "value": 85.65893783155511}, {"type": "max_ap", "value": 73.10385271517778}, {"type": "max_f1", "value": 66.94853829427399}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.69096130709822}, {"type": "cos_sim_ap", "value": 85.30326978668063}, {"type": "cos_sim_f1", "value": 77.747088683189}, {"type": "cos_sim_precision", "value": 75.4491451753115}, {"type": "cos_sim_recall", "value": 80.189405605174}, {"type": "dot_accuracy", "value": 88.43870066363954}, {"type": "dot_ap", "value": 84.62999949222983}, {"type": "dot_f1", "value": 77.3074661963551}, {"type": "dot_precision", "value": 73.93871239808828}, {"type": "dot_recall", "value": 80.99784416384355}, {"type": "euclidean_accuracy", "value": 88.70066363953894}, {"type": "euclidean_ap", "value": 85.34184508966621}, {"type": "euclidean_f1", "value": 77.76871756856931}, {"type": "euclidean_precision", "value": 74.97855917667239}, {"type": "euclidean_recall", "value": 80.77456113335386}, {"type": "manhattan_accuracy", "value": 88.68319944114566}, {"type": "manhattan_ap", "value": 85.3026464242333}, {"type": "manhattan_f1", "value": 77.66561049296294}, {"type": "manhattan_precision", "value": 74.4665818849795}, {"type": "manhattan_recall", "value": 81.15183246073299}, {"type": "max_accuracy", "value": 88.70066363953894}, {"type": "max_ap", "value": 85.34184508966621}, {"type": "max_f1", "value": 77.76871756856931}]}]}]} | jncraton/GIST-small-Embedding-v0-ct2-int8 | null | [
"sentence-transformers",
"feature-extraction",
"mteb",
"sentence-similarity",
"en",
"arxiv:2402.16829",
"arxiv:2212.09741",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:03:02+00:00 | [
"2402.16829",
"2212.09741"
] | [
"en"
] | TAGS
#sentence-transformers #feature-extraction #mteb #sentence-similarity #en #arxiv-2402.16829 #arxiv-2212.09741 #license-mit #model-index #endpoints_compatible #region-us
| <h1 align="center">GIST small Embedding v0</h1>
*GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning*
The model is fine-tuned on top of the BAAI/bge-small-en-v1.5 using the MEDI dataset augmented with mined triplets from the MTEB Classification training dataset (excluding data from the Amazon Polarity Classification task).
The model does not require any instruction for generating embeddings. This means that queries for retrieval tasks can be directly encoded without crafting instructions.
Technical paper: GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning
# Data
The dataset used is a compilation of the MEDI and MTEB Classification training datasets. Third-party datasets may be subject to additional terms and conditions under their associated licenses. A HuggingFace Dataset version of the compiled dataset, and the specific revision used to train the model, are available:
- Dataset: avsolatorio/medi-data-mteb_avs_triplets
- Revision: 238a0499b6e6b690cc64ea56fde8461daa8341bb
The dataset contains a 'task_type' key, which can be used to select only the mteb classification tasks (prefixed with 'mteb_').
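For example, the classification subset could be pulled out along these lines (a minimal sketch; the "train" split name is an assumption, not stated in this card):

```python
from datasets import load_dataset

# Load the compiled MEDI + MTEB-classification triplet dataset at the
# exact revision reported for training.
ds = load_dataset(
    "avsolatorio/medi-data-mteb_avs_triplets",
    revision="238a0499b6e6b690cc64ea56fde8461daa8341bb",
)

# Keep only rows mined from MTEB classification tasks.
mteb_rows = ds["train"].filter(lambda ex: ex["task_type"].startswith("mteb_"))
print(mteb_rows)
```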
The MEDI Dataset is published in the following paper: One Embedder, Any Task: Instruction-Finetuned Text Embeddings.
The MTEB Benchmark results of the GIST embedding model, compared with the base model, suggest that the fine-tuning dataset has perturbed the model considerably, resulting in significant improvements on certain tasks while degrading performance on others.
The retrieval performance for the TRECCOVID task is of note. The fine-tuning dataset does not contain significant knowledge about COVID-19, which could have caused the observed performance degradation. We found some evidence, detailed in the paper, that thematic coverage of the fine-tuning data can affect downstream performance.
# Usage
The model can be easily loaded using the Sentence Transformers library.
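A minimal sketch (the model id is assumed to be the original `avsolatorio/GIST-small-Embedding-v0` checkpoint this card describes; adjust it for the quantized mirror you are using):

```python
from sentence_transformers import SentenceTransformer, util

# No instruction prefix is required; queries and passages are encoded directly.
model = SentenceTransformer("avsolatorio/GIST-small-Embedding-v0")

texts = [
    "What is the GIST embedding model fine-tuned on?",
    "The model is fine-tuned on MEDI augmented with MTEB classification triplets.",
]
embeddings = model.encode(texts, normalize_embeddings=True)

# Cosine similarity between the query and the passage.
print(util.cos_sim(embeddings[0], embeddings[1]))
```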
# Training Parameters
Below are the training parameters used to fine-tune the model:
# Evaluation
The model was evaluated using the MTEB Evaluation suite.
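A rough sketch of running a single task with the `mteb` package (the task choice here is illustrative; the numbers reported above come from the full suite):

```python
from sentence_transformers import SentenceTransformer
from mteb import MTEB

model = SentenceTransformer("avsolatorio/GIST-small-Embedding-v0")

# Run one benchmark task and write results to disk.
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model, output_folder="results/gist-small")
```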
Please cite our work if you use GISTEmbed or the datasets we published in your projects or research.
# Acknowledgements
This work is supported by the "KCP IV - Exploring Data Use in the Development Economics Literature using Large Language Models (AI and LLMs)" project funded by the Knowledge for Change Program (KCP) of the World Bank - RA-P503405-RESE-TF0C3444.
The findings, interpretations, and conclusions expressed in this material are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent. | [
"# Data\n\nThe dataset used is a compilation of the MEDI and MTEB Classification training datasets. Third-party datasets may be subject to additional terms and conditions under their associated licenses. A HuggingFace Dataset version of the compiled dataset, and the specific revision used to train the model, is available:\n\n- Dataset: avsolatorio/medi-data-mteb_avs_triplets\n- Revision: 238a0499b6e6b690cc64ea56fde8461daa8341bb\n\nThe dataset contains a 'task_type' key, which can be used to select only the mteb classification tasks (prefixed with 'mteb_').\n\nThe MEDI Dataset is published in the following paper: One Embedder, Any Task: Instruction-Finetuned Text Embeddings.\n\nThe MTEB Benchmark results of the GIST embedding model, compared with the base model, suggest that the fine-tuning dataset has perturbed the model considerably, which resulted in significant improvements in certain tasks while adversely degrading performance in some.\n\nThe retrieval performance for the TRECCOVID task is of note. The fine-tuning dataset does not contain significant knowledge about COVID-19, which could have caused the observed performance degradation. We found some evidence, detailed in the paper, that thematic coverage of the fine-tuning data can affect downstream performance.",
"# Usage\n\nThe model can be easily loaded using the Sentence Transformers library.",
"# Training Parameters\n\nBelow are the training parameters used to fine-tune the model:",
"# Evaluation\n\nThe model was evaluated using the MTEB Evaluation suite.\n\n\nPlease cite our work if you use GISTEmbed or the datasets we published in your projects or research.",
"# Acknowledgements\n\nThis work is supported by the \"KCP IV - Exploring Data Use in the Development Economics Literature using Large Language Models (AI and LLMs)\" project funded by the Knowledge for Change Program (KCP) of the World Bank - RA-P503405-RESE-TF0C3444.\n\nThe findings, interpretations, and conclusions expressed in this material are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent."
] | [
"TAGS\n#sentence-transformers #feature-extraction #mteb #sentence-similarity #en #arxiv-2402.16829 #arxiv-2212.09741 #license-mit #model-index #endpoints_compatible #region-us \n",
"# Data\n\nThe dataset used is a compilation of the MEDI and MTEB Classification training datasets. Third-party datasets may be subject to additional terms and conditions under their associated licenses. A HuggingFace Dataset version of the compiled dataset, and the specific revision used to train the model, is available:\n\n- Dataset: avsolatorio/medi-data-mteb_avs_triplets\n- Revision: 238a0499b6e6b690cc64ea56fde8461daa8341bb\n\nThe dataset contains a 'task_type' key, which can be used to select only the mteb classification tasks (prefixed with 'mteb_').\n\nThe MEDI Dataset is published in the following paper: One Embedder, Any Task: Instruction-Finetuned Text Embeddings.\n\nThe MTEB Benchmark results of the GIST embedding model, compared with the base model, suggest that the fine-tuning dataset has perturbed the model considerably, which resulted in significant improvements in certain tasks while adversely degrading performance in some.\n\nThe retrieval performance for the TRECCOVID task is of note. The fine-tuning dataset does not contain significant knowledge about COVID-19, which could have caused the observed performance degradation. We found some evidence, detailed in the paper, that thematic coverage of the fine-tuning data can affect downstream performance.",
"# Usage\n\nThe model can be easily loaded using the Sentence Transformers library.",
"# Training Parameters\n\nBelow are the training parameters used to fine-tune the model:",
"# Evaluation\n\nThe model was evaluated using the MTEB Evaluation suite.\n\n\nPlease cite our work if you use GISTEmbed or the datasets we published in your projects or research.",
"# Acknowledgements\n\nThis work is supported by the \"KCP IV - Exploring Data Use in the Development Economics Literature using Large Language Models (AI and LLMs)\" project funded by the Knowledge for Change Program (KCP) of the World Bank - RA-P503405-RESE-TF0C3444.\n\nThe findings, interpretations, and conclusions expressed in this material are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
rho-math-7b-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/rho-math-7b-v0.1/
Original model description:
---
license: mit
tags:
- nlp
- math
language:
- en
pipeline_tag: text-generation
---
<h1 align="center">
Rho-1: Not All Tokens Are What You Need
</h1>
<p align="center">
<a href="https://arxiv.org/abs/2404.07965"><b>[📜 Arxiv]</b></a> •
<a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> •
<a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> •
<a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/acc_vs_tokens_1b_7b.png?raw=true" width="1000">
<br>
<em>Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.</em>
</p>
## 🔥 News
- [2024/04/12] 🔥🔥🔥 Rho-Math-v0.1 models released at 🤗 HuggingFace!
  - [Rho-Math-1B](https://huggingface.co/microsoft/rho-math-1b-v0.1) and [Rho-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively, matching DeepSeekMath with only 3% of the pretraining tokens.
  - [Rho-Math-1B-Interpreter](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) is the first 1B LLM that achieves over 40% accuracy on MATH.
  - [Rho-Math-7B-Interpreter](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
- [2024/04/11] Rho-1 paper and repo released.
## 💡 Introduction
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
### Selective Language Modeling (SLM)
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/example.png?raw=true" width="1000">
<br>
<em>Figure 2:
<b>Upper:</b> Even an extensively filtered pretraining corpus contains token-level noise.
<b>Left:</b> Previous Causal Language Modeling (CLM) trains on all tokens.
<b>Right:</b> Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.</em>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/pipeline.png?raw=true" width="1000">
<br>
<em>Figure 3: <b>The pipeline of Selective Language Modeling.</b>
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.</em>
</p>
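The selection step can be sketched in a few lines of PyTorch. This is our simplified reading of the pipeline above, not the authors' code; the `keep_ratio` hyperparameter and batching details are assumptions:

```python
import torch
import torch.nn.functional as F

def slm_loss(logits, ref_logits, labels, keep_ratio=0.6):
    """Selective Language Modeling: backpropagate only through high excess-loss tokens.

    logits / ref_logits: (batch, seq, vocab) outputs of the trained and reference models.
    labels: (batch, seq) next-token targets. keep_ratio is an assumed hyperparameter.
    """
    # Step 2: per-token cross-entropy under both models.
    ce = F.cross_entropy(logits.transpose(1, 2), labels, reduction="none")
    ref_ce = F.cross_entropy(ref_logits.transpose(1, 2), labels, reduction="none")
    excess = (ce - ref_ce).detach()  # how much worse the model is than the reference

    # Step 3: keep only the top-k tokens by excess loss.
    k = max(1, int(keep_ratio * excess.numel()))
    threshold = excess.flatten().topk(k).values.min()
    mask = (excess >= threshold).float()
    return (ce * mask).sum() / mask.sum().clamp(min=1.0)
```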
<!-- results: -->
### Evaluation Results
Base models (Few-shot CoT):
| **Model** | **Size** | **Data** | **Uniq. Token** | **Train Token** | **GSM8K** | **MATH** | **MMLU STEM** | **SAT** |
|:-----------------:|:--------:|:--------:|:---------------:|:---------------:|:---------:|:--------:|:-------------:|:--------:|
| 1-2B Base Models | | | | | | | | |
| Qwen1.5 | 1.8B | - | - | - | 36.1 | 6.8 | 31.3 | 40.6 |
| Gemma | 2.0B | - | - | - | 18.8 | 11.4 | **34.4** | 50.0 |
| DeepSeekMath | 1.3B | - | 120B | 150B | 23.8 | 13.6 | 33.1 | **56.3** |
| [Rho-Math-1B-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1) | 1.1B | OWM | 14B | 30B | **36.2** | **15.6** | 23.3 | 28.1 |
| >= 7B Base Models | | | | | | | | |
| Mistral | 7B | | - | - | 41.2 | 11.6 | 49.5 | 59.4 |
| Minerva | 540B | - | 39B | 26B | 58.8 | 33.6 | **63.9** | - |
| LLemma | 34B | PPile | 55B | 50B | 54.2 | 23.0 | 54.7 | 68.8 |
| InternLM2-Math | 20B | - | 31B | 125B | 65.4 | 30.0 | 53.1 | 71.9 |
| DeepSeekMath | 7B | - | 120B | 500B | 64.1 | **34.2** | 56.4 | **84.4** |
| [Rho-Math-7B-v0.1](https://huggingface.co/microsoft/rho-math-7b-v0.1) | 7B | OWM | 14B | 10.5B | **66.9** | 31.0 | 54.6 | **84.4** |
[Tool-integrated reasoning](https://github.com/microsoft/ToRA) (Code Interpreter):
| **Model** | **Size** | **SFT Data** | **GSM8k** | **MATH** | **SVAMP** | **ASDiv** | **MAWPS** | **TabMWP** | **GSM-Hard** | **AVG** |
|------------------------------|----------|--------------|-----------|----------|-----------|-----------|-----------|------------|--------------|----------|
| gpt4-early (pal) | - | - | 94.2 | 51.8 | 94.8 | 92.6 | 97.7 | 95.9 | 77.6 | 86.4 |
| gpt-4-turbo-2024-04-09 (cot) | - | - | - | 73.4 | - | - | - | - | - | - |
| Open-Source Small Models | | | | | | | | | | |
| MAmmoTH | 70B | MI-260k | 76.9 | 41.8 | 82.4 | - | - | - | - | - |
| ToRA | 7B | ToRA-69k | 68.8 | 40.1 | 68.2 | 73.9 | 88.8 | 42.4 | 54.6 | 62.4 |
| ToRA | 70B | ToRA-69k | 84.3 | 49.7 | **82.7** | 86.8 | 93.8 | 74.0 | **67.2** | **76.9** |
| DeepSeekMath | 7B | ToRA-69k | 79.8 | **52.0** | 80.1 | **87.1** | 93.8 | **85.8** | 63.1 | 77.4 |
| [Rho-Math-1B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) | 1B | ToRA-69k | 59.4 | 40.6 | 60.7 | 74.2 | 88.6 | 26.7 | 48.1 | 56.9 |
| [Rho-Math-7B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) | 7B | ToRA-69k | 81.3 | **51.8** | 80.8 | 85.5 | **94.5** | 70.1 | 63.1 | 75.3 |
## 🚀 Quick Start
### Evaluation
```sh
git clone [email protected]:microsoft/rho.git
cd rho/math-evaluation-harness
```
Base model few-shot evaluation:
```sh
bash scripts/run_eval.sh cot microsoft/rho-math-7b-v0.1
```
SFT model (code-interpreter) evaluation:
```sh
bash scripts/run_eval.sh tora microsoft/rho-math-7b-interpreter-v0.1
```
Our reproduced outputs are provided in `rho-1/outputs.zip`.
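For plain inference on this 8-bit checkpoint, loading with bitsandbytes might look like the following (prompt and generation settings are illustrative, not from the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "RichardErkhov/microsoft_-_rho-math-7b-v0.1-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

prompt = "Question: What is 12 * 17?\nAnswer:"  # illustrative math prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```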
## ☕️ Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{lin2024rho1,
title={Rho-1: Not All Tokens Are What You Need},
author={Zhenghao Lin and Zhibin Gou and Yeyun Gong and Xiao Liu and Yelong Shen and Ruochen Xu and Chen Lin and Yujiu Yang and Jian Jiao and Nan Duan and Weizhu Chen},
year={2024},
eprint={2404.07965},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | RichardErkhov/microsoft_-_rho-math-7b-v0.1-8bits | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:2404.07965",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-20T18:03:15+00:00 | [
"2404.07965"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-2404.07965 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
rho-math-7b-v0.1 - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
license: mit
tags:
* nlp
* math
language:
* en
pipeline\_tag: text-generation
---
Rho-1: Not All Tokens Are What You Need
========================================

*Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.*
News
----
* [2024/04/12] Rho-Math-v0.1 models released at HuggingFace!
+ Rho-Math-1B and Rho-Math-7B achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively, matching DeepSeekMath with only 3% of the pretraining tokens.
+ Rho-Math-1B-Interpreter is the first 1B LLM that achieves over 40% accuracy on MATH.
+ Rho-Math-7B-Interpreter achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
* [2024/04/11] Rho-1 paper and repo released.
Introduction
------------
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
### Selective Language Modeling (SLM)

*Figure 2:
**Upper:** Even an extensively filtered pretraining corpus contains token-level noise.
**Left:** Previous Causal Language Modeling (CLM) trains on all tokens.
**Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*

*Figure 3: **The pipeline of Selective Language Modeling.**
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*
### Evaluation Results
Base models (Few-shot CoT):
Tool-integrated reasoning (Code Interpreter):
Quick Start
-----------
### Evaluation
Base model few-shot evaluation:
SFT model (code-interpreter) evaluation:
Our reproduced outputs are provided in 'rho-1/URL'.
Citation
----------
If you find this repository helpful, please consider citing our paper:
| [
"### Selective Lanugage Modeling (SLM)\n\n\n\n\n \n\n*Figure 2:\n **Upper:** Even an extensively filtered pretraining corpus contains token-level noise.\n **Left:** Previous Causal Language Modeling (CLM) trains on all tokens.\n **Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*\n\n\n\n\n\n \n\n*Figure 3: **The pipeline of Selective Language Modeling.**\n SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.\n It involves three steps:\n (Step 1) Initially, train a reference model on high-quality data.\n (Step 2) Then, score each token's loss in a corpus using the reference model.\n (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*",
"### Evaluation Results\n\n\nBase models (Few-shot CoT):\n\n\n\nTool-integrated reasoning (Code Interpreter):\n\n\n\nQuick Start\n-----------",
"### Evaluation\n\n\nBase model few-shot evaluation:\n\n\nSFT model (code-interpreter) evaluation:\n\n\nOur reproduced outputs are provided in 'rho-1/URL'.\n\n\n️ Citation\n----------\n\n\nIf you find this repository helpful, please consider citing our paper:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-2404.07965 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Selective Lanugage Modeling (SLM)\n\n\n\n\n \n\n*Figure 2:\n **Upper:** Even an extensively filtered pretraining corpus contains token-level noise.\n **Left:** Previous Causal Language Modeling (CLM) trains on all tokens.\n **Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*\n\n\n\n\n\n \n\n*Figure 3: **The pipeline of Selective Language Modeling.**\n SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.\n It involves three steps:\n (Step 1) Initially, train a reference model on high-quality data.\n (Step 2) Then, score each token's loss in a corpus using the reference model.\n (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*",
"### Evaluation Results\n\n\nBase models (Few-shot CoT):\n\n\n\nTool-integrated reasoning (Code Interpreter):\n\n\n\nQuick Start\n-----------",
"### Evaluation\n\n\nBase model few-shot evaluation:\n\n\nSFT model (code-interpreter) evaluation:\n\n\nOur reproduced outputs are provided in 'rho-1/URL'.\n\n\n️ Citation\n----------\n\n\nIf you find this repository helpful, please consider citing our paper:"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | joneikholm/llama3_first | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:04:09+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |

# Llama3-Pirate-Talk-8b-v0.1
Llama3-Pirate-Talk-8b-v0.1, developed by phanerozoic, is the first pirate-themed fine-tune of the Llama3 8b model. This version is crafted to generate authentic pirate-themed content, seamlessly blending historical accuracy with modern fictional representations of pirate speech.
## Developed by:
- phanerozoic
## License:
- cc-by-nc-4.0
## Finetuned from:
- Llama-3-8B
## Version Control:
- Initial release of Llama3-Pirate-Talk-8b-v0.1, marking a new frontier in thematic language model applications.
## Model Overview:
Llama3-Pirate-Talk-8b-v0.1 excels at generating engaging and character-rich pirate dialogue, ideal for entertainment, gaming, and narrative projects. It is designed to perform well in both automated customer interaction platforms and interactive entertainment settings.
## Performance:
The model shows a robust capacity to maintain pirate dialect consistently, adding thematic depth to interactions. While it thrives in generating thematic content, it is less suited for tasks requiring precise technical responses.
## Direct Use:
Optimized for generating content in themed environments, particularly where engagement and character speech are valued over factual accuracy.
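A hedged generation sketch for such themed use (prompt and sampling settings are illustrative):

```python
from transformers import pipeline

pirate = pipeline(
    "text-generation",
    model="phanerozoic/Llama3-Pirate-Talk-8b-v0.1",
    device_map="auto",
)
print(pirate("Who are you?\n", max_new_tokens=60, do_sample=True, temperature=0.8)[0]["generated_text"])
```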
## Training Data:
The model was fine-tuned on an abstracted version of "Moby Dick," restructured to enhance pirate vernacular and themes, ensuring rich and varied linguistic inputs.
## Custom Stopping Strings:
To enhance output quality and thematic consistency, custom stopping strings include:
- "}}\n\n\n{{"
- "\n\n\n"
- "\n\nYou:"
- "You:"
- "\n\n"
- "\nYou:"
- "\n"
## Training Hyperparameters and Fine-Tuning Details:
- micro_batch_size: 1
- batch_size: 0
- epochs: 1
- learning_rate: "2e-5"
- lr_scheduler_type: "linear"
- lora_rank: 8
- lora_alpha: 16
- lora_dropout: 0.05
- cutoff_len: 256
- warmup_steps: 8
- optimizer: "adamw_torch"
- grad_accumulation: 1
- train_runtime: 1697.081 seconds
- total_flos: 1.3663655883177984e+16
- train_loss: 1.7511341453808817
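For readers reproducing a comparable run, these values map onto peft/transformers roughly as follows (the output path and trainer wiring are assumptions, not the author's script):

```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")

training_args = TrainingArguments(
    output_dir="pirate-talk-out",   # assumed path
    per_device_train_batch_size=1,  # micro_batch_size
    gradient_accumulation_steps=1,  # grad_accumulation
    num_train_epochs=1,
    learning_rate=2e-5,
    lr_scheduler_type="linear",
    warmup_steps=8,
    optim="adamw_torch",
)
```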
## Testing and Evaluation:
During the testing phase, we conducted a series of evaluations to compare Llama3-Pirate-Talk-8b-v0.1 against the base Llama3 model. These tests involved complex navigational and general knowledge questions designed to assess the model's ability to maintain its thematic integrity while responding accurately to technically demanding prompts. The model demonstrated a strong thematic presence with consistent use of pirate vernacular. However, it showed limitations in handling high-precision technical content, which is an expected trade-off given its thematic specialization. These insights have been instrumental in identifying areas for further model refinement.
## Limitations:
Llama3-Pirate-Talk-8b-v0.1 is specifically tailored for pirate-themed content. It may not perform well in non-themed or general language tasks, where neutrality and technical precision are required.
## Compute Infrastructure:
The model was efficiently trained on an RTX 6000 Ada GPU in about half an hour, demonstrating the effective use of resources in creating specialized language models.
## Results:
The model consistently delivers pirate-themed content with a high degree of linguistic coherence and thematic accuracy. However, the depth of responses can vary, suggesting further fine-tuning could enhance its capability to handle complex queries.
## Acknowledgments:
Special thanks to the developers of the base Llama-3 model at Meta, whose open-source architecture was instrumental in developing this thematic model.
## Summary:
Llama3-Pirate-Talk-8b-v0.1 stands out for its unique ability to enrich thematic applications with authentic and engaging pirate dialogue. While it excels in themed content creation, its specialized nature makes it less adaptable to general-purpose tasks, highlighting its role as a niche model in the realm of AI-driven text generation.
| {"language": ["en"], "license": "cc-by-nc-4.0", "widget": [{"text": "Who are you?\n", "example_title": "Pirate Introduction"}]} | phanerozoic/Llama3-Pirate-Talk-8b-v0.1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:06:21+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #conversational #en #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
!Llama3-Pirate-Talk-8b-v0.1.URL
# Llama3-Pirate-Talk-8b-v0.1
Llama3-Pirate-Talk-8b-v0.1, developed by phanerozoic, is the first pirate-themed fine-tune of the Llama3 8b model. This version is crafted to generate authentic pirate-themed content, seamlessly blending historical accuracy with modern fictional representations of pirate speech.
## Developed by:
- phanerozoic
## License:
- cc-by-nc-4.0
## Finetuned from:
- Llama-3-8B
## Version Control:
- Initial release of Llama3-Pirate-Talk-8b-v0.1, marking a new frontier in thematic language model applications.
## Model Overview:
Llama3-Pirate-Talk-8b-v0.1 excels at generating engaging and character-rich pirate dialogue, ideal for entertainment, gaming, and narrative projects. It is designed to perform well in both automated customer interaction platforms and interactive entertainment settings.
## Performance:
The model shows a robust capacity to maintain pirate dialect consistently, adding thematic depth to interactions. While it thrives in generating thematic content, it is less suited for tasks requiring precise technical responses.
## Direct Use:
Optimized for generating content in themed environments, particularly where engagement and character speech are valued over factual accuracy.
## Training Data:
The model was fine-tuned on an abstracted version of "Moby Dick," restructured to enhance pirate vernacular and themes, ensuring rich and varied linguistic inputs.
## Custom Stopping Strings:
To enhance output quality and thematic consistency, custom stopping strings include:
- "}}\n\n\n{{"
- "\n\n\n"
- "\n\nYou:"
- "You:"
- "\n\n"
- "\nYou:"
- "\n"
## Training Hyperparameters and Fine-Tuning Details:
- micro_batch_size: 1
- batch_size: 0
- epochs: 1
- learning_rate: "2e-5"
- lr_scheduler_type: "linear"
- lora_rank: 8
- lora_alpha: 16
- lora_dropout: 0.05
- cutoff_len: 256
- warmup_steps: 8
- optimizer: "adamw_torch"
- grad_accumulation: 1
- train_runtime: 1697.081 seconds
- total_flos: 1.3663655883177984e+16
- train_loss: 1.7511341453808817
## Testing and Evaluation:
During the testing phase, we conducted a series of evaluations to compare Llama3-Pirate-Talk-8b-v0.1 against the base Llama3 model. These tests involved complex navigational and general knowledge questions designed to assess the model's ability to maintain its thematic integrity while responding accurately to technically demanding prompts. The model demonstrated a strong thematic presence with consistent use of pirate vernacular. However, it showed limitations in handling high-precision technical content, which is an expected trade-off given its thematic specialization. These insights have been instrumental in identifying areas for further model refinement.
## Limitations:
Llama3-Pirate-Talk-8b-v0.1 is specifically tailored for pirate-themed content. It may not perform well in non-themed or general language tasks, where neutrality and technical precision are required.
## Compute Infrastructure:
The model was efficiently trained on an RTX 6000 Ada GPU in about half an hour, demonstrating the effective use of resources in creating specialized language models.
## Results:
The model consistently delivers pirate-themed content with a high degree of linguistic coherence and thematic accuracy. However, the depth of responses can vary, suggesting further fine-tuning could enhance its capability to handle complex queries.
## Acknowledgments:
Special thanks to the developers of the base Llama-3 model at Meta, whose open-source architecture was instrumental in developing this thematic model.
## Summary:
Llama3-Pirate-Talk-8b-v0.1 stands out for its unique ability to enrich thematic applications with authentic and engaging pirate dialogue. While it excels in themed content creation, its specialized nature makes it less adaptable to general-purpose tasks, highlighting its role as a niche model in the realm of AI-driven text generation.
| [
"# Llama3-Pirate-Talk-8b-v0.1\n\nLlama3-Pirate-Talk-8b-v0.1, developed by phanerozoic, is the first pirate-themed fine-tune of the Llama3 8b model. This version is crafted to generate authentic pirate-themed content, seamlessly blending historical accuracy with modern fictional representations of pirate speech.",
"## Developed by:\n- phanerozoic",
"## License:\n- cc-by-nc-4.0",
"## Finetuned from:\n- Llama-3-8B",
"## Version Control:\n- Initial release of Llama3-Pirate-Talk-8b-v0.1, marking a new frontier in thematic language model applications.",
"## Model Overview:\nLlama3-Pirate-Talk-8b-v0.1 excels at generating engaging and character-rich pirate dialogue, ideal for entertainment, gaming, and narrative projects. It is designed to perform well in both automated customer interaction platforms and interactive entertainment settings.",
"## Performance:\nThe model shows a robust capacity to maintain pirate dialect consistently, adding thematic depth to interactions. While it thrives in generating thematic content, it is less suited for tasks requiring precise technical responses.",
"## Direct Use:\nOptimized for generating content in themed environments, particularly where engagement and character speech are valued over factual accuracy.",
"## Training Data:\nThe model was fine-tuned on an abstracted version of \"Moby Dick,\" restructured to enhance pirate vernacular and themes, ensuring rich and varied linguistic inputs.",
"## Custom Stopping Strings:\nTo enhance output quality and thematic consistency, custom stopping strings include:\n- \"}}\\n\\n\\n{{\"\n- \"\\n\\n\\n\"\n- \"\\n\\nYou:\"\n- \"You:\"\n- \"\\n\\n\"\n- \"\\nYou:\"\n- \"\\n\"",
"## Training Hyperparameters and Fine-Tuning Details:\n- micro_batch_size: 1\n- batch_size: 0\n- epochs: 1\n- learning_rate: \"2e-5\"\n- lr_scheduler_type: \"linear\"\n- lora_rank: 8\n- lora_alpha: 16\n- lora_dropout: 0.05\n- cutoff_len: 256\n- warmup_steps: 8\n- optimizer: \"adamw_torch\"\n- grad_accumulation: 1\n- train_runtime: 1697.081 seconds\n- total_flos: 1.3663655883177984e+16\n- train_loss: 1.7511341453808817",
"## Testing and Evaluation:\nDuring the testing phase, we conducted a series of evaluations to compare Llama3-Pirate-Talk-8b-v0.1 against the base Llama3 model. These tests involved complex navigational and general knowledge questions designed to assess the model's ability to maintain its thematic integrity while responding accurately to technically demanding prompts. The model demonstrated a strong thematic presence with consistent use of pirate vernacular. However, it showed limitations in handling high-precision technical content, which is an expected trade-off given its thematic specialization. These insights have been instrumental in identifying areas for further model refinement.",
"## Limitations:\nLlama3-Pirate-Talk-8b-v0.1 is specifically tailored for pirate-themed content. It may not perform well in non-themed or general language tasks, where neutrality and technical precision are required.",
"## Compute Infrastructure:\nThe model was efficiently trained on an RTX 6000 Ada GPU in about half an hour, demonstrating the effective use of resources in creating specialized language models.",
"## Results:\nThe model consistently delivers pirate-themed content with a high degree of linguistic coherence and thematic accuracy. However, the depth of responses can vary, suggesting further fine-tuning could enhance its capability to handle complex queries.",
"## Acknowledgments:\nSpecial thanks to the developers of the base Llama-3 model at Meta, whose open-source architecture was instrumental in developing this thematic model.",
"## Summary:\nLlama3-Pirate-Talk-8b-v0.1 stands out for its unique ability to enrich thematic applications with authentic and engaging pirate dialogue. While it excels in themed content creation, its specialized nature makes it less adaptable to general-purpose tasks, highlighting its role as a niche model in the realm of AI-driven text generation."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #en #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Llama3-Pirate-Talk-8b-v0.1\n\nLlama3-Pirate-Talk-8b-v0.1, developed by phanerozoic, is the first pirate-themed fine-tune of the Llama3 8b model. This version is crafted to generate authentic pirate-themed content, seamlessly blending historical accuracy with modern fictional representations of pirate speech.",
"## Developed by:\n- phanerozoic",
"## License:\n- cc-by-nc-4.0",
"## Finetuned from:\n- Llama-3-8B",
"## Version Control:\n- Initial release of Llama3-Pirate-Talk-8b-v0.1, marking a new frontier in thematic language model applications.",
"## Model Overview:\nLlama3-Pirate-Talk-8b-v0.1 excels at generating engaging and character-rich pirate dialogue, ideal for entertainment, gaming, and narrative projects. It is designed to perform well in both automated customer interaction platforms and interactive entertainment settings.",
"## Performance:\nThe model shows a robust capacity to maintain pirate dialect consistently, adding thematic depth to interactions. While it thrives in generating thematic content, it is less suited for tasks requiring precise technical responses.",
"## Direct Use:\nOptimized for generating content in themed environments, particularly where engagement and character speech are valued over factual accuracy.",
"## Training Data:\nThe model was fine-tuned on an abstracted version of \"Moby Dick,\" restructured to enhance pirate vernacular and themes, ensuring rich and varied linguistic inputs.",
"## Custom Stopping Strings:\nTo enhance output quality and thematic consistency, custom stopping strings include:\n- \"}}\\n\\n\\n{{\"\n- \"\\n\\n\\n\"\n- \"\\n\\nYou:\"\n- \"You:\"\n- \"\\n\\n\"\n- \"\\nYou:\"\n- \"\\n\"",
"## Training Hyperparameters and Fine-Tuning Details:\n- micro_batch_size: 1\n- batch_size: 0\n- epochs: 1\n- learning_rate: \"2e-5\"\n- lr_scheduler_type: \"linear\"\n- lora_rank: 8\n- lora_alpha: 16\n- lora_dropout: 0.05\n- cutoff_len: 256\n- warmup_steps: 8\n- optimizer: \"adamw_torch\"\n- grad_accumulation: 1\n- train_runtime: 1697.081 seconds\n- total_flos: 1.3663655883177984e+16\n- train_loss: 1.7511341453808817",
"## Testing and Evaluation:\nDuring the testing phase, we conducted a series of evaluations to compare Llama3-Pirate-Talk-8b-v0.1 against the base Llama3 model. These tests involved complex navigational and general knowledge questions designed to assess the model's ability to maintain its thematic integrity while responding accurately to technically demanding prompts. The model demonstrated a strong thematic presence with consistent use of pirate vernacular. However, it showed limitations in handling high-precision technical content, which is an expected trade-off given its thematic specialization. These insights have been instrumental in identifying areas for further model refinement.",
"## Limitations:\nLlama3-Pirate-Talk-8b-v0.1 is specifically tailored for pirate-themed content. It may not perform well in non-themed or general language tasks, where neutrality and technical precision are required.",
"## Compute Infrastructure:\nThe model was efficiently trained on an RTX 6000 Ada GPU in about half an hour, demonstrating the effective use of resources in creating specialized language models.",
"## Results:\nThe model consistently delivers pirate-themed content with a high degree of linguistic coherence and thematic accuracy. However, the depth of responses can vary, suggesting further fine-tuning could enhance its capability to handle complex queries.",
"## Acknowledgments:\nSpecial thanks to the developers of the base Llama-3 model at Meta, whose open-source architecture was instrumental in developing this thematic model.",
"## Summary:\nLlama3-Pirate-Talk-8b-v0.1 stands out for its unique ability to enrich thematic applications with authentic and engaging pirate dialogue. While it excels in themed content creation, its specialized nature makes it less adaptable to general-purpose tasks, highlighting its role as a niche model in the realm of AI-driven text generation."
] |
text-generation | transformers | Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus).
For this quantization, we used 1 codebook of 16 bits.
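Loading this checkpoint typically requires the `aqlm` package alongside a recent transformers release; a hedged sketch (prompt is illustrative):

```python
# pip install aqlm[gpu] transformers  (assumed environment)
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ISTA-DASLab/c4ai-command-r-plus-AQLM-2Bit-1x16"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```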
Results:
| Model | Quantization | MMLU (5-shot) | Model size, Gb |
|------|------|-------|------|
| CohereForAI/c4ai-command-r-plus | None | 0.7425 | 208 |
| | 1x16 | 0.6795 | 31.9 | | {"library_name": "transformers", "tags": ["cohere", "conversational", "10languages", "text-generation-inference", "Inference Endpoints"]} | ISTA-DASLab/c4ai-command-r-plus-AQLM-2Bit-1x16 | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"10languages",
"text-generation-inference",
"Inference Endpoints",
"arxiv:2401.06118",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:08:41+00:00 | [
"2401.06118"
] | [] | TAGS
#transformers #safetensors #cohere #text-generation #conversational #10languages #text-generation-inference #Inference Endpoints #arxiv-2401.06118 #autotrain_compatible #endpoints_compatible #region-us
| Official AQLM quantization of CohereForAI/c4ai-command-r-plus.
For this quantization, we used 1 codebook of 16 bits.
Results:
| [] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #conversational #10languages #text-generation-inference #Inference Endpoints #arxiv-2401.06118 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/dumbo-stable7 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:09:44+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {} | bashbuster/cheXcare | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2024-04-20T18:10:45+00:00 | [
"1910.09700"
] | [] | TAGS
#arxiv-1910.09700 #region-us
|
# Model Card for Model ID
This modelcard aims to be a base template for new models. It has been generated using this raw template.
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#arxiv-1910.09700 #region-us \n",
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
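Since no snippet is provided, here is a minimal sketch assuming the repository loads as a standard XLM-RoBERTa feature-extraction checkpoint, as the tags suggest; the mean-pooling step is one common choice, not something documented for this model.

```python
from transformers import AutoModel, AutoTokenizer
import torch

# Assumption: the checkpoint follows the standard XLM-RoBERTa layout.
tokenizer = AutoTokenizer.from_pretrained("Ehsanl/m3-tx-22-ep2")
model = AutoModel.from_pretrained("Ehsanl/m3-tx-22-ep2")

inputs = tokenizer("A short example sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token states into one sentence embedding (an arbitrary choice).
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # (1, hidden_size)
```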
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ehsanl/m3-tx-22-ep2 | null | [
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:11:21+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #xlm-roberta #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
rho-math-7b-v0.1 - GGUF
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/rho-math-7b-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [rho-math-7b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q2_K.gguf) | Q2_K | 2.53GB |
| [rho-math-7b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [rho-math-7b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [rho-math-7b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [rho-math-7b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [rho-math-7b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q3_K.gguf) | Q3_K | 3.28GB |
| [rho-math-7b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [rho-math-7b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [rho-math-7b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [rho-math-7b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [rho-math-7b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [rho-math-7b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [rho-math-7b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q4_K.gguf) | Q4_K | 4.07GB |
| [rho-math-7b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [rho-math-7b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [rho-math-7b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [rho-math-7b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [rho-math-7b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q5_K.gguf) | Q5_K | 4.78GB |
| [rho-math-7b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [rho-math-7b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [rho-math-7b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q6_K.gguf) | Q6_K | 5.53GB |
Original model description:
---
license: mit
tags:
- nlp
- math
language:
- en
pipeline_tag: text-generation
---
<h1 align="center">
Rho-1: Not All Tokens Are What You Need
</h1>
<p align="center">
<a href="https://arxiv.org/abs/2404.07965"><b>[📜 Arxiv]</b></a> •
<a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> •
<a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> •
<a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/acc_vs_tokens_1b_7b.png?raw=true" width="1000">
<br>
<em>Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.</em>
</p>
## 🔥 News
- [2024/04/12] 🔥🔥🔥 Rho-Math-v0.1 models released at 🤗 HuggingFace!
  - [Rho-Math-1B](https://huggingface.co/microsoft/rho-math-1b-v0.1) and [Rho-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively — matching DeepSeekMath with only 3% of the pretraining tokens.
- [Rho-Math-1B-Interpreter](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) is the first 1B LLM that achieves over 40% accuracy on MATH.
  - [Rho-Math-7B-Interpreter](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
- [2024/04/11] Rho-1 paper and repo released.
## 💡 Introduction
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that align with the desired distribution.
### Selective Language Modeling (SLM)
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/example.png?raw=true" width="1000">
<br>
<em>Figure 2:
<b>Upper:</b> Even an extensively filtered pretraining corpus contains token-level noise.
<b>Left:</b> Previous Causal Language Modeling (CLM) trains on all tokens.
<b>Right:</b> Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.</em>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/pipeline.png?raw=true" width="1000">
<br>
<em>Figure 3: <b>The pipeline of Selective Language Modeling.</b>
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.</em>
</p>
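To make the token-selection step concrete, the following is a schematic sketch of an SLM-style loss, not the authors' implementation: per-token losses are compared against a reference model and only the highest excess-loss tokens contribute to training. The tensor shapes and the `keep_ratio` value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def slm_loss(logits, ref_logits, labels, keep_ratio=0.6):
    """Schematic SLM-style loss (illustrative, not the paper's code).

    logits, ref_logits: (batch, seq_len, vocab) from the trained and the
    reference model (the latter computed under no_grad);
    labels: (batch, seq_len) next-token ids.
    """
    # Per-token cross-entropy under each model.
    per_tok = F.cross_entropy(logits.transpose(1, 2), labels, reduction="none")
    ref_per_tok = F.cross_entropy(ref_logits.transpose(1, 2), labels, reduction="none")

    # Excess loss: how much worse the trained model is than the reference.
    excess = per_tok - ref_per_tok

    # Keep only the top keep_ratio fraction of tokens by excess loss.
    k = max(1, int(keep_ratio * excess.numel()))
    threshold = excess.flatten().topk(k).values.min()
    mask = (excess >= threshold).float()

    # Average the training loss over the selected tokens only.
    return (per_tok * mask).sum() / mask.sum().clamp(min=1.0)
```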
<!-- results: -->
### Evaluation Results
Base models (Few-shot CoT):
| **Model** | **Size** | **Data** | **Uniq. Token** | **Train Token** | **GSM8K** | **MATH** | **MMLU STEM** | **SAT** |
|:-----------------:|:--------:|:--------:|:---------------:|:---------------:|:---------:|:--------:|:-------------:|:--------:|
| 1-2B Base Models | | | | | | | | |
| Qwen1.5 | 1.8B | - | - | - | 36.1 | 6.8 | 31.3 | 40.6 |
| Gemma | 2.0B | - | - | - | 18.8 | 11.4 | **34.4** | 50.0 |
| DeepSeekMath | 1.3B | - | 120B | 150B | 23.8 | 13.6 | 33.1 | **56.3** |
| [Rho-Math-1B-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1) | 1.1B | OWM | 14B | 30B | **36.2** | **15.6** | 23.3 | 28.1 |
| >= 7B Base Models | | | | | | | | |
| Mistral | 7B | | - | - | 41.2 | 11.6 | 49.5 | 59.4 |
| Minerva | 540B | - | 39B | 26B | 58.8 | 33.6 | **63.9** | - |
| LLemma | 34B | PPile | 55B | 50B | 54.2 | 23.0 | 54.7 | 68.8 |
| InternLM2-Math | 20B | - | 31B | 125B | 65.4 | 30.0 | 53.1 | 71.9 |
| DeepSeekMath | 7B | - | 120B | 500B | 64.1 | **34.2** | 56.4 | **84.4** |
| [Rho-Math-7B-v0.1](https://huggingface.co/microsoft/rho-math-7b-v0.1) | 7B | OWM | 14B | 10.5B | **66.9** | 31.0 | 54.6 | **84.4** |
[Tool-integrated reasoning](https://github.com/microsoft/ToRA) (Code Interpreter):
| **Model** | **Size** | **SFT Data** | **GSM8k** | **MATH** | **SVAMP** | **ASDiv** | **MAWPS** | **TabMWP** | **GSM-Hard** | **AVG** |
|------------------------------|----------|--------------|-----------|----------|-----------|-----------|-----------|------------|--------------|----------|
| gpt4-early (pal) | - | - | 94.2 | 51.8 | 94.8 | 92.6 | 97.7 | 95.9 | 77.6 | 86.4 |
| gpt-4-turbo-2024-04-09 (cot) | - | - | - | 73.4 | - | - | - | - | - | - |
| Open-Source Small Models | | | | | | | | | | |
| MAmmoTH | 70B | MI-260k | 76.9 | 41.8 | 82.4 | - | - | - | - | - |
| ToRA | 7B | ToRA-69k | 68.8 | 40.1 | 68.2 | 73.9 | 88.8 | 42.4 | 54.6 | 62.4 |
| ToRA | 70B | ToRA-69k | 84.3 | 49.7 | **82.7** | 86.8 | 93.8 | 74.0 | **67.2** | **76.9** |
| DeepSeekMath | 7B | ToRA-69k | 79.8 | **52.0** | 80.1 | **87.1** | 93.8 | **85.8** | 63.1 | 77.4 |
| [Rho-Math-1B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) | 1B | ToRA-69k | 59.4 | 40.6 | 60.7 | 74.2 | 88.6 | 26.7 | 48.1 | 56.9 |
| [Rho-Math-7B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) | 7B | ToRA-69k | 81.3 | **51.8** | 80.8 | 85.5 | **94.5** | 70.1 | 63.1 | 75.3 |
## 🚀 Quick Start
### Evaluation
```sh
git clone [email protected]:microsoft/rho.git
cd rho-1/math-evaluation-harness
```
Base model few-shot evaluation:
```sh
bash scripts/run_eval.sh cot microsoft/rho-math-7b-v0.1
```
SFT model (code-interpreter) evaluation:
```sh
bash scripts/run_eval.sh tora microsoft/rho-math-7b-interpreter-v0.1
```
Our reproduced outputs are provided in `rho-1/outputs.zip`.
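For plain generation outside the evaluation harness, a minimal transformers sketch is shown below; the few-shot/CoT prompt format is an assumption based on how base models are typically queried, not an official template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/rho-math-7b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

# Base model: plain question/answer prompting, no chat template.
prompt = "Question: What is 15% of 240?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```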
## ☕️ Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{lin2024rho1,
title={Rho-1: Not All Tokens Are What You Need},
author={Zhenghao Lin and Zhibin Gou and Yeyun Gong and Xiao Liu and Yelong Shen and Ruochen Xu and Chen Lin and Yujiu Yang and Jian Jiao and Nan Duan and Weizhu Chen},
year={2024},
eprint={2404.07965},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf | null | [
"gguf",
"arxiv:2404.07965",
"region:us"
] | null | 2024-04-20T18:12:48+00:00 | [
"2404.07965"
] | [] | TAGS
#gguf #arxiv-2404.07965 #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
rho-math-7b-v0.1 - GGUF
* Model creator: URL
* Original model: URL
Name: rho-math-7b-v0.1.Q2\_K.gguf, Quant method: Q2\_K, Size: 2.53GB
Name: rho-math-7b-v0.1.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 2.81GB
Name: rho-math-7b-v0.1.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 2.96GB
Name: rho-math-7b-v0.1.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 2.95GB
Name: rho-math-7b-v0.1.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 3.06GB
Name: rho-math-7b-v0.1.Q3\_K.gguf, Quant method: Q3\_K, Size: 3.28GB
Name: rho-math-7b-v0.1.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 3.28GB
Name: rho-math-7b-v0.1.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 3.56GB
Name: rho-math-7b-v0.1.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 3.67GB
Name: rho-math-7b-v0.1.Q4\_0.gguf, Quant method: Q4\_0, Size: 3.83GB
Name: rho-math-7b-v0.1.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 3.87GB
Name: rho-math-7b-v0.1.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 3.86GB
Name: rho-math-7b-v0.1.Q4\_K.gguf, Quant method: Q4\_K, Size: 4.07GB
Name: rho-math-7b-v0.1.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 4.07GB
Name: rho-math-7b-v0.1.Q4\_1.gguf, Quant method: Q4\_1, Size: 4.24GB
Name: rho-math-7b-v0.1.Q5\_0.gguf, Quant method: Q5\_0, Size: 4.65GB
Name: rho-math-7b-v0.1.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 4.65GB
Name: rho-math-7b-v0.1.Q5\_K.gguf, Quant method: Q5\_K, Size: 4.78GB
Name: rho-math-7b-v0.1.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 4.78GB
Name: rho-math-7b-v0.1.Q5\_1.gguf, Quant method: Q5\_1, Size: 5.07GB
Name: rho-math-7b-v0.1.Q6\_K.gguf, Quant method: Q6\_K, Size: 5.53GB
Original model description:
---------------------------
license: mit
tags:
* nlp
* math
language:
* en
pipeline\_tag: text-generation
---
Rho-1: Not All Tokens Are What You Need
========================================

*Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.*
News
----
* [2024/04/12] Rho-Math-v0.1 models released at HuggingFace!
	+ Rho-Math-1B and Rho-Math-7B achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively — matching DeepSeekMath with only 3% of the pretraining tokens.
+ Rho-Math-1B-Interpreter is the first 1B LLM that achieves over 40% accuracy on MATH.
	+ Rho-Math-7B-Interpreter achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
* [2024/04/11] Rho-1 paper and repo released.
Introduction
------------
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that align with the desired distribution.
### Selective Language Modeling (SLM)

*Figure 2:
**Upper:** Even an extensively filtered pretraining corpus contains token-level noise.
**Left:** Previous Causal Language Modeling (CLM) trains on all tokens.
**Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*

*Figure 3: **The pipeline of Selective Language Modeling.**
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*
### Evaluation Results
Base models (Few-shot CoT):
Tool-integrated reasoning (Code Interpreter):
Quick Start
-----------
### Evaluation
Base model few-shot evaluation:
SFT model (code-interpreter) evaluation:
Our reproduced outputs are provided in 'rho-1/URL'.
Citation
----------
If you find this repository helpful, please consider citing our paper:
| [
"### Selective Lanugage Modeling (SLM)\n\n\n\n\n \n\n*Figure 2:\n **Upper:** Even an extensively filtered pretraining corpus contains token-level noise.\n **Left:** Previous Causal Language Modeling (CLM) trains on all tokens.\n **Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*\n\n\n\n\n\n \n\n*Figure 3: **The pipeline of Selective Language Modeling.**\n SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.\n It involves three steps:\n (Step 1) Initially, train a reference model on high-quality data.\n (Step 2) Then, score each token's loss in a corpus using the reference model.\n (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*",
"### Evaluation Results\n\n\nBase models (Few-shot CoT):\n\n\n\nTool-integrated reasoning (Code Interpreter):\n\n\n\nQuick Start\n-----------",
"### Evaluation\n\n\nBase model few-shot evaluation:\n\n\nSFT model (code-interpreter) evaluation:\n\n\nOur reproduced outputs are provided in 'rho-1/URL'.\n\n\n️ Citation\n----------\n\n\nIf you find this repository helpful, please consider citing our paper:"
] | [
"TAGS\n#gguf #arxiv-2404.07965 #region-us \n",
"### Selective Lanugage Modeling (SLM)\n\n\n\n\n \n\n*Figure 2:\n **Upper:** Even an extensively filtered pretraining corpus contains token-level noise.\n **Left:** Previous Causal Language Modeling (CLM) trains on all tokens.\n **Right:** Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.*\n\n\n\n\n\n \n\n*Figure 3: **The pipeline of Selective Language Modeling.**\n SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.\n It involves three steps:\n (Step 1) Initially, train a reference model on high-quality data.\n (Step 2) Then, score each token's loss in a corpus using the reference model.\n (Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.*",
"### Evaluation Results\n\n\nBase models (Few-shot CoT):\n\n\n\nTool-integrated reasoning (Code Interpreter):\n\n\n\nQuick Start\n-----------",
"### Evaluation\n\n\nBase model few-shot evaluation:\n\n\nSFT model (code-interpreter) evaluation:\n\n\nOur reproduced outputs are provided in 'rho-1/URL'.\n\n\n️ Citation\n----------\n\n\nIf you find this repository helpful, please consider citing our paper:"
] |
null | adapter-transformers |
# Adapter `BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_3` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_MICRO_helpfulness_dataset](https://huggingface.co/datasets/BigTMiami/amazon_MICRO_helpfulness_dataset/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_3", source="hf", set_active=True)
```
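
As a hypothetical follow-up, running the classification head could look like the sketch below; the label semantics (for example, which index means "helpful") are an assumption, since the head configuration is not documented here.

```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This review was really helpful to me.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # `model` from the snippet above, adapter active

# Index -> label mapping is assumed; verify it against the training setup.
print(outputs.logits.argmax(dim=-1).item())
```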
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_MICRO_helpfulness_dataset"]} | BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_3 | null | [
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_MICRO_helpfulness_dataset",
"region:us"
] | null | 2024-04-20T18:14:13+00:00 | [] | [] | TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us
|
# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_3' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
| [
"# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_3' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us \n",
"# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_3' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/NotAiLOL/Zephyr-7b-Unsloth-DPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
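
For a quick local test, one option among many GGUF runtimes is the llama-cpp-python bindings; the sketch below assumes you have already downloaded one of the files from the table below, and the prompt and token budget are arbitrary.

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table below was downloaded beforehand.
llm = Llama(model_path="Zephyr-7b-Unsloth-DPO.Q4_K_M.gguf", n_ctx=2048)

out = llm("Explain in one sentence what a GGUF file is.", max_tokens=64)
print(out["choices"][0]["text"])
```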
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Zephyr-7b-Unsloth-DPO-GGUF/resolve/main/Zephyr-7b-Unsloth-DPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "dpo"], "base_model": "NotAiLOL/Zephyr-7b-Unsloth-DPO", "quantized_by": "mradermacher"} | mradermacher/Zephyr-7b-Unsloth-DPO-GGUF | null | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"dpo",
"en",
"base_model:NotAiLOL/Zephyr-7b-Unsloth-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:14:50+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #text-generation-inference #unsloth #mistral #trl #dpo #en #base_model-NotAiLOL/Zephyr-7b-Unsloth-DPO #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #trl #dpo #en #base_model-NotAiLOL/Zephyr-7b-Unsloth-DPO #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Sanaullah06/WestStarling-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
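
As one hedged way to fetch and run a quant programmatically, the sketch below combines huggingface_hub with the llama-cpp-python bindings; the Q4_K_M choice is arbitrary, and any file from the table below works the same way.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (the choice of file is arbitrary).
path = hf_hub_download(
    repo_id="mradermacher/WestStarling-7B-slerp-GGUF",
    filename="WestStarling-7B-slerp.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=2048)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```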
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["merge", "mergekit", "lazymergekit", "senseable/WestLake-7B-v2", "Nexusflow/Starling-LM-7B-beta"], "base_model": "Sanaullah06/WestStarling-7B-slerp", "quantized_by": "mradermacher"} | mradermacher/WestStarling-7B-slerp-GGUF | null | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"senseable/WestLake-7B-v2",
"Nexusflow/Starling-LM-7B-beta",
"en",
"base_model:Sanaullah06/WestStarling-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:16:12+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #merge #mergekit #lazymergekit #senseable/WestLake-7B-v2 #Nexusflow/Starling-LM-7B-beta #en #base_model-Sanaullah06/WestStarling-7B-slerp #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #merge #mergekit #lazymergekit #senseable/WestLake-7B-v2 #Nexusflow/Starling-LM-7B-beta #en #base_model-Sanaullah06/WestStarling-7B-slerp #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
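No snippet is given; below is a minimal sketch assuming the repository loads through the standard text-generation pipeline, as the Llama tags suggest. The prompt is illustrative only.

```python
from transformers import pipeline

# Assumption: the checkpoint works with the plain text-generation pipeline.
generator = pipeline("text-generation", model="Balab2021/llama3")

out = generator("A model card is", max_new_tokens=40)
print(out[0]["generated_text"])
```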
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Balab2021/llama3 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:16:28+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | An imp full of mischief | {} | Sinexo/Farfadet-2 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:16:45+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| An imp full of mischief | [] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "unsloth/llama-3-8b"} | Fredithefish/Llama3RP-chkpt-5000 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b",
"region:us"
] | null | 2024-04-20T18:16:55+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-unsloth/llama-3-8b #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-unsloth/llama-3-8b #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
token-classification | transformers |
# bert-mapa-german
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the MAPA German dataset.
Its purpose is to discern private information within German texts.
It achieves the following results on the test set:
| Category | Precision | Recall | F1 | Number |
|---------------|------------|------------|------------|--------|
| Address | 0.5882 | 0.6667 | 0.625 | 15 |
| Age | 0.0 | 0.0 | 0.0 | 3 |
| Amount | 1.0 | 1.0 | 1.0 | 1 |
| Date | 0.9455 | 0.9455 | 0.9455 | 55 |
| Name | 0.7 | 0.9545 | 0.8077 | 22 |
| Organisation | 0.5405 | 0.6452 | 0.5882 | 31 |
| Person | 0.5385 | 0.5 | 0.5185 | 14 |
| Role | 0.0 | 0.0 | 0.0 | 1 |
| Overall | 0.7255 | 0.7817 | 0.7525 | |
- Loss: 0.0325
- Overall Accuracy: 0.9912
## Intended uses & limitations
This model is engineered for the purpose of discerning private information within German texts. Its training corpus comprises only 1744 example sentences, thereby leading to a higher frequency of errors in its predictions.
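As a quick illustration (a minimal sketch, not provided by the model authors), the checkpoint can be loaded with the `transformers` pipeline API; the example sentence is invented:

```python
# Hedged usage sketch for this token-classification checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jbroermann/bert-mapa-german",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

# Expect spans such as Name, Address or Date with confidence scores.
print(ner("Max Mustermann wohnt seit 2019 in der Hauptstraße 5 in Berlin."))
```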
## Training and evaluation data
Random split of the MAPA German dataset into 80% train, 10% validation and 10% test.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| No log | 1.0 | 218 | 0.0607 | 0.6527 | 0.7786 | 0.7101 | 0.9859 |
| No log | 2.0 | 436 | 0.0479 | 0.7355 | 0.8143 | 0.7729 | 0.9896 |
| 0.116 | 3.0 | 654 | 0.0414 | 0.7712 | 0.8429 | 0.8055 | 0.9908 |
| 0.116 | 4.0 | 872 | 0.0421 | 0.7857 | 0.8643 | 0.8231 | 0.9917 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"language": ["de"], "license": "mit", "tags": ["generated_from_trainer"], "base_model": "google-bert/bert-base-german-cased", "model-index": [{"name": "bert-mapa-german", "results": []}]} | jbroermann/bert-mapa-german | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"de",
"base_model:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-04-20T18:18:37+00:00 | [] | [
"de"
] | TAGS
#transformers #safetensors #bert #token-classification #generated_from_trainer #de #base_model-google-bert/bert-base-german-cased #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| bert-mapa-german
================
This model is a fine-tuned version of google-bert/bert-base-german-cased on the MAPA German dataset.
Its purpose is to discern private information within German texts.
It achieves the following results on the test set:
* Loss: 0.0325
* Overall Accuracy: 0.9912
Intended uses & limitations
---------------------------
This model is engineered for the purpose of discerning private information within German texts. Its training corpus comprises only 1744 example sentences, thereby leading to a higher frequency of errors in its predictions.
Training and evaluation data
----------------------------
Random split of the MAPA German dataset into 80% train, 10% validation and 10% test.
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.1.0+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #bert #token-classification #generated_from_trainer #de #base_model-google-bert/bert-base-german-cased #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MODEL_EPOCHS_C
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
- load_in_4bit: True
- load_in_8bit: False
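
As an illustrative aside (not part of the original card), the values above map onto a `transformers` `BitsAndBytesConfig`; a sketch of constructing it, with the base-model id taken from this card:

```python
# Hedged reconstruction of the quantization settings listed above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit weights
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_use_double_quant=False,       # double quantization disabled
    bnb_4bit_compute_dtype=torch.float16,  # fp16 compute dtype
)

model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-hf",
    quantization_config=bnb_config,
)
```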
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "NousResearch/Llama-2-7b-hf", "model-index": [{"name": "MODEL_EPOCHS_C", "results": []}]} | LLMLover/MODEL_EPOCHS_C | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2024-04-20T18:18:46+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-NousResearch/Llama-2-7b-hf #region-us
|
# MODEL_EPOCHS_C
This model is a fine-tuned version of NousResearch/Llama-2-7b-hf on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
- load_in_4bit: True
- load_in_8bit: False
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| [
"# MODEL_EPOCHS_C\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n- load_in_4bit: True\n- load_in_8bit: False",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 2\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.4.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-NousResearch/Llama-2-7b-hf #region-us \n",
"# MODEL_EPOCHS_C\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n- load_in_4bit: True\n- load_in_8bit: False",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 2\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.4.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
This is an EEVE Inst model created by tuning only layers 46 and 47. An Inst model can be built by using the mid-to-late layers for raw training.
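
As an illustrative sketch only (this is not the author's training code, and the base checkpoint named below is an assumption), freezing everything except layers 46 and 47 of a `transformers` causal LM could look like this:

```python
# Hedged sketch: tune only transformer blocks 46 and 47, freeze the rest.
from transformers import AutoModelForCausalLM

# Assumed base checkpoint for illustration; the card does not name one.
model = AutoModelForCausalLM.from_pretrained("yanolja/EEVE-Korean-10.8B-v1.0")

for name, param in model.named_parameters():
    # Llama-style names look like "model.layers.46.self_attn.q_proj.weight".
    param.requires_grad = any(f".layers.{i}." in name for i in (46, 47))
```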
A detailed write-up of the methodology will follow later. | {"license": "apache-2.0"} | maywell/EEVE-Korean-Instruct-10.8B-Var | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:19:21+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
This is an EEVE Inst model created by tuning only layers 46 and 47. An Inst model can be built by using the mid-to-late layers for raw training.
A detailed write-up of the methodology will follow later. | [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
question-answering | transformers |
A simple QA model. | {"license": "mit"} | jwt2706/basic-qa-66M-v0.1 | null | [
"transformers",
"safetensors",
"distilbert",
"question-answering",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:19:52+00:00 | [] | [] | TAGS
#transformers #safetensors #distilbert #question-answering #license-mit #endpoints_compatible #region-us
|
A simple QA model. | [] | [
"TAGS\n#transformers #safetensors #distilbert #question-answering #license-mit #endpoints_compatible #region-us \n"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | davidataka/bank_model | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:21:36+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2308
- Accuracy: 0.9317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2267 | 1.0 | 1563 | 0.2067 | 0.9195 |
| 0.1476 | 2.0 | 3126 | 0.2308 | 0.9317 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "my_awesome_model", "results": []}]} | JasssZ/my_awesome_model | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:23:58+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| my\_awesome\_model
==================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2308
* Accuracy: 0.9317
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/CohereForAI/c4ai-command-r-plus
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
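
For example (an illustrative snippet, not taken from the upstream README), the split files listed in the table below can be stitched back into a single GGUF:

```python
# Hedged example: join the downloaded parts back into one GGUF file.
import shutil

parts = [
    "c4ai-command-r-plus.Q6_K.gguf.part1of2",
    "c4ai-command-r-plus.Q6_K.gguf.part2of2",
]
with open("c4ai-command-r-plus.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream each part in order
```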
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q2_K.gguf) | Q2_K | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.IQ3_XS.gguf) | IQ3_XS | 43.7 | |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q3_K_S.gguf) | Q3_K_S | 46.0 | |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.IQ3_S.gguf) | IQ3_S | 46.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.IQ3_M.gguf) | IQ3_M | 47.8 | |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q3_K_M.gguf.part2of2) | Q3_K_M | 51.1 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q3_K_L.gguf.part2of2) | Q3_K_L | 55.5 | |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.IQ4_XS.gguf.part2of2) | IQ4_XS | 56.8 | |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q4_K_S.gguf.part2of2) | Q4_K_S | 59.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q4_K_M.gguf.part2of2) | Q4_K_M | 62.9 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q5_K_S.gguf.part2of2) | Q5_K_S | 71.9 | |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q5_K_M.gguf.part2of2) | Q5_K_M | 73.7 | |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q6_K.gguf.part2of2) | Q6_K | 85.3 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF/resolve/main/c4ai-command-r-plus.Q8_0.gguf.part3of3) | Q8_0 | 110.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "base_model": "CohereForAI/c4ai-command-r-plus", "quantized_by": "mradermacher"} | mradermacher/c4ai-command-r-plus-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:CohereForAI/c4ai-command-r-plus",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:24:41+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-CohereForAI/c4ai-command-r-plus #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-CohereForAI/c4ai-command-r-plus #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 50
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "NousResearch/Llama-2-7b-chat-hf", "model-index": [{"name": "results", "results": []}]} | hamzah11/results | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-20T18:25:07+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-NousResearch/Llama-2-7b-chat-hf #region-us
|
# results
This model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 50
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# results\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 50",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-NousResearch/Llama-2-7b-chat-hf #region-us \n",
"# results\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 50",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | 
6.0bpw version of: https://huggingface.co/Nitral-AI/Poppy_Porpoise-v0.2-L3-8B
# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.
# Quants Available Here: Thanks to Lewdiculus <3 https://huggingface.co/Lewdiculous/Poppy_Porpoise-v0.2-L3-8B-GGUF-IQ-Imatrix | {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Nitral-AI/Poppy_Porpoise-L3-8B", "Undi95/Llama-3-Unholy-8B"]} | Natkituwu/Poppy_Porpoise-v0.2-L3-8B-6.0bpw-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Nitral-AI/Poppy_Porpoise-L3-8B",
"base_model:Undi95/Llama-3-Unholy-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"6-bit",
"region:us"
] | null | 2024-04-20T18:28:58+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-Nitral-AI/Poppy_Porpoise-L3-8B #base_model-Undi95/Llama-3-Unholy-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us
| !image/jpeg
6.0bpw version of: URL
# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.
# Quants Available Here: Thanks to Lewdiculus <3 URL | [
"# \"Poppy Porpoise\" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.",
"# Quants Available Here: Thanks to Lewdiculus <3 URL"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-Nitral-AI/Poppy_Porpoise-L3-8B #base_model-Undi95/Llama-3-Unholy-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us \n",
"# \"Poppy Porpoise\" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.",
"# Quants Available Here: Thanks to Lewdiculus <3 URL"
] |
text-to-speech | null |
<h1 align="center"><a href="https://github.com/RVC-Boss/GPT-SoVITS">GPT-SoVITS</a> models</h1>
<p align="center">使用 <a href="https://github.com/RVC-Boss/GPT-SoVITS">GPT-SoVITS</a> 训练的模型</p>
<p align = 'center'>
<img width='150' src='https://cdn-uploads.huggingface.co/production/uploads/614bdf8c74bc496810b80f78/RL3QuK4gcv59r9fDtQNCk.webp'>
</p>
## [English][Sangonomiya Kokomi ([珊瑚宮心海])](https://genshin-impact.fandom.com/wiki/Category:Sangonomiya_Kokomi)
Trained on ~1 hr of English voiceover data.
<br>
<center>
<audio controls src="https://huggingface.co/xiaoheiqaq/GPT-Sovits-models/resolve/main/readme.assets/kokomi/english1.wav"></audio>
<p>The soft murmur of the river was calming, as the sun dipped below the horizon.</p>
<audio controls src="https://huggingface.co/xiaoheiqaq/GPT-Sovits-models/resolve/main/readme.assets/kokomi/english2.wav"></audio>
<p>But to be honest, I have to tell you something. I've liked you for a long time. </p>
<audio controls src="https://huggingface.co/xiaoheiqaq/GPT-Sovits-models/resolve/main/readme.assets/kokomi/japanese1.wav"></audio>
<p>えっと、そのアニメは本当に感動的だったよ。涙が止まらなかった。</p>
<audio controls src="https://huggingface.co/xiaoheiqaq/GPT-Sovits-models/resolve/main/readme.assets/kokomi/japanese2.wav"></audio>
<p>なんか、好きになっちゃったみたい</p>
<audio controls src="https://huggingface.co/xiaoheiqaq/GPT-Sovits-models/resolve/main/readme.assets/kokomi/chinese1.wav"></audio>
<p>今天的天气非常好,阳光明媚,非常适合出去郊游。</p>
<audio controls src="https://huggingface.co/xiaoheiqaq/GPT-Sovits-models/resolve/main/readme.assets/kokomi/chinese2.wav"></audio>
<p>我昨天去图书馆借了几本书,计划这个周末好好阅读一下。</p>
</center>
<br>
# Credits
* Voice actor: Sangonomiya Kokomi (cv. Risa Mei)
* [GPT-SoVITS](https://github.com/RVC-Boss/GPT-SoVITS)
---
| {"language": ["zh", "ja", "en"], "license": "gpl-3.0", "tags": ["GPT-SoVITS", "Sangonomiya Kokomi"], "pipeline_tag": "text-to-speech"} | xiaoheiqaq/GPT-Sovits-models | null | [
"GPT-SoVITS",
"Sangonomiya Kokomi",
"text-to-speech",
"zh",
"ja",
"en",
"license:gpl-3.0",
"region:us"
] | null | 2024-04-20T18:34:10+00:00 | [] | [
"zh",
"ja",
"en"
] | TAGS
#GPT-SoVITS #Sangonomiya Kokomi #text-to-speech #zh #ja #en #license-gpl-3.0 #region-us
|
<h1 align="center"><a href="URL models</h1>
<p align="center">使用 <a href="URL 训练的模型</p>
<p align = 'center'>
<img width='150' src='URL
</p>
## [English] [Sangonomiya Kokomi (珊瑚宮心海)](URL
Trained on ~1hr of English voiceover data.
使用约 1 小时的英语配音数据进行训练。
<br>
<center>
<audio controls src="URL
<p>The soft murmur of the river was calming, as the sun dipped below the horizon.</p>
<audio controls src="URL
<p>But to be honest, I have to tell you something. I've liked you for a long time. </p>
<audio controls src="URL
<p>えっと、そのアニメは本当に感動的だったよ。涙が止まらなかった。</p>
<audio controls src="URL
<p>なんか、好きになっちゃったみたい</p>
<audio controls src="URL
<p>今天的天气非常好,阳光明媚,非常适合出去郊游。</p>
<audio controls src="URL
<p>我昨天去图书馆借了几本书,计划这个周末好好阅读一下。</p>
</center>
<br>
# Credits
* Voice actor: Sangonomiya Kokomi (cv. Risa Mei)
* GPT-SoVITS
---
| [
"## [English][Sangonomiya Kokomi ([珊瑚宮心海])](URL \n\n\nTrained on ~1hr of english voiceover data.\n\n使用约 1 小时的英语配音数据进行训练。\n\n<br>\n<center>\n <audio controls src=\"URL\n <p>The soft murmur of the river was calming, as the sun dipped below the horizon.</p>\n <audio controls src=\"URL\n <p>But to be honest, I have to tell you something. I've liked you for a long time. </p>\n <audio controls src=\"URL\n <p>えっと、そのアニメは本当に感動的だったよ。涙が止まらなかった。</p>\n <audio controls src=\"URL\n <p>なんか、好きになっちゃったみたい</p>\n <audio controls src=\"URL\n <p>今天的天气非常好,阳光明媚,非常适合出去郊游。</p>\n <audio controls src=\"URL\n <p>我昨天去图书馆借了几本书,计划这个周末好好阅读一下。</p>\n</center>\n\n<br>",
"# Credits\n* Voice actor :Sangonomiya Kokomi (cv. Risa Mei)\n* GPT-SoVITS\n\n---"
] | [
"TAGS\n#GPT-SoVITS #Sangonomiya Kokomi #text-to-speech #zh #ja #en #license-gpl-3.0 #region-us \n",
"## [English][Sangonomiya Kokomi ([珊瑚宮心海])](URL \n\n\nTrained on ~1hr of english voiceover data.\n\n使用约 1 小时的英语配音数据进行训练。\n\n<br>\n<center>\n <audio controls src=\"URL\n <p>The soft murmur of the river was calming, as the sun dipped below the horizon.</p>\n <audio controls src=\"URL\n <p>But to be honest, I have to tell you something. I've liked you for a long time. </p>\n <audio controls src=\"URL\n <p>えっと、そのアニメは本当に感動的だったよ。涙が止まらなかった。</p>\n <audio controls src=\"URL\n <p>なんか、好きになっちゃったみたい</p>\n <audio controls src=\"URL\n <p>今天的天气非常好,阳光明媚,非常适合出去郊游。</p>\n <audio controls src=\"URL\n <p>我昨天去图书馆借了几本书,计划这个周末好好阅读一下。</p>\n</center>\n\n<br>",
"# Credits\n* Voice actor :Sangonomiya Kokomi (cv. Risa Mei)\n* GPT-SoVITS\n\n---"
] |
null | adapter-transformers |
# Adapter `BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_4` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_MICRO_helpfulness_dataset](https://huggingface.co/datasets/BigTMiami/amazon_MICRO_helpfulness_dataset/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_4", source="hf", set_active=True)
```
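With the adapter and its head active, inference works like any other `transformers` sequence-classification model; a short sketch (the integer-to-label mapping is not documented on this card):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This review was really helpful to me!", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # `model` from the snippet above, adapter active

print(outputs.logits.argmax(dim=-1).item())  # predicted class id
```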
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_MICRO_helpfulness_dataset"]} | BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_4 | null | [
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_MICRO_helpfulness_dataset",
"region:us"
] | null | 2024-04-20T18:34:25+00:00 | [] | [] | TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us
|
# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_4' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
| [
"# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_4' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us \n",
"# Adapter 'BigTMiami/micro_par_bn_v_2_help_class_adp_lr_0003_S_4' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
text-generation | transformers | # Locutusque/Llama-3-Orca-1.0-8B AWQ
- Model creator: [Locutusque](https://huggingface.co/Locutusque)
- Original model: [Llama-3-Orca-1.0-8B](https://huggingface.co/Locutusque/Llama-3-Orca-1.0-8B)

## Model Summary
I fine-tuned Llama-3 8B mainly on SlimOrca, along with other datasets, to improve performance in math, coding, and writing.
This model has great performance in writing and coding.
Conversational AI.
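This AWQ checkpoint loads through the standard `transformers` API once `autoawq` (and `accelerate`, for `device_map`) is installed; a minimal generation sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/Llama-3-Orca-1.0-8B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a short poem about orcas.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```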
| {"license": "other", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "datasets": ["Open-Orca/SlimOrca-Dedup", "jondurbin/airoboros-3.2", "microsoft/orca-math-word-problems-200k", "m-a-p/Code-Feedback", "MaziyarPanahi/WizardLM_evol_instruct_V2_196k"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/Llama-3-Orca-1.0-8B-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"conversational",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:jondurbin/airoboros-3.2",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/Code-Feedback",
"dataset:MaziyarPanahi/WizardLM_evol_instruct_V2_196k",
"license:other",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:35:13+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #dataset-Open-Orca/SlimOrca-Dedup #dataset-jondurbin/airoboros-3.2 #dataset-microsoft/orca-math-word-problems-200k #dataset-m-a-p/Code-Feedback #dataset-MaziyarPanahi/WizardLM_evol_instruct_V2_196k #license-other #text-generation-inference #region-us
| # Locutusque/Llama-3-Orca-1.0-8B AWQ
- Model creator: Locutusque
- Original model: Llama-3-Orca-1.0-8B
!image/png
## Model Summary
I fine-tuned Llama-3 8B mainly on SlimOrca, along with other datasets, to improve performance in math, coding, and writing.
This model has great performance in writing and coding.
Conversational AI.
| [
"# Locutusque/Llama-3-Orca-1.0-8B AWQ\n\n- Model creator: Locutusque\n- Original model: Llama-3-Orca-1.0-8B\n\n!image/png",
"## Model Summary\n\nI fine-tuned llama-3 8B on mainly SlimOrca, along with other datasets to improve performance in math, coding, and writing.\n\nThis model has great performance in writing and coding.\n\nConversational AI."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #dataset-Open-Orca/SlimOrca-Dedup #dataset-jondurbin/airoboros-3.2 #dataset-microsoft/orca-math-word-problems-200k #dataset-m-a-p/Code-Feedback #dataset-MaziyarPanahi/WizardLM_evol_instruct_V2_196k #license-other #text-generation-inference #region-us \n",
"# Locutusque/Llama-3-Orca-1.0-8B AWQ\n\n- Model creator: Locutusque\n- Original model: Llama-3-Orca-1.0-8B\n\n!image/png",
"## Model Summary\n\nI fine-tuned llama-3 8B on mainly SlimOrca, along with other datasets to improve performance in math, coding, and writing.\n\nThis model has great performance in writing and coding.\n\nConversational AI."
] |
text-generation | transformers |
**Fine-tuned on the OpenHermes-2.5 dataset for 1 epoch**
- Follows the Llama-3 instruction format described in https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3 (see the sketch below)
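Since the model follows the Llama-3 instruction format, prompts can be built with the tokenizer's chat template rather than assembled by hand; a minimal sketch (assuming the repo's tokenizer ships the Llama-3 template):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sumandas/llama3-openhermes-2.5")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain QLoRA in one sentence."},
]
# Renders the Llama-3 header/eot special tokens around each turn
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```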
**Training Details**
https://medium.com/@sumandas0/fine-tune-llama3-on-million-scale-dataset-in-consumer-gpu-using-qlora-deepspeed-3ae8ad75299a | {"license": "llama2", "datasets": ["teknium/OpenHermes-2.5"]} | sumandas/llama3-openhermes-2.5 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:teknium/OpenHermes-2.5",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:35:19+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #dataset-teknium/OpenHermes-2.5 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Fine-tuned on the OpenHermes-2.5 dataset for 1 epoch
- Follows the Llama-3 instruction format described in URL
Training Details
URL | [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #dataset-teknium/OpenHermes-2.5 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
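The author has not yet provided a snippet; absent that, a generic loading sketch based on this card's tags (llama, text-generation) and repo ID:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tomaszki/llama-1"  # repo ID from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello! How are you today?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```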
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tomaszki/llama-1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:35:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
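The author has not yet provided a snippet; absent that, a generic sketch for a BERT sequence-classification checkpoint (repo ID from this card's metadata; the label set is not documented here):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="selimyagci/bert-hate-en")
print(classifier("I can't believe you would say that."))
```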
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | selimyagci/bert-hate-en | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:35:59+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
6.5bpw version of: https://huggingface.co/Nitral-AI/Poppy_Porpoise-v0.2-L3-8B
# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.
# Quants Available Here: Thanks to Lewdiculus <3 https://huggingface.co/Lewdiculous/Poppy_Porpoise-v0.2-L3-8B-GGUF-IQ-Imatrix | {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Nitral-AI/Poppy_Porpoise-L3-8B", "Undi95/Llama-3-Unholy-8B"]} | Natkituwu/Poppy_Porpoise-v0.2-L3-8B-6.5bpw-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Nitral-AI/Poppy_Porpoise-L3-8B",
"base_model:Undi95/Llama-3-Unholy-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:36:02+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-Nitral-AI/Poppy_Porpoise-L3-8B #base_model-Undi95/Llama-3-Unholy-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| !image/jpeg
6.5bpw version of: URL
# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.
# Quants Available Here: Thanks to Lewdiculus <3 URL | [
"# \"Poppy Porpoise\" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.",
"# Quants Available Here: Thanks to Lewdiculus <3 URL"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-Nitral-AI/Poppy_Porpoise-L3-8B #base_model-Undi95/Llama-3-Unholy-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# \"Poppy Porpoise\" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.",
"# Quants Available Here: Thanks to Lewdiculus <3 URL"
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0269
## Model description
More information needed
## Intended uses & limitations
More information needed
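A minimal inference sketch for this fine-tuned checkpoint (repo ID from this card's metadata; the confidence threshold is illustrative):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

processor = AutoImageProcessor.from_pretrained("AnettSand/detr")
model = DetrForObjectDetection.from_pretrained("AnettSand/detr")

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into (score, label, box) detections
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```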
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4399 | 0.64 | 100 | 1.5428 |
| 1.4239 | 1.27 | 200 | 1.2347 |
| 1.2024 | 1.91 | 300 | 1.0768 |
| 1.099 | 2.55 | 400 | 1.0269 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "detr", "results": []}]} | AnettSand/detr | null | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:39:09+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us
| detr
====
This model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0269
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 64
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tomaszki/llama-1-a | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:40:51+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | adapter-transformers |
# Adapter `BigTMiami/micro_par_bn_v_3_help_class_adp_lr_0003_S_0` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_MICRO_helpfulness_dataset](https://huggingface.co/datasets/BigTMiami/amazon_MICRO_helpfulness_dataset/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/micro_par_bn_v_3_help_class_adp_lr_0003_S_0", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_MICRO_helpfulness_dataset"]} | BigTMiami/micro_par_bn_v_3_help_class_adp_lr_0003_S_0 | null | [
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_MICRO_helpfulness_dataset",
"region:us"
] | null | 2024-04-20T18:42:40+00:00 | [] | [] | TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us
|
# Adapter 'BigTMiami/micro_par_bn_v_3_help_class_adp_lr_0003_S_0' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
| [
"# Adapter 'BigTMiami/micro_par_bn_v_3_help_class_adp_lr_0003_S_0' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us \n",
"# Adapter 'BigTMiami/micro_par_bn_v_3_help_class_adp_lr_0003_S_0' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | domenicrosati/lens-loss-minimality_lr_2e-5_attack_meta-llama_Llama-2-7b-chat-hf_1_num_layers_32_3e-5_1k | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:44:47+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
sentence-similarity | sentence-transformers |
# atasoglu/mbert-base-cased-nli-stsb-tr
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model was adapted from [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) and fine-tuned on these datasets:
- [nli_tr](https://huggingface.co/datasets/nli_tr)
- [emrecan/stsb-mt-turkish](https://huggingface.co/datasets/emrecan/stsb-mt-turkish)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('atasoglu/mbert-base-cased-nli-stsb-tr')
embeddings = model.encode(sentences)
print(embeddings)
```
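As a short follow-up (not part of the original card), the embeddings above can be scored directly with the library's cosine-similarity helper, which is the typical next step for sentence-similarity tasks:
```python
from sentence_transformers import util

# Pairwise cosine similarities between the embeddings computed above.
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)
```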
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('atasoglu/mbert-base-cased-nli-stsb-tr')
model = AutoModel.from_pretrained('atasoglu/mbert-base-cased-nli-stsb-tr')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
Results achieved on the [STS-b](https://huggingface.co/datasets/emrecan/stsb-mt-turkish) test split are given below:
```txt
Cosine-Similarity : Pearson: 0.8152 Spearman: 0.8130
Manhattan-Distance: Pearson: 0.8049 Spearman: 0.8128
Euclidean-Distance: Pearson: 0.8049 Spearman: 0.8126
Dot-Product-Similarity: Pearson: 0.7878 Spearman: 0.7822
```
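The evaluation script itself is not shown in the card; the following hedged sketch reproduces this kind of measurement with `EmbeddingSimilarityEvaluator`. The column names (`sentence1`, `sentence2`, `score`) and the 0–5 gold-score scale are assumptions about the dataset layout:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("atasoglu/mbert-base-cased-nli-stsb-tr")
test = load_dataset("emrecan/stsb-mt-turkish", split="test")

evaluator = EmbeddingSimilarityEvaluator(
    test["sentence1"],
    test["sentence2"],
    [s / 5.0 for s in test["score"]],  # assumed 0-5 gold scores, normalized to [0, 1]
    name="stsb-tr-test",
)
print(evaluator(model))
```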
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 180 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 18,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 108,
"weight_decay": 0.01
}
```
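Putting the parameters above together, a hedged reconstruction of the STS fine-tuning stage might look like this. Only the hyperparameters come from the card; the base-model initialization, dataset column names, and label normalization are assumptions:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Plain HF checkpoints get a mean-pooling module added automatically.
model = SentenceTransformer("google-bert/bert-base-multilingual-cased")

rows = load_dataset("emrecan/stsb-mt-turkish", split="train")
train_examples = [
    InputExample(texts=[r["sentence1"], r["sentence2"]], label=r["score"] / 5.0)
    for r in rows
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=108,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
)
```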
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"language": ["tr"], "license": "apache-2.0", "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["nli_tr", "emrecan/stsb-mt-turkish"], "pipeline_tag": "sentence-similarity"} | atasoglu/mbert-base-cased-nli-stsb-tr | null | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"tr",
"dataset:nli_tr",
"dataset:emrecan/stsb-mt-turkish",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:44:50+00:00 | [] | [
"tr"
] | TAGS
#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #tr #dataset-nli_tr #dataset-emrecan/stsb-mt-turkish #license-apache-2.0 #endpoints_compatible #region-us
|
# atasoglu/mbert-base-cased-nli-stsb-tr
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model was adapted from google-bert/bert-base-multilingual-cased and fine-tuned on these datasets:
- nli_tr
- emrecan/stsb-mt-turkish
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
## Evaluation Results
Results achieved on the STS-b test split are given below:
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 180 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# atasoglu/mbert-base-cased-nli-stsb-tr\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.\n\nThis model was adapted from google-bert/bert-base-multilingual-cased and fine-tuned on these datasets:\n- nli_tr\n- emrecan/stsb-mt-turkish",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\nAchieved results on the STS-b test split are given below:",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 180 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #tr #dataset-nli_tr #dataset-emrecan/stsb-mt-turkish #license-apache-2.0 #endpoints_compatible #region-us \n",
"# atasoglu/mbert-base-cased-nli-stsb-tr\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.\n\nThis model was adapted from google-bert/bert-base-multilingual-cased and fine-tuned on these datasets:\n- nli_tr\n- emrecan/stsb-mt-turkish",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\nAchieved results on the STS-b test split are given below:",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 180 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/vicgalle/Configurable-Llama-3-8B-v0.3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
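Not from the original card, but as one hedged option: the quants can also be used from Python via `llama-cpp-python`. The filename below is the Q4_K_M entry from the table that follows; the context size and prompt are illustrative.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # assumes llama-cpp-python is installed

path = hf_hub_download(
    repo_id="mradermacher/Configurable-Llama-3-8B-v0.3-GGUF",
    filename="Configurable-Llama-3-8B-v0.3.Q4_K_M.gguf",  # taken from the table below
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```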
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Llama-3-8B-v0.3-GGUF/resolve/main/Configurable-Llama-3-8B-v0.3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["safety"], "datasets": ["vicgalle/configurable-system-prompt-multitask"], "base_model": "vicgalle/Configurable-Llama-3-8B-v0.3", "quantized_by": "mradermacher"} | mradermacher/Configurable-Llama-3-8B-v0.3-GGUF | null | [
"transformers",
"gguf",
"safety",
"en",
"dataset:vicgalle/configurable-system-prompt-multitask",
"base_model:vicgalle/Configurable-Llama-3-8B-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-20T18:47:27+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #safety #en #dataset-vicgalle/configurable-system-prompt-multitask #base_model-vicgalle/Configurable-Llama-3-8B-v0.3 #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #safety #en #dataset-vicgalle/configurable-system-prompt-multitask #base_model-vicgalle/Configurable-Llama-3-8B-v0.3 #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null | adapter-transformers |
# Adapter `BigTMiami/micro_par_bn_v_4_help_class_adp_lr_0003_S_0` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_MICRO_helpfulness_dataset](https://huggingface.co/datasets/BigTMiami/amazon_MICRO_helpfulness_dataset/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/micro_par_bn_v_4_help_class_adp_lr_0003_S_0", source="hf", set_active=True)
```
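For completeness (not in the original card), a hedged inference sketch using the loaded adapter and its classification head; the input sentence is illustrative, and the label semantics (helpful vs. not helpful) are assumed from the dataset name:
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This review told me exactly what I needed to know.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # head loaded together with the adapter
print(logits.softmax(dim=-1))       # assumed: class probabilities for helpfulness
```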
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_MICRO_helpfulness_dataset"]} | BigTMiami/micro_par_bn_v_4_help_class_adp_lr_0003_S_0 | null | [
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_MICRO_helpfulness_dataset",
"region:us"
] | null | 2024-04-20T18:50:20+00:00 | [] | [] | TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us
|
# Adapter 'BigTMiami/micro_par_bn_v_4_help_class_adp_lr_0003_S_0' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
| [
"# Adapter 'BigTMiami/micro_par_bn_v_4_help_class_adp_lr_0003_S_0' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_MICRO_helpfulness_dataset #region-us \n",
"# Adapter 'BigTMiami/micro_par_bn_v_4_help_class_adp_lr_0003_S_0' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_MICRO_helpfulness_dataset dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0_ablation_declr_4iters_256batch_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `trl` training sketch follows the list):
- learning_rate: 4e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
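The card names `trl` and DPO but does not include the training code. Below is a hedged sketch consistent with the hyperparameters above; the `beta` value and the dataset split are assumptions, and the `DPOTrainer` signature follows trl releases contemporary with the Transformers 4.36 stack listed under framework versions (newer trl moves these arguments into `DPOConfig`):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "HuggingFaceH4/mistral-7b-sft-beta"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")  # assumed split

args = TrainingArguments(
    output_dir="dpo-iter-1",
    learning_rate=4e-7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,  # 8 devices x 8 x 4 = 256 effective batch
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # trl builds a frozen reference copy when None
    args=args,
    beta=0.1,        # assumption: the card does not state beta
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```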
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.0_ablation_declr_4iters_256batch_iter_1", "results": []}]} | ZhangShenao/0.0_ablation_declr_4iters_256batch_iter_1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T18:50:46+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.0_ablation_declr_4iters_256batch_iter_1
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.0_ablation_declr_4iters_256batch_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 4e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.0_ablation_declr_4iters_256batch_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 4e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |