pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1–900k) | metadata (stringlengths 2–438k) | id (stringlengths 5–122) | last_modified (null) | tags (sequencelengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25) | arxiv (sequencelengths 0–201) | languages (sequencelengths 0–1.83k) | tags_str (stringlengths 17–9.34k) | text_str (stringlengths 0–389k) | text_lists (sequencelengths 0–722) | processed_texts (sequencelengths 1–723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation | transformers |
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "JiazhenLiu01/llamma2-chat-test1"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map={"": 0},
    torch_dtype='auto',
    # offload_folder="offload"
).eval()
input_text = "### Human: And both [redacted] and you are keen for next year?### Assistant:"
# Encode the input text into tokens
input_ids = tokenizer.encode(input_text, return_tensors="pt")
input_ids = input_ids.to("cuda")
# Use the trained model for dialogue inference
output = model.generate(input_ids, max_length=200, repetition_penalty=2.0)
# Decode the model output tokens into text
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print("Original generated response:", output_text)
# Find the position of "### Assistant:" and keep only the text after it
assistant_index = output_text.find("### Assistant:")
assistant_response = output_text[assistant_index + len("### Assistant:"):]
# Trim the reply to the last complete sentence (up to the final period, if any)
last_period = assistant_response.rfind(".")
if last_period != -1:
    assistant_response = assistant_response[:last_period + 1]
print("Generated response:", assistant_response.strip())
``` | {"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]} | JiazhenLiu01/llamma2-chat-test1 | null | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:47:49+00:00 | [] | [] | TAGS
#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us
|
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit AutoTrain.
# Usage
| [
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] | [
"TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n",
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-lora-food101
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1255
- Accuracy: 0.964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 9 | 0.5628 | 0.9 |
| 2.1887 | 2.0 | 18 | 0.2129 | 0.948 |
| 0.3516 | 3.0 | 27 | 0.1464 | 0.952 |
| 0.2151 | 4.0 | 36 | 0.1255 | 0.964 |
| 0.183 | 5.0 | 45 | 0.1249 | 0.958 |
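
To run inference with this checkpoint, the LoRA adapter can be loaded back onto the base model with PEFT. The sketch below is only a hedged example: the image path is a placeholder, `num_labels=101` assumes the standard Food-101 label set, and it assumes the fine-tuned classifier head is stored in the adapter (as PEFT does when the head is listed in `modules_to_save`).

```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hedged sketch: load the LoRA adapter on top of the base ViT for inference.
# num_labels=101 and the saved classifier head are assumptions, not documented facts.
base = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=101
)
model = PeftModel.from_pretrained(
    base, "baraah/vit-base-patch16-224-in21k-finetuned-lora-food101"
).eval()
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

inputs = processor(images=Image.open("example.jpg"), return_tensors="pt")  # placeholder image path
with torch.no_grad():
    logits = model(**inputs).logits
print("predicted class id:", logits.argmax(-1).item())
```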
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "vit-base-patch16-224-in21k-finetuned-lora-food101", "results": []}]} | baraah/vit-base-patch16-224-in21k-finetuned-lora-food101 | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T14:49:10+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #region-us
| vit-base-patch16-224-in21k-finetuned-lora-food101
=================================================
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1255
* Accuracy: 0.964
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 512
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 512\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 512\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo openlm-research/open_llama_3b are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/openlm-research-open_llama_3b-HQQ-2bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the causal-LM wrapper cannot load this repo
    model = AutoHQQHFModel.from_quantized("PrunaAI/openlm-research-open_llama_3b-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
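
If you want to reproduce a rough "Sync"-style latency number in your own environment (see the FAQ above), a minimal sketch, assuming the `model`, `tokenizer`, and `input_ids` objects from the snippet above and a CUDA device, could look like this. It is not the exact harness behind the plot above, only a sanity check.

```python
import time
import torch

torch.cuda.synchronize()                      # make sure pending GPU work is done before timing
start = time.perf_counter()
_ = model.generate(input_ids, max_new_tokens=216)
torch.cuda.synchronize()                      # stop timing only after all GPU kernels finish
print(f"sync latency: {time.perf_counter() - start:.2f} s")
```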
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, openlm-research/open_llama_3b, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/openlm-research-open_llama_3b-HQQ-2bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:49:27+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo openlm-research/open_llama_3b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model openlm-research/open_llama_3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo openlm-research/open_llama_3b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model openlm-research/open_llama_3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo openlm-research/open_llama_3b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model openlm-research/open_llama_3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-to-image | diffusers | # imagen
<Gallery />
## Model description
A cool watercolor style
## Download model
Weights for this model are available in Safetensors format.
[Download](/chickenthi/imagen/tree/main) them in the Files & versions tab.
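
As a hedged usage sketch (the prompt wording is only a guess, and it assumes a CUDA device and a diffusers release that supports `load_lora_weights`), the LoRA can be applied to its base model like this:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: apply this watercolor LoRA to its Stable Diffusion 1.5 base model.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("chickenthi/imagen")  # adapter weights from this repo

image = pipe("a mountain village at dawn, watercolor style").images[0]  # prompt is a guess
image.save("watercolor.png")
```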
| {"license": "cc", "tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "-", "output": {"url": "images/imagen.png"}}], "base_model": "runwayml/stable-diffusion-v1-5"} | chickenthi/imagen | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"license:cc",
"region:us"
] | null | 2024-04-16T14:49:28+00:00 | [] | [] | TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-runwayml/stable-diffusion-v1-5 #license-cc #region-us
| # imagen
<Gallery />
## Model description
A cool watercolor style
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
| [
"# imagen\n\n<Gallery />",
"## Model description \n\nA cool watercolor style",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] | [
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-runwayml/stable-diffusion-v1-5 #license-cc #region-us \n",
"# imagen\n\n<Gallery />",
"## Model description \n\nA cool watercolor style",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
text-generation | transformers |
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` | {"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]} | ikerm11/FTLLAMATEST002 | null | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-04-16T14:50:03+00:00 | [] | [] | TAGS
#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #has_space #region-us
|
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit AutoTrain.
# Usage
| [
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] | [
"TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #has_space #region-us \n",
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ScreamProx/t5-russian-spell | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:50:40+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo facebook/opt-1.3b are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/facebook-opt-1.3b-HQQ-2bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the causal-LM wrapper cannot load this repo
    model = AutoHQQHFModel.from_quantized("PrunaAI/facebook-opt-1.3b-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
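
To see how the 2-bit weights translate into peak GPU memory at inference time on your own hardware, a rough check (not the benchmark behind the plot above; it assumes the `model` and `input_ids` objects from the snippet above and a CUDA device) could be:

```python
import torch

torch.cuda.reset_peak_memory_stats()          # start from a clean peak-memory counter
_ = model.generate(input_ids, max_new_tokens=216)
torch.cuda.synchronize()                      # wait for all GPU work before reading the counter
print(f"peak GPU memory during generate: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
```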
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, facebook/opt-1.3b, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/facebook-opt-1.3b-HQQ-2bit-smashed | null | [
"transformers",
"opt",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:51:53+00:00 | [] | [] | TAGS
#transformers #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo facebook/opt-1.3b installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-1.3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo facebook/opt-1.3b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-1.3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo facebook/opt-1.3b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-1.3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo facebook/opt-350m are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/facebook-opt-350m-HQQ-2bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the causal-LM wrapper cannot load this repo
    model = AutoHQQHFModel.from_quantized("PrunaAI/facebook-opt-350m-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
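
A hedged way to inspect that config without cloning the whole repo is sketched below; the filename is taken from this card, so adjust it if the file actually lives under a `model/` subfolder as the FAQ suggests.

```python
import json
from huggingface_hub import hf_hub_download

# Download and pretty-print the smash config for this repo.
path = hf_hub_download(
    repo_id="PrunaAI/facebook-opt-350m-HQQ-2bit-smashed",
    filename="smash_config.json",  # assumption: stored at the repo root
)
with open(path) as f:
    print(json.dumps(json.load(f), indent=2))
```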
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, facebook/opt-350m, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/facebook-opt-350m-HQQ-2bit-smashed | null | [
"transformers",
"opt",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:52:02+00:00 | [] | [] | TAGS
#transformers #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo facebook/opt-350m installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-350m before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo facebook/opt-350m installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-350m before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo facebook/opt-350m installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-350m before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 145383 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "constantlr",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0
}
```
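Put together, the parameters above correspond roughly to the following training script. This is an illustrative reconstruction, not the original code: the base checkpoint, the example pairs, and the data loading are assumptions (the actual training pairs behind the 145,383-batch DataLoader are not published here).
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Assumed base checkpoint, inferred from the repository name.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

# Placeholder examples; in practice one InputExample per (query, clicked result) pair.
train_examples = [
    InputExample(texts=["example search query", "title of the item clicked for that query"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # in-batch negatives, cosine similarity

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    scheduler="constantlr",
    warmup_steps=10000,
    optimizer_params={"lr": 3e-05},
    weight_decay=0,
    max_grad_norm=1,
)
```
MultipleNegativesRankingLoss treats the other passages in a batch as negatives, which is why only positive pairs need to be supplied.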
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | alexjones1925/all-MiniLM-L12-v2-ibotta-search-clicks-dev-v1 | null | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:55:11+00:00 | [] | [] | TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 145383 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 145383 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 145383 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 | {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | HongxuanLi/TinyLLaMA-RS | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-16T14:55:29+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since both can be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo openlm-research/open_llama_3b_v2 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/openlm-research-open_llama_3b_v2-HQQ-2bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/openlm-research-open_llama_3b_v2-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b_v2")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, openlm-research/open_llama_3b_v2, which provided the base model, before using this smashed model. The license of the `pruna-engine` package is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/openlm-research-open_llama_3b_v2-HQQ-2bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:55:34+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo openlm-research/open_llama_3b_v2 installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model openlm-research/open_llama_3b_v2 before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo openlm-research/open_llama_3b_v2 installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model openlm-research/open_llama_3b_v2 before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo openlm-research/open_llama_3b_v2 installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model openlm-research/open_llama_3b_v2 before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
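As a rough illustration of how such a checkpoint is typically used, the sketch below runs a small policy network greedily for one episode of CartPole-v1. The `Policy` architecture, hidden size, and checkpoint filename are assumptions based on the course material; check this repository's files for the actual ones.
```python
import gymnasium as gym
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    def __init__(self, state_size=4, action_size=2, hidden_size=16):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, action_size)

    def forward(self, x):
        return F.softmax(self.fc2(F.relu(self.fc1(x))), dim=1)

    def act(self, state):
        state = torch.from_numpy(state).float().unsqueeze(0)
        probs = self.forward(state)
        return torch.argmax(probs, dim=1).item()  # greedy action for evaluation

env = gym.make("CartPole-v1")
policy = Policy()
# policy.load_state_dict(torch.load("model.pt"))  # filename/format are assumptions; see the repo files

state, _ = env.reset()
total_reward, done = 0.0, False
while not done:
    action = policy.act(state)
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```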
| {"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-CartPole-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "355.80 +/- 161.58", "name": "mean_reward", "verified": false}]}]}]} | minindu-liya99/Reinforce-CartPole-v1 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-04-16T14:55:50+00:00 | [] | [] | TAGS
#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing CartPole-v1
This is a trained model of a Reinforce agent playing CartPole-v1 .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-2024-04-16-16-55-6wLni
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
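For reference, these hyperparameters map roughly onto the following PEFT + TRL setup. This is a sketch rather than the exact training script: the dataset loading, the LoRA configuration, the text field name, and the sequence length are assumptions.
```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "microsoft/phi-1_5"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder: the "generator" dataset used for this run is not published.
dataset = load_dataset("json", data_files="train.json", split="train")

peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05)  # assumed LoRA settings

args = TrainingArguments(
    output_dir="phi-1_5-2024-04-16-16-55-6wLni",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 8
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=3,
    seed=42,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    args=args,
    dataset_text_field="text",   # assumed field name
    max_seq_length=512,          # assumed
)
trainer.train()
```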
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "microsoft/phi-1_5", "model-index": [{"name": "phi-1_5-2024-04-16-16-55-6wLni", "results": []}]} | frenkd/phi-1_5-2024-04-16-16-55-6wLni | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2024-04-16T14:55:57+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-1_5 #license-mit #region-us
|
# phi-1_5-2024-04-16-16-55-6wLni
This model is a fine-tuned version of microsoft/phi-1_5 on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# phi-1_5-2024-04-16-16-55-6wLni\n\nThis model is a fine-tuned version of microsoft/phi-1_5 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-1_5 #license-mit #region-us \n",
"# phi-1_5-2024-04-16-16-55-6wLni\n\nThis model is a fine-tuned version of microsoft/phi-1_5 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
sentence-similarity | sentence-transformers |
# DiTy/bi-encoder-russian-msmarco
This is a [sentence-transformers](https://www.SBERT.net) model based on the pre-trained [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) and fine-tuned with the [MS-MARCO Russian passage ranking dataset](https://huggingface.co/datasets/unicamp-dl/mmarco):
It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for asymmetric semantic search in the Russian language.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
sentences = [
'какое состояние может определить тест с физической нагрузкой',
'Тест с физической нагрузкой разработан, чтобы выяснить, содержат ли одна или несколько коронарных артерий, питающих сердце, жировые отложения (бляшки), которые блокируют кровеносный сосуд на 70% или более. Для подтверждения результата часто требуется дополнительное тестирование. Результат испытаний.',
'Тест направлен на то, чтобы выяснить, не получает ли какой-либо участок сердечной мышцы достаточный кровоток во время тренировки. Он похож на тест с физической нагрузкой, фармакологический или химический стресс-тест. Он также известен при стресс-тесте таллием, сканировании перфузии миокарда или радионуклидном тесте.'
]
model = SentenceTransformer('DiTy/bi-encoder-russian-msmarco')
embeddings = model.encode(sentences)
results = util.semantic_search(embeddings[0], embeddings[1:])[0]
print(f"Sentence similarity: {results}")
# `Sentence similarity: [{'corpus_id': 0, 'score': 0.8545001149177551}, {'corpus_id': 1, 'score': 0.023047829046845436}]`
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = [
'красный плоский лишай вызван стрессом',
'В большинстве случаев причину появления красного плоского лишая невозможно. Это не вызвано стрессом, но иногда эмоциональный стресс усугубляет ситуацию. Известно, что это заболевание возникает после контакта с определенными химическими веществами, такими как те, которые используются для проявления цветных фотографий. У некоторых людей определенные лекарства вызывают красный плоский лишай. Эти препараты включают лекарства от высокого кровяного давления, болезней сердца, диабета, артрита и малярии, антибиотики, нестероидные противовоспалительные обезболивающие и т. Д.',
'К сожалению для работодателей, в разных штатах страны есть несколько дел, по которым суды установили, что стресс, вызванный работой, может быть основанием для увольнения с работы, если стресс достигает уровня серьезного состояния здоровья, которое вызывает они не могут выполнять свою работу.',
]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DiTy/bi-encoder-russian-msmarco')
model = AutoModel.from_pretrained('DiTy/bi-encoder-russian-msmarco')
# Tokenize sentences
encoded_input = tokenizer(sentences, max_length=512, padding='max_length', truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1989041 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 250000,
"evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
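As a rough sketch, the configuration above corresponds to a fine-tuning script along the following lines. The data preparation for unicamp-dl/mmarco (Russian) and the evaluator inputs are placeholders and may differ from the pipeline actually used.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.evaluation import InformationRetrievalEvaluator
from torch.utils.data import DataLoader

# Pre-trained base named in the card; sentence-transformers adds mean pooling on top automatically.
model = SentenceTransformer("DeepPavlov/rubert-base-cased")

# One (query, relevant passage) pair per example; negatives come from other passages in the batch.
train_examples = [
    InputExample(texts=["какое состояние может определить тест с физической нагрузкой",
                        "Тест с физической нагрузкой разработан, чтобы выяснить ..."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

# Tiny placeholder evaluator; in practice built from the mMARCO (Russian) dev split.
evaluator = InformationRetrievalEvaluator(
    queries={"q1": "пример запроса"},
    corpus={"d1": "релевантный пассаж", "d2": "нерелевантный пассаж"},
    relevant_docs={"q1": {"d1"}},
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    evaluation_steps=250000,
    epochs=5,
    scheduler="WarmupLinear",
    warmup_steps=10000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```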
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
``` | {"language": ["ru"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "rubert", "bi-encoder", "msmarco"], "datasets": ["unicamp-dl/mmarco"], "pipeline_tag": "sentence-similarity", "base_model": "DeepPavlov/rubert-base-cased", "widget": [{"source_sentence": "\u043e\u043f\u0440\u0435\u0434\u0435\u043b\u0435\u043d\u0438\u0435 \u043d\u043e\u0432\u0438\u0447\u043a\u0430", "sentences": ["\u0427\u0430\u0441\u0442\u044c \u043f\u044f\u0442\u0430\u044f: \u041f\u043e\u0441\u0435\u0449\u0435\u043d\u0438\u0435 \u0445\u0443\u0434\u043e\u0436\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0433\u043e \u043c\u0443\u0437\u0435\u044f. \u0414\u043b\u044f \u043d\u043e\u0432\u0438\u0447\u043a\u0430 \u043f\u043e\u0441\u0435\u0449\u0435\u043d\u0438\u0435 \u0445\u0443\u0434\u043e\u0436\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0433\u043e \u043c\u0443\u0437\u0435\u044f \u043c\u043e\u0436\u0435\u0442 \u0441\u0442\u0430\u0442\u044c \u043d\u0435\u043f\u0440\u043e\u0441\u0442\u043e\u0439 \u0437\u0430\u0434\u0430\u0447\u0435\u0439. \u0411\u043e\u043b\u044c\u0448\u0438\u043d\u0441\u0442\u0432\u043e \u043c\u0443\u0437\u0435\u0435\u0432 \u043e\u0447\u0435\u043d\u044c \u0431\u043e\u043b\u044c\u0448\u0438\u0435 \u0438 \u0442\u0440\u0435\u0431\u0443\u044e\u0442 \u0432\u044b\u043d\u043e\u0441\u043b\u0438\u0432\u043e\u0441\u0442\u0438 \u0438 \u0445\u043e\u0440\u043e\u0448\u0435\u0433\u043e \u0447\u0443\u0432\u0441\u0442\u0432\u0430 \u043d\u0430\u043f\u0440\u0430\u0432\u043b\u0435\u043d\u0438\u044f. \u041f\u043e\u0442\u0440\u0430\u0442\u044c\u0442\u0435 \u043d\u0435\u043a\u043e\u0442\u043e\u0440\u043e\u0435 \u0432\u0440\u0435\u043c\u044f \u043d\u0430 \u0442\u043e, \u0447\u0442\u043e\u0431\u044b \u0443\u0437\u043d\u0430\u0442\u044c \u0431\u043e\u043b\u044c\u0448\u0435 \u043e \u043c\u0443\u0437\u0435\u0435, \u043f\u0440\u0435\u0436\u0434\u0435 \u0447\u0435\u043c \u043e\u0442\u043f\u0440\u0430\u0432\u0438\u0442\u044c\u0441\u044f \u0432 \u043f\u0443\u0442\u044c, - \u043b\u0443\u0447\u0448\u0438\u0439 \u0441\u043f\u043e\u0441\u043e\u0431 \u043e\u0431\u0435\u0441\u043f\u0435\u0447\u0438\u0442\u044c \u0431\u043e\u043b\u0435\u0435 \u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0442\u0438\u0432\u043d\u043e\u0435 \u0438 \u043f\u0440\u0438\u044f\u0442\u043d\u043e\u0435 \u043f\u043e\u0441\u0435\u0449\u0435\u043d\u0438\u0435. 
\u041f\u0420\u0415\u0416\u0414\u0415 \u0427\u0415\u041c \u0422\u042b \u0423\u0419\u0414\u0415\u0428\u042c.", "\u041e\u043f\u0440\u0435\u0434\u0435\u043b\u0435\u043d\u0438\u0435 \u043d\u043e\u0432\u0438\u0447\u043a\u0430 - \u044d\u0442\u043e \u043d\u043e\u0432\u0438\u0447\u043e\u043a \u0438\u043b\u0438 \u0447\u0435\u043b\u043e\u0432\u0435\u043a \u0432 \u043d\u0430\u0447\u0430\u043b\u0435 \u0447\u0435\u0433\u043e-\u043b\u0438\u0431\u043e."], "example_title": "Example 1"}, {"source_sentence": "\u043a\u0430\u043a\u043e\u0435 \u0441\u043e\u0441\u0442\u043e\u044f\u043d\u0438\u0435 \u043c\u043e\u0436\u0435\u0442 \u043e\u043f\u0440\u0435\u0434\u0435\u043b\u0438\u0442\u044c \u0442\u0435\u0441\u0442 \u0441 \u0444\u0438\u0437\u0438\u0447\u0435\u0441\u043a\u043e\u0439 \u043d\u0430\u0433\u0440\u0443\u0437\u043a\u043e\u0439", "sentences": ["\u0422\u0435\u0441\u0442 \u0441 \u0444\u0438\u0437\u0438\u0447\u0435\u0441\u043a\u043e\u0439 \u043d\u0430\u0433\u0440\u0443\u0437\u043a\u043e\u0439 \u0440\u0430\u0437\u0440\u0430\u0431\u043e\u0442\u0430\u043d, \u0447\u0442\u043e\u0431\u044b \u0432\u044b\u044f\u0441\u043d\u0438\u0442\u044c, \u0441\u043e\u0434\u0435\u0440\u0436\u0430\u0442 \u043b\u0438 \u043e\u0434\u043d\u0430 \u0438\u043b\u0438 \u043d\u0435\u0441\u043a\u043e\u043b\u044c\u043a\u043e \u043a\u043e\u0440\u043e\u043d\u0430\u0440\u043d\u044b\u0445 \u0430\u0440\u0442\u0435\u0440\u0438\u0439, \u043f\u0438\u0442\u0430\u044e\u0449\u0438\u0445 \u0441\u0435\u0440\u0434\u0446\u0435, \u0436\u0438\u0440\u043e\u0432\u044b\u0435 \u043e\u0442\u043b\u043e\u0436\u0435\u043d\u0438\u044f (\u0431\u043b\u044f\u0448\u043a\u0438), \u043a\u043e\u0442\u043e\u0440\u044b\u0435 \u0431\u043b\u043e\u043a\u0438\u0440\u0443\u044e\u0442 \u043a\u0440\u043e\u0432\u0435\u043d\u043e\u0441\u043d\u044b\u0439 \u0441\u043e\u0441\u0443\u0434 \u043d\u0430 70% \u0438\u043b\u0438 \u0431\u043e\u043b\u0435\u0435. \u0414\u043b\u044f \u043f\u043e\u0434\u0442\u0432\u0435\u0440\u0436\u0434\u0435\u043d\u0438\u044f \u0440\u0435\u0437\u0443\u043b\u044c\u0442\u0430\u0442\u0430 \u0447\u0430\u0441\u0442\u043e \u0442\u0440\u0435\u0431\u0443\u0435\u0442\u0441\u044f \u0434\u043e\u043f\u043e\u043b\u043d\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0435 \u0442\u0435\u0441\u0442\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u0435. \u0420\u0435\u0437\u0443\u043b\u044c\u0442\u0430\u0442 \u0438\u0441\u043f\u044b\u0442\u0430\u043d\u0438\u0439.", "\u0422\u0435\u0441\u0442 \u043d\u0430\u043f\u0440\u0430\u0432\u043b\u0435\u043d \u043d\u0430 \u0442\u043e, \u0447\u0442\u043e\u0431\u044b \u0432\u044b\u044f\u0441\u043d\u0438\u0442\u044c, \u043d\u0435 \u043f\u043e\u043b\u0443\u0447\u0430\u0435\u0442 \u043b\u0438 \u043a\u0430\u043a\u043e\u0439-\u043b\u0438\u0431\u043e \u0443\u0447\u0430\u0441\u0442\u043e\u043a \u0441\u0435\u0440\u0434\u0435\u0447\u043d\u043e\u0439 \u043c\u044b\u0448\u0446\u044b \u0434\u043e\u0441\u0442\u0430\u0442\u043e\u0447\u043d\u044b\u0439 \u043a\u0440\u043e\u0432\u043e\u0442\u043e\u043a \u0432\u043e \u0432\u0440\u0435\u043c\u044f \u0442\u0440\u0435\u043d\u0438\u0440\u043e\u0432\u043a\u0438. \u041e\u043d \u043f\u043e\u0445\u043e\u0436 \u043d\u0430 \u0442\u0435\u0441\u0442 \u0441 \u0444\u0438\u0437\u0438\u0447\u0435\u0441\u043a\u043e\u0439 \u043d\u0430\u0433\u0440\u0443\u0437\u043a\u043e\u0439, \u0444\u0430\u0440\u043c\u0430\u043a\u043e\u043b\u043e\u0433\u0438\u0447\u0435\u0441\u043a\u0438\u0439 \u0438\u043b\u0438 \u0445\u0438\u043c\u0438\u0447\u0435\u0441\u043a\u0438\u0439 \u0441\u0442\u0440\u0435\u0441\u0441-\u0442\u0435\u0441\u0442. 
\u041e\u043d \u0442\u0430\u043a\u0436\u0435 \u0438\u0437\u0432\u0435\u0441\u0442\u0435\u043d \u043f\u0440\u0438 \u0441\u0442\u0440\u0435\u0441\u0441-\u0442\u0435\u0441\u0442\u0435 \u0442\u0430\u043b\u043b\u0438\u0435\u043c, \u0441\u043a\u0430\u043d\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u0438 \u043f\u0435\u0440\u0444\u0443\u0437\u0438\u0438 \u043c\u0438\u043e\u043a\u0430\u0440\u0434\u0430 \u0438\u043b\u0438 \u0440\u0430\u0434\u0438\u043e\u043d\u0443\u043a\u043b\u0438\u0434\u043d\u043e\u043c \u0442\u0435\u0441\u0442\u0435."], "example_title": "Example 2"}], "model-index": [{"name": "rubert-bi-encoder-mmarcoRU", "results": [{"task": {"type": "Retrieval"}, "dataset": {"name": "mMARCO (Russian)", "type": "unicamp-dl/mmarco", "split": "test"}, "metrics": [{"type": "cos_sim-Recall@5", "value": 0.9997142857142856}, {"type": "cos_sim-MRR@10", "value": 0.9859809523809522}, {"type": "cos_sim-NDCG@10", "value": 0.9895648869214424}, {"type": "cos_sim-MAP@100", "value": 0.9859928571428572}, {"type": "dot_score-Recall@5", "value": 0.9995714285714286}, {"type": "dot_score-MRR@10", "value": 0.9821190476190476}, {"type": "dot_score-NDCG@10", "value": 0.986705516337711}, {"type": "dot_score-MAP@100", "value": 0.9821300366300368}]}]}]} | DiTy/bi-encoder-russian-msmarco | null | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"rubert",
"bi-encoder",
"msmarco",
"ru",
"dataset:unicamp-dl/mmarco",
"base_model:DeepPavlov/rubert-base-cased",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:56:39+00:00 | [] | [
"ru"
] | TAGS
#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #rubert #bi-encoder #msmarco #ru #dataset-unicamp-dl/mmarco #base_model-DeepPavlov/rubert-base-cased #model-index #endpoints_compatible #region-us
|
# DiTy/bi-encoder-russian-msmarco
This is a sentence-transformers model based on the pre-trained DeepPavlov/rubert-base-cased and fine-tuned on the MS-MARCO Russian passage ranking dataset (unicamp-dl/mmarco).
It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for asymmetric semantic search in the Russian language.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
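(The original snippet is not preserved in this processed copy of the card; the following is a minimal sketch of the standard sentence-transformers pattern, with placeholder query and passage strings that you would replace with your own Russian text.)

```python
# pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer, util

# Model id taken from this card
model = SentenceTransformer("DiTy/bi-encoder-russian-msmarco")

# Asymmetric semantic search: embed a query and candidate passages,
# then rank the passages by cosine similarity.
query = "your search query in Russian"
passages = [
    "first candidate passage in Russian",
    "second candidate passage in Russian",
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

scores = util.cos_sim(query_emb, passage_embs)  # shape: (1, num_passages)
best = scores.argmax().item()
print(best, passages[best])
```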
## Usage (HuggingFace Transformers)
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
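A minimal sketch of that pattern with the plain transformers API is shown below (mean pooling is assumed here, as it is the usual choice for this kind of bi-encoder; check the model's pooling configuration before relying on it):

```python
import torch
from transformers import AutoTokenizer, AutoModel


def mean_pooling(model_output, attention_mask):
    # Average the token embeddings, ignoring padding tokens
    token_embeddings = model_output[0]
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)


tokenizer = AutoTokenizer.from_pretrained("DiTy/bi-encoder-russian-msmarco")
model = AutoModel.from_pretrained("DiTy/bi-encoder-russian-msmarco")

sentences = ["Пример предложения", "Ещё один пример"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    output = model(**encoded)

embeddings = mean_pooling(output, encoded["attention_mask"])
print(embeddings.shape)  # (2, 768)
```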
## Training
The model was trained with the parameters:
DataLoader:
'torch.utils.data.dataloader.DataLoader' of length 1989041 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
| [
"# DiTy/bi-encoder-russian-msmarco\n\nThis is a sentence-transformers model based on a pre-trained DeepPavlov/rubert-base-cased and finetuned with MS-MARCO Russian passage ranking dataset:\nIt maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for asymmetric semantic search in the Russian language.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1989041 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture"
] | [
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #rubert #bi-encoder #msmarco #ru #dataset-unicamp-dl/mmarco #base_model-DeepPavlov/rubert-base-cased #model-index #endpoints_compatible #region-us \n",
"# DiTy/bi-encoder-russian-msmarco\n\nThis is a sentence-transformers model based on a pre-trained DeepPavlov/rubert-base-cased and finetuned with MS-MARCO Russian passage ranking dataset:\nIt maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for asymmetric semantic search in the Russian language.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1989041 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
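No snippet is provided in the card; below is a minimal sketch assuming the standard transformers sequence-classification API (the input string is a placeholder, and the label mapping depends on how the model was fine-tuned):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "kiran158/fine_tuned_bert-base-uncased-2"  # repo id of this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax(dim=-1).item()
# model.config.id2label maps ids to label names if it was set during fine-tuning
print(predicted_class_id, model.config.id2label.get(predicted_class_id, "unknown"))
```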
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | kiran158/fine_tuned_bert-base-uncased-2 | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:57:45+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either can be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo HuggingFaceM4/tiny-random-LlamaForCausalLM are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/HuggingFaceM4-tiny-random-LlamaForCausalLM-HQQ-2bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/HuggingFaceM4-tiny-random-LlamaForCausalLM-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceM4/tiny-random-LlamaForCausalLM")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model HuggingFaceM4/tiny-random-LlamaForCausalLM before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/HuggingFaceM4-tiny-random-LlamaForCausalLM-HQQ-2bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:59:04+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo HuggingFaceM4/tiny-random-LlamaForCausalLM installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model HuggingFaceM4/tiny-random-LlamaForCausalLM before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo HuggingFaceM4/tiny-random-LlamaForCausalLM installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model HuggingFaceM4/tiny-random-LlamaForCausalLM before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo HuggingFaceM4/tiny-random-LlamaForCausalLM installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model HuggingFaceM4/tiny-random-LlamaForCausalLM before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either can be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo KoboldAI/OPT-2.7B-Erebus are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/KoboldAI-OPT-2.7B-Erebus-HQQ-2bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/KoboldAI-OPT-2.7B-Erebus-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("KoboldAI/OPT-2.7B-Erebus")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model KoboldAI/OPT-2.7B-Erebus before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/KoboldAI-OPT-2.7B-Erebus-HQQ-2bit-smashed | null | [
"transformers",
"opt",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:00:17+00:00 | [] | [] | TAGS
#transformers #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo KoboldAI/OPT-2.7B-Erebus installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model KoboldAI/OPT-2.7B-Erebus before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo KoboldAI/OPT-2.7B-Erebus installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model KoboldAI/OPT-2.7B-Erebus before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo KoboldAI/OPT-2.7B-Erebus installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model KoboldAI/OPT-2.7B-Erebus before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
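
For reference, the same settings expressed through the transformers API would look roughly like this (a sketch: the values mirror the list above, the base model id is a placeholder since the card does not name it, and `AutoModelForCausalLM` is an assumption about the task type):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model_id = "base-model-id"  # placeholder: the base model is not stated in this card
base_model = AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config)
```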
### Framework versions
- PEFT 0.4.0
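
A sketch of how an adapter saved from this setup is typically loaded back for inference (the repo id is taken from this card; the base model is resolved from the adapter config, and the auto class is an assumption):

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "ichrak550/model3"

config = PeftConfig.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```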
| {"library_name": "peft"} | ichrak550/model3 | null | [
"peft",
"region:us"
] | null | 2024-04-16T15:01:19+00:00 | [] | [] | TAGS
#peft #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] | [
"TAGS\n#peft #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] |
sentence-similarity | sentence-transformers |
# luiz-and-robert-thesis/all-mpnet-lr5e-8-margin-1-ep-3-bs-64
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('luiz-and-robert-thesis/all-mpnet-lr5e-8-margin-1-ep-3-bs-64')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=luiz-and-robert-thesis/all-mpnet-lr5e-8-margin-1-ep-3-bs-64)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1472 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.COSINE', 'triplet_margin': 1.0}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1e-08
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 662,
"weight_decay": 0.01
}
```
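
Put together, the run would have been launched roughly like this (an illustrative sketch reconstructed from the parameters above; the base checkpoint and the training triplets are assumptions, not taken from this card):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Base checkpoint is an assumption based on the repository name
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Placeholder triplets: (anchor, positive, negative)
train_examples = [InputExample(texts=["anchor text", "positive text", "negative text"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)

train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.COSINE,
    triplet_margin=1.0,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=662,
    optimizer_params={"lr": 1e-08},
    weight_decay=0.01,
    max_grad_norm=1,
)
```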
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | luiz-and-robert-thesis/all-mpnet-lr5e-8-margin-1-ep-3-bs-64 | null | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:01:54+00:00 | [] | [] | TAGS
#sentence-transformers #safetensors #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
# luiz-and-robert-thesis/all-mpnet-lr5e-8-margin-1-ep-3-bs-64
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 1472 with parameters:
Loss:
'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# luiz-and-robert-thesis/all-mpnet-lr5e-8-margin-1-ep-3-bs-64\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1472 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #safetensors #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"# luiz-and-robert-thesis/all-mpnet-lr5e-8-margin-1-ep-3-bs-64\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1472 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-27-layer](https://huggingface.co/Citaman/command-r-27-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-27-layer
layer_range: [0, 26]
- model: Citaman/command-r-27-layer
layer_range: [1, 27]
merge_method: slerp
base_model: Citaman/command-r-27-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-27-layer"]} | Citaman/command-r-26-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-27-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:03:30+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-27-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-27-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-27-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-27-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-27-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either can be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo fxmarty/tiny-llama-fast-tokenizer are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
# Try the HQQ causal-LM wrapper first; fall back to the generic HQQ HF loader if that fails.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/fxmarty-tiny-llama-fast-tokenizer-HQQ-2bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/fxmarty-tiny-llama-fast-tokenizer-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("fxmarty/tiny-llama-fast-tokenizer")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model fxmarty/tiny-llama-fast-tokenizer before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/fxmarty-tiny-llama-fast-tokenizer-HQQ-2bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:03:40+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo fxmarty/tiny-llama-fast-tokenizer installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model fxmarty/tiny-llama-fast-tokenizer before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo fxmarty/tiny-llama-fast-tokenizer installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model fxmarty/tiny-llama-fast-tokenizer before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo fxmarty/tiny-llama-fast-tokenizer installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model fxmarty/tiny-llama-fast-tokenizer before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers |
# Spaetzle-v61-7b
Spaetzle-v61-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [mayflowergmbh/Wiedervereinigung-7b-dpo](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo)
* [cognitivecomputations/openchat-3.5-0106-laser](https://huggingface.co/cognitivecomputations/openchat-3.5-0106-laser)
## 🧩 Configuration
```yaml
models:
- model: LeoLM/leo-mistral-hessianai-7b-chat
# no parameters necessary for base model
- model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
density: 0.60
weight: 0.30
- model: mayflowergmbh/Wiedervereinigung-7b-dpo
parameters:
density: 0.65
weight: 0.40
- model: cognitivecomputations/openchat-3.5-0106-laser
parameters:
density: 0.6
weight: 0.3
merge_method: dare_ties
base_model: LeoLM/leo-mistral-hessianai-7b-chat
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
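To reproduce the merge, the YAML above can be passed to the mergekit CLI. A minimal sketch, assuming the configuration is saved as `config.yaml`:

```bash
pip install -q mergekit
# Runs the dare_ties merge defined in config.yaml; --cuda is optional if a GPU is available.
mergekit-yaml config.yaml ./Spaetzle-v61-7b --cuda
```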
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "cstr/Spaetzle-v61-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "FelixChao/WestSeverus-7B-DPO-v2", "mayflowergmbh/Wiedervereinigung-7b-dpo", "cognitivecomputations/openchat-3.5-0106-laser"], "base_model": ["FelixChao/WestSeverus-7B-DPO-v2", "mayflowergmbh/Wiedervereinigung-7b-dpo", "cognitivecomputations/openchat-3.5-0106-laser"]} | cstr/Spaetzle-v61-7b | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"FelixChao/WestSeverus-7B-DPO-v2",
"mayflowergmbh/Wiedervereinigung-7b-dpo",
"cognitivecomputations/openchat-3.5-0106-laser",
"conversational",
"base_model:FelixChao/WestSeverus-7B-DPO-v2",
"base_model:mayflowergmbh/Wiedervereinigung-7b-dpo",
"base_model:cognitivecomputations/openchat-3.5-0106-laser",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:07:42+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #FelixChao/WestSeverus-7B-DPO-v2 #mayflowergmbh/Wiedervereinigung-7b-dpo #cognitivecomputations/openchat-3.5-0106-laser #conversational #base_model-FelixChao/WestSeverus-7B-DPO-v2 #base_model-mayflowergmbh/Wiedervereinigung-7b-dpo #base_model-cognitivecomputations/openchat-3.5-0106-laser #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Spaetzle-v61-7b
Spaetzle-v61-7b is a merge of the following models using LazyMergekit:
* FelixChao/WestSeverus-7B-DPO-v2
* mayflowergmbh/Wiedervereinigung-7b-dpo
* cognitivecomputations/openchat-3.5-0106-laser
## Configuration
## Usage
| [
"# Spaetzle-v61-7b\n\nSpaetzle-v61-7b is a merge of the following models using LazyMergekit:\n* FelixChao/WestSeverus-7B-DPO-v2\n* mayflowergmbh/Wiedervereinigung-7b-dpo\n* cognitivecomputations/openchat-3.5-0106-laser",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #FelixChao/WestSeverus-7B-DPO-v2 #mayflowergmbh/Wiedervereinigung-7b-dpo #cognitivecomputations/openchat-3.5-0106-laser #conversational #base_model-FelixChao/WestSeverus-7B-DPO-v2 #base_model-mayflowergmbh/Wiedervereinigung-7b-dpo #base_model-cognitivecomputations/openchat-3.5-0106-laser #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Spaetzle-v61-7b\n\nSpaetzle-v61-7b is a merge of the following models using LazyMergekit:\n* FelixChao/WestSeverus-7B-DPO-v2\n* mayflowergmbh/Wiedervereinigung-7b-dpo\n* cognitivecomputations/openchat-3.5-0106-laser",
"## Configuration",
"## Usage"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Reza-Barati/distilbert-base-uncased-finetuned-for-phishing-detection-Cybersecurity
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0630
- Validation Loss: 0.0990
- Train Accuracy: 0.9679
- Train Precision: 0.9693
- Train Recall: 0.9540
- Train F1: 0.9616
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9708, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
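The serialized optimizer above corresponds roughly to the following Keras setup (a reconstruction from the listed config, not the original training script):

```python
import tensorflow as tf

# Linear decay (power=1.0) from 2e-05 to 0.0 over 9708 steps, matching the config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=9708,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```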
### Training results
| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Recall | Train F1 | Epoch |
|:----------:|:---------------:|:--------------:|:---------------:|:------------:|:--------:|:-----:|
| 0.1612 | 0.1122 | 0.9573 | 0.9776 | 0.9198 | 0.9478 | 0 |
| 0.0630 | 0.0990 | 0.9679 | 0.9693 | 0.9540 | 0.9616 | 1 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "Reza-Barati/distilbert-base-uncased-finetuned-for-phishing-detection-Cybersecurity", "results": []}]} | Reza-Barati/distilbert-base-uncased-finetuned-for-phishing-detection-cybersecurity | null | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:08:21+00:00 | [] | [] | TAGS
#transformers #tf #tensorboard #distilbert #text-classification #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| Reza-Barati/distilbert-base-uncased-finetuned-for-phishing-detection-Cybersecurity
==================================================================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.0630
* Validation Loss: 0.0990
* Train Accuracy: 0.9679
* Train Precision: 0.9693
* Train Recall: 0.9540
* Train F1: 0.9616
* Epoch: 1
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 9708, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.38.2
* TensorFlow 2.15.0
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 9708, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tf #tensorboard #distilbert #text-classification #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 9708, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-classification | setfit |
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
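A minimal sketch of these two steps with the `setfit` `Trainer` API (using placeholder examples taken from the label table below, not the actual training set) could look like:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Step 1: contrastive fine-tuning of the Sentence Transformer body,
# Step 2: fitting the LogisticRegression head; both are handled by Trainer.train().
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
train_dataset = Dataset.from_dict({
    "text": ["flawless", "stale and uninspired ."],  # a few labeled examples per class
    "label": [1, 0],
})
args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
preds = model.predict(["insightfully written , delicately performed"])
```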
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
- **Medium article:** https://medium.com/ai-in-plain-english/setfit-efficient-few-shot-learning-with-sentence-transformers-65656ccf3286
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'stale and uninspired . '</li><li>"the film 's considered approach to its subject matter is too calm and thoughtful for agitprop , and the thinness of its characterizations makes it a failure as straight drama . ' "</li><li>"that their charm does n't do a load of good "</li></ul> |
| 1 | <ul><li>"broomfield is energized by volletta wallace 's maternal fury , her fearlessness "</li><li>'flawless '</li><li>'insightfully written , delicately performed '</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8601 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("frankmorales2020/my-awesome-setfit-model")
# Run inference
preds = model("the most compelling wiseman epic of recent years . ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 11.4375 | 33 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.025 | 1 | 0.2184 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 2.6.1
- Transformers: 4.38.2
- PyTorch: 2.2.1+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "widget": [{"text": "a literate presentation that wonderfully weaves a murderous event in 1873 with murderous rage in 2002 . "}, {"text": "an entertaining , colorful , action-filled crime story with an intimate heart . "}, {"text": "drops you into a dizzying , volatile , pressure-cooker of a situation that quickly snowballs out of control , while focusing on the what much more than the why . "}, {"text": "the most compelling wiseman epic of recent years . "}, {"text": "in the end , the movie collapses on its shaky foundation despite the best efforts of director joe carnahan . "}], "pipeline_tag": "text-classification", "inference": true, "base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8600917431192661, "name": "Accuracy"}]}]}]} | frankmorales2020/my-awesome-setfit-model | null | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | null | 2024-04-16T15:11:23+00:00 | [
"2209.11055"
] | [] | TAGS
#setfit #safetensors #mpnet #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-mpnet-base-v2 #model-index #region-us
| SetFit with sentence-transformers/paraphrase-mpnet-base-v2
==========================================================
This is a SetFit model that can be used for Text Classification. This SetFit model uses sentence-transformers/paraphrase-mpnet-base-v2 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a Sentence Transformer with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
Model Details
-------------
### Model Description
* Model Type: SetFit
* Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2
* Classification head: a LogisticRegression instance
* Maximum Sequence Length: 512 tokens
* Number of Classes: 2 classes
### Model Sources
* Repository: SetFit on GitHub
* Paper: Efficient Few-Shot Learning Without Prompts
* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts
* medium article: URL
*
### Model Labels
Evaluation
----------
### Metrics
Uses
----
### Direct Use for Inference
First install the SetFit library:
Then you can load this model and run inference.
Training Details
----------------
### Training Set Metrics
### Training Hyperparameters
* batch\_size: (16, 16)
* num\_epochs: (1, 1)
* max\_steps: -1
* sampling\_strategy: oversampling
* num\_iterations: 20
* body\_learning\_rate: (2e-05, 2e-05)
* head\_learning\_rate: 2e-05
* loss: CosineSimilarityLoss
* distance\_metric: cosine\_distance
* margin: 0.25
* end\_to\_end: False
* use\_amp: False
* warmup\_proportion: 0.1
* seed: 42
* eval\_max\_steps: -1
* load\_best\_model\_at\_end: False
### Training Results
### Framework Versions
* Python: 3.10.12
* SetFit: 1.1.0.dev0
* Sentence Transformers: 2.6.1
* Transformers: 4.38.2
* PyTorch: 2.2.1+cu121
* Datasets: 2.18.0
* Tokenizers: 0.15.2
### BibTeX
| [
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 2 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts\n* medium article: URL\n*",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (1, 1)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* num\\_iterations: 20\n* body\\_learning\\_rate: (2e-05, 2e-05)\n* head\\_learning\\_rate: 2e-05\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.1.0.dev0\n* Sentence Transformers: 2.6.1\n* Transformers: 4.38.2\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.18.0\n* Tokenizers: 0.15.2",
"### BibTeX"
] | [
"TAGS\n#setfit #safetensors #mpnet #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-mpnet-base-v2 #model-index #region-us \n",
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 2 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts\n* medium article: URL\n*",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (1, 1)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* num\\_iterations: 20\n* body\\_learning\\_rate: (2e-05, 2e-05)\n* head\\_learning\\_rate: 2e-05\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.1.0.dev0\n* Sentence Transformers: 2.6.1\n* Transformers: 4.38.2\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.18.0\n* Tokenizers: 0.15.2",
"### BibTeX"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ryan04162024_ALLDATA_NEW
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the properties dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1165
- Ordinal Mae: 0.3409
- Ordinal Accuracy: 0.7827
- Na Accuracy: 0.9433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.02
- mixed_precision_training: Native AMP
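Expressed as 🤗 `TrainingArguments`, these hyperparameters correspond roughly to the sketch below (a reconstruction; the output directory and exact training script are assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ryan04162024_ALLDATA_NEW",  # assumed output directory
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1.02,
    fp16=True,  # native AMP mixed precision
)
```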
### Training results
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "ryan04162024_ALLDATA_NEW", "results": []}]} | rshrott/ryan04162024_ALLDATA_NEW | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"has_space"
] | null | 2024-04-16T15:11:52+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us #has_space
|
# ryan04162024_ALLDATA_NEW
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the properties dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1165
- Ordinal Mae: 0.3409
- Ordinal Accuracy: 0.7827
- Na Accuracy: 0.9433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.02
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# ryan04162024_ALLDATA_NEW\n\nThis model is a fine-tuned version of google/vit-base-patch16-224-in21k on the properties dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1165\n- Ordinal Mae: 0.3409\n- Ordinal Accuracy: 0.7827\n- Na Accuracy: 0.9433",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1.02\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us #has_space \n",
"# ryan04162024_ALLDATA_NEW\n\nThis model is a fine-tuned version of google/vit-base-patch16-224-in21k on the properties dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1165\n- Ordinal Mae: 0.3409\n- Ordinal Accuracy: 0.7827\n- Na Accuracy: 0.9433",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1.02\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes and stop as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo EleutherAI/pythia-410m-deduped are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
# Try the HQQ causal-LM wrapper first; fall back to the generic HQQ HF loader if that fails.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/EleutherAI-pythia-410m-deduped-HQQ-2bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/EleutherAI-pythia-410m-deduped-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m-deduped")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model EleutherAI/pythia-410m-deduped before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/EleutherAI-pythia-410m-deduped-HQQ-2bit-smashed | null | [
"transformers",
"gpt_neox",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:12:04+00:00 | [] | [] | TAGS
#transformers #gpt_neox #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo EleutherAI/pythia-410m-deduped installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model EleutherAI/pythia-410m-deduped before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo EleutherAI/pythia-410m-deduped installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model EleutherAI/pythia-410m-deduped before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #gpt_neox #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo EleutherAI/pythia-410m-deduped installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model EleutherAI/pythia-410m-deduped before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
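For readers filling in these fields, the calculator's estimate reduces to a simple product; a hedged sketch with placeholder values (all numbers below are assumptions, not measurements for this model):

```python
# Hedged sketch of the estimate behind the ML Impact calculator (Lacoste et al., 2019):
# emissions ~ accelerator power draw (kW) x hours used x grid carbon intensity (kgCO2eq/kWh).
gpu_power_kw = 0.3       # assumed value, e.g. a 300 W accelerator
hours_used = 10.0        # assumed value
carbon_intensity = 0.4   # assumed grid average, kgCO2eq per kWh
emissions_kg = gpu_power_kw * hours_used * carbon_intensity
print(f"~{emissions_kg:.2f} kg CO2eq")
```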
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | AKbuyer/layoutlm-base-uncased_classifier_custom | null | [
"transformers",
"safetensors",
"layoutlm",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:13:09+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #layoutlm #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #layoutlm #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly under your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo facebook/xglm-564M are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/facebook-xglm-564M-HQQ-2bit-smashed", device_map='auto')
except Exception:
    # fall back to the generic HQQ loader if the causal-LM wrapper cannot load this repo
    model = AutoHQQHFModel.from_quantized("PrunaAI/facebook-xglm-564M-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model facebook/xglm-564M, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/facebook-xglm-564M-HQQ-2bit-smashed | null | [
"transformers",
"xglm",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:13:57+00:00 | [] | [] | TAGS
#transformers #xglm #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo facebook/xglm-564M installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model facebook/xglm-564M before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo facebook/xglm-564M installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/xglm-564M before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #xglm #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo facebook/xglm-564M installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/xglm-564M before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2950
- F1 Score: 0.6776
- Accuracy: 0.6786
## Model description
More information needed
## Intended uses & limitations
More information needed
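In the absence of documented usage, a minimal, hedged inference sketch is shown below. The repo ids come from this card; treating the task as two-class sequence classification (suggested by the F1/accuracy metrics) is an assumption, and the base checkpoint may additionally require `trust_remote_code=True` if it ships a custom architecture.

```python
# Hedged sketch: attach this PEFT adapter to its base model for inference.
# num_labels=2 and the input format are assumptions, not documented facts.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_56M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)  # loads the adapter weights
model.eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # placeholder DNA sequence
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```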
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6232 | 16.67 | 200 | 0.6238 | 0.6755 | 0.6772 |
| 0.5254 | 33.33 | 400 | 0.6475 | 0.6711 | 0.6709 |
| 0.4768 | 50.0 | 600 | 0.7163 | 0.6747 | 0.6807 |
| 0.4297 | 66.67 | 800 | 0.7852 | 0.6653 | 0.6699 |
| 0.3926 | 83.33 | 1000 | 0.7687 | 0.6678 | 0.6678 |
| 0.3588 | 100.0 | 1200 | 0.8240 | 0.6732 | 0.6748 |
| 0.3328 | 116.67 | 1400 | 0.8116 | 0.6703 | 0.6706 |
| 0.3053 | 133.33 | 1600 | 0.8527 | 0.6679 | 0.6696 |
| 0.2838 | 150.0 | 1800 | 0.8685 | 0.6670 | 0.6664 |
| 0.2632 | 166.67 | 2000 | 0.9374 | 0.6715 | 0.6748 |
| 0.2455 | 183.33 | 2200 | 0.9754 | 0.6695 | 0.6723 |
| 0.2277 | 200.0 | 2400 | 0.9817 | 0.6670 | 0.6720 |
| 0.2146 | 216.67 | 2600 | 0.9893 | 0.6659 | 0.6657 |
| 0.2006 | 233.33 | 2800 | 1.0066 | 0.6705 | 0.6734 |
| 0.1888 | 250.0 | 3000 | 1.0542 | 0.6693 | 0.6699 |
| 0.1759 | 266.67 | 3200 | 1.0377 | 0.6745 | 0.6748 |
| 0.1694 | 283.33 | 3400 | 1.0064 | 0.6726 | 0.6727 |
| 0.1603 | 300.0 | 3600 | 1.0415 | 0.6735 | 0.6730 |
| 0.1536 | 316.67 | 3800 | 1.0917 | 0.6709 | 0.6727 |
| 0.1463 | 333.33 | 4000 | 1.1264 | 0.6745 | 0.6758 |
| 0.139 | 350.0 | 4200 | 1.0661 | 0.6658 | 0.6661 |
| 0.133 | 366.67 | 4400 | 1.1465 | 0.6721 | 0.6720 |
| 0.127 | 383.33 | 4600 | 1.1429 | 0.6740 | 0.6758 |
| 0.123 | 400.0 | 4800 | 1.1352 | 0.6770 | 0.6775 |
| 0.119 | 416.67 | 5000 | 1.1215 | 0.6785 | 0.6786 |
| 0.1134 | 433.33 | 5200 | 1.1705 | 0.6814 | 0.6824 |
| 0.109 | 450.0 | 5400 | 1.1407 | 0.6768 | 0.6765 |
| 0.1052 | 466.67 | 5600 | 1.2160 | 0.6752 | 0.6782 |
| 0.1028 | 483.33 | 5800 | 1.1816 | 0.6784 | 0.6810 |
| 0.0983 | 500.0 | 6000 | 1.2036 | 0.6805 | 0.6820 |
| 0.0945 | 516.67 | 6200 | 1.2191 | 0.6806 | 0.6820 |
| 0.0932 | 533.33 | 6400 | 1.2072 | 0.6789 | 0.6807 |
| 0.0898 | 550.0 | 6600 | 1.2167 | 0.6849 | 0.6855 |
| 0.0883 | 566.67 | 6800 | 1.2070 | 0.6831 | 0.6848 |
| 0.0863 | 583.33 | 7000 | 1.2688 | 0.6827 | 0.6852 |
| 0.0844 | 600.0 | 7200 | 1.2783 | 0.6829 | 0.6845 |
| 0.0808 | 616.67 | 7400 | 1.2535 | 0.6814 | 0.6831 |
| 0.0795 | 633.33 | 7600 | 1.2402 | 0.6844 | 0.6859 |
| 0.0779 | 650.0 | 7800 | 1.2869 | 0.6872 | 0.6886 |
| 0.0765 | 666.67 | 8000 | 1.2739 | 0.6858 | 0.6876 |
| 0.0745 | 683.33 | 8200 | 1.2936 | 0.6918 | 0.6931 |
| 0.0736 | 700.0 | 8400 | 1.2456 | 0.6865 | 0.6869 |
| 0.0725 | 716.67 | 8600 | 1.2791 | 0.6911 | 0.6935 |
| 0.0716 | 733.33 | 8800 | 1.2987 | 0.6843 | 0.6855 |
| 0.0705 | 750.0 | 9000 | 1.2853 | 0.6884 | 0.6893 |
| 0.0687 | 766.67 | 9200 | 1.3006 | 0.6897 | 0.6907 |
| 0.0681 | 783.33 | 9400 | 1.2745 | 0.6914 | 0.6928 |
| 0.0668 | 800.0 | 9600 | 1.2815 | 0.6919 | 0.6931 |
| 0.0672 | 816.67 | 9800 | 1.3004 | 0.6930 | 0.6945 |
| 0.0663 | 833.33 | 10000 | 1.3010 | 0.6942 | 0.6956 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-16T15:15:40+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_16384\_512\_56M-L32\_all
=====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2950
* F1 Score: 0.6776
* Accuracy: 0.6786
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
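Until the official snippet is added, the following is a minimal, hedged sketch. The repo id is taken from this card's metadata; loading it as a seq2seq (T5-family) model matches the repo tags, but the query/document prompt format implied by the "rerank" name is a guess.

```python
# Hedged sketch: generic loading and generation for this T5-family checkpoint.
# The reranking prompt format is an assumption; adapt it to the real format.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "thusinh1969/flant5xl-rerank-1APRIL2024-2M"  # from the card metadata
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

prompt = "Query: example search query Document: example candidate passage Relevant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```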
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | thusinh1969/flant5xl-rerank-1APRIL2024-2M | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:16:35+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SWIN-AI-Image-Detector
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0461
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
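A minimal, hedged inference sketch is shown below. The repo id comes from this card's metadata; the interpretation of the labels (e.g. AI-generated vs. real images, suggested only by the model name) is an assumption.

```python
# Hedged sketch: single-image inference with the image-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="mmanikanta/SWIN-AI-Image-Detector",  # from the card metadata
)
predictions = classifier("path/to/image.jpg")  # local path or URL
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```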
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
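For reference, a hedged sketch of how the values above could be expressed as `TrainingArguments` (this mirrors the listed hyperparameters only; the output directory name and the rest of the training script are assumptions):

```python
# Hedged sketch: the listed hyperparameters as transformers TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-ai-image-detector",  # assumed name, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 x 4 = total train batch size 128
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```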
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2411 | 1.0 | 547 | 0.2430 | 0.9013 |
| 0.2011 | 2.0 | 1094 | 0.1053 | 0.9593 |
| 0.1722 | 3.0 | 1641 | 0.0825 | 0.9671 |
| 0.1424 | 4.0 | 2188 | 0.0851 | 0.9686 |
| 0.1244 | 5.0 | 2735 | 0.0714 | 0.9733 |
| 0.1089 | 6.0 | 3282 | 0.0712 | 0.9734 |
| 0.1047 | 7.0 | 3829 | 0.0461 | 0.9833 |
| 0.1079 | 8.0 | 4376 | 0.0454 | 0.9829 |
| 0.0805 | 9.0 | 4923 | 0.0577 | 0.9790 |
| 0.0778 | 10.0 | 5470 | 0.0539 | 0.9807 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "SWIN-AI-Image-Detector", "results": []}]} | mmanikanta/SWIN-AI-Image-Detector | null | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:18:06+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #swin #image-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| SWIN-AI-Image-Detector
======================
This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0461
* Accuracy: 0.9833
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.30.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.30.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #swin #image-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.30.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly under your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo hf-tiny-model-private/tiny-random-BloomForCausalLM are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/hf-tiny-model-private-tiny-random-BloomForCausalLM-HQQ-2bit-smashed", device_map='auto')
except Exception:
    # fall back to the generic HQQ loader if the causal-LM wrapper cannot load this repo
    model = AutoHQQHFModel.from_quantized("PrunaAI/hf-tiny-model-private-tiny-random-BloomForCausalLM-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("hf-tiny-model-private/tiny-random-BloomForCausalLM")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model hf-tiny-model-private/tiny-random-BloomForCausalLM, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/hf-tiny-model-private-tiny-random-BloomForCausalLM-HQQ-2bit-smashed | null | [
"transformers",
"bloom",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:18:07+00:00 | [] | [] | TAGS
#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo hf-tiny-model-private/tiny-random-BloomForCausalLM installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model hf-tiny-model-private/tiny-random-BloomForCausalLM before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo hf-tiny-model-private/tiny-random-BloomForCausalLM installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model hf-tiny-model-private/tiny-random-BloomForCausalLM before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo hf-tiny-model-private/tiny-random-BloomForCausalLM installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model hf-tiny-model-private/tiny-random-BloomForCausalLM before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
feature-extraction | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_bge_ver15
This model is a fine-tuned version of [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
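Although usage is not yet documented here, the base model BAAI/bge-m3 is a sentence-embedding encoder, so a minimal sketch of extracting embeddings from this fine-tune might look like the following (the repository id and CLS pooling are assumptions carried over from the base model's conventions, not details confirmed by this card):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumption: the fine-tuned checkpoint keeps the BGE-M3 architecture and tokenizer.
model_id = "comet24082002/finetuned_bge_ver15"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id).eval()

sentences = ["What is BGE-M3?", "BGE-M3 is a multilingual embedding model."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    # CLS pooling + L2 normalisation, as commonly used with BGE-style dense retrieval.
    embeddings = model(**batch).last_hidden_state[:, 0]
    embeddings = torch.nn.functional.normalize(embeddings, dim=-1)

print((embeddings[0] @ embeddings[1]).item())  # cosine similarity of the two sentences
```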
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
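For readers who want to reproduce the setup, the hyperparameters above map onto 🤗 `TrainingArguments` roughly as sketched below; the training script and dataset themselves are not documented in this card, so treat this as an approximation:

```python
from transformers import TrainingArguments

# Sketch only: per-device batch size 32 on 2 GPUs gives the listed total train batch size of 64.
args = TrainingArguments(
    output_dir="finetuned_bge_ver15",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    num_train_epochs=10.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```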
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "BAAI/bge-m3", "model-index": [{"name": "finetuned_bge_ver15", "results": []}]} | comet24082002/finetuned_bge_ver15 | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"feature-extraction",
"generated_from_trainer",
"base_model:BAAI/bge-m3",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:18:55+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us
|
# finetuned_bge_ver15
This model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# finetuned_bge_ver15\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us \n",
"# finetuned_bge_ver15\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# WizardLM-2-8x22B - EXL2 6.0bpw
This is a 6.0bpw EXL2 quant of [microsoft/WizardLM-2-8x22B](https://huggingface.co/microsoft/WizardLM-2-8x22B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
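Outside of Text Generation WebUI, the quant can also be loaded with the exllamav2 Python API. The sketch below is only a rough illustration (local path, GPU split, and sampling settings are placeholders, not tested settings for this checkpoint):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/WizardLM-2-8x22B_exl2_6.0bpw"  # placeholder path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # lazy cache so load_autosplit can place layers
model.load_autosplit(cache)                # split the model across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7                 # placeholder sampling settings

print(generator.generate_simple("Hello, my name is", settings, 50))
```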
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 7.0 | 4.5859 |
| 6.0 | 4.6252 |
| 5.5 | 4.6493 |
| 5.0 | 4.6937 |
| 4.5 | 4.8029 |
| 4.0 | 4.9372 |
| 3.5 | 5.1336 |
| 3.25 | 5.3636 |
| 3.0 | 5.5468 |
| 2.75 | 5.8255 |
| 2.5 | 6.3362 |
| 2.25 | 7.7763 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
DATA_SET=/root/wikitext/wikitext-2-v1.parquet
# Set the model name and bit size
MODEL_NAME="WizardLM-2-8x22B"
BIT_PRECISIONS=(6.0 5.5 5.0 4.5 4.0 3.5 3.25 3.0 2.75 2.5 2.25)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
LOCAL_FOLDER="/root/models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
REMOTE_FOLDER="Dracones/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ ! -d "$LOCAL_FOLDER" ]; then
huggingface-cli download --local-dir-use-symlinks=False --local-dir "${LOCAL_FOLDER}" "${REMOTE_FOLDER}" >> /root/download.log 2>&1
fi
output=$(python test_inference.py -m "$LOCAL_FOLDER" -gs 40,40,40,40 -ed "$DATA_SET")
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
# rm -rf "${LOCAL_FOLDER}"
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="WizardLM-2-8x22B"
# Define variables
MODEL_DIR="/mnt/storage/models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["exl2"], "base_model": "microsoft/WizardLM-2-8x22B"} | Dracones/WizardLM-2-8x22B_exl2_6.0bpw | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"exl2",
"en",
"base_model:microsoft/WizardLM-2-8x22B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"6-bit",
"region:us"
] | null | 2024-04-16T15:22:45+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us
| WizardLM-2-8x22B - EXL2 6.0bpw
==============================
This is a 6.0bpw EXL2 quant of microsoft/WizardLM-2-8x22B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-llama
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6966
- Accuracy: 0.8195
- Precision: 0.8222
- Recall: 0.8195
- Precision Macro: 0.7955
- Recall Macro: 0.7536
- Macro Fpr: 0.0148
- Weighted Fpr: 0.0141
- Weighted Specificity: 0.9765
- Macro Specificity: 0.9873
- Weighted Sensitivity: 0.8327
- Macro Sensitivity: 0.7536
- F1 Micro: 0.8327
- F1 Macro: 0.7609
- F1 Weighted: 0.8291
## Model description
More information needed
## Intended uses & limitations
More information needed
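The repository name suggests this checkpoint is a LoRA adapter on top of TinyLlama-1.1B-Chat. A hedged sketch of loading it for sequence classification is given below; the label count is not documented in this card and must be set to whatever was used during fine-tuning:

```python
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

adapter_id = "xshubhamx/tiny-llama-lora"  # assumption: this repo holds the adapter weights
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

num_labels = 15  # assumption: replace with the number of classes used in training
model = AutoPeftModelForSequenceClassification.from_pretrained(
    adapter_id, num_labels=num_labels, torch_dtype=torch.float16
).eval()
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("Example text to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```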
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Precision Macro | Recall Macro | Macro Fpr | Weighted Fpr | Weighted Specificity | Macro Specificity | Weighted Sensitivity | Macro Sensitivity | F1 Micro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:---------------:|:------------:|:---------:|:------------:|:--------------------:|:-----------------:|:--------------------:|:-----------------:|:--------:|:--------:|:-----------:|
| 1.0444 | 1.0 | 642 | 0.5968 | 0.8056 | 0.8050 | 0.8056 | 0.7122 | 0.6995 | 0.0175 | 0.0169 | 0.9730 | 0.9852 | 0.8056 | 0.6995 | 0.8056 | 0.6986 | 0.8014 |
| 0.4788 | 2.0 | 1284 | 0.6966 | 0.8195 | 0.8222 | 0.8195 | 0.8092 | 0.7825 | 0.0161 | 0.0155 | 0.9755 | 0.9863 | 0.8195 | 0.7825 | 0.8195 | 0.7849 | 0.8172 |
| 0.3354 | 3.0 | 1926 | 0.8046 | 0.8327 | 0.8276 | 0.8327 | 0.8058 | 0.7582 | 0.0148 | 0.0141 | 0.9758 | 0.9872 | 0.8327 | 0.7582 | 0.8327 | 0.7742 | 0.8282 |
| 0.0571 | 4.0 | 2569 | 1.1143 | 0.8265 | 0.8312 | 0.8265 | 0.7904 | 0.7763 | 0.0152 | 0.0148 | 0.9772 | 0.9869 | 0.8265 | 0.7763 | 0.8265 | 0.7690 | 0.8262 |
| 0.0187 | 5.0 | 3211 | 1.1104 | 0.8319 | 0.8316 | 0.8319 | 0.7745 | 0.7724 | 0.0149 | 0.0142 | 0.9770 | 0.9873 | 0.8319 | 0.7724 | 0.8319 | 0.7638 | 0.8303 |
| 0.0071 | 6.0 | 3853 | 1.1445 | 0.8242 | 0.8210 | 0.8242 | 0.7684 | 0.7384 | 0.0157 | 0.0150 | 0.9755 | 0.9866 | 0.8242 | 0.7384 | 0.8242 | 0.7451 | 0.8209 |
| 0.0002 | 7.0 | 4495 | 1.2032 | 0.8327 | 0.8302 | 0.8327 | 0.7985 | 0.7529 | 0.0148 | 0.0141 | 0.9765 | 0.9873 | 0.8327 | 0.7529 | 0.8327 | 0.7617 | 0.8293 |
| 0.0028 | 8.0 | 5138 | 1.1918 | 0.8257 | 0.8226 | 0.8257 | 0.7738 | 0.7493 | 0.0155 | 0.0149 | 0.9756 | 0.9868 | 0.8257 | 0.7493 | 0.8257 | 0.7552 | 0.8229 |
| 0.0 | 9.0 | 5780 | 1.2181 | 0.8311 | 0.8286 | 0.8311 | 0.7935 | 0.7522 | 0.0150 | 0.0143 | 0.9764 | 0.9872 | 0.8311 | 0.7522 | 0.8311 | 0.7592 | 0.8276 |
| 0.0018 | 10.0 | 6420 | 1.2265 | 0.8327 | 0.8301 | 0.8327 | 0.7955 | 0.7536 | 0.0148 | 0.0141 | 0.9765 | 0.9873 | 0.8327 | 0.7536 | 0.8327 | 0.7609 | 0.8291 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0", "model-index": [{"name": "tiny-llama", "results": []}]} | xshubhamx/tiny-llama-lora | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T15:23:01+00:00 | [] | [] | TAGS
#tensorboard #safetensors #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us
| tiny-llama
==========
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6966
* Accuracy: 0.8195
* Precision: 0.8222
* Recall: 0.8195
* Precision Macro: 0.7955
* Recall Macro: 0.7536
* Macro Fpr: 0.0148
* Weighted Fpr: 0.0141
* Weighted Specificity: 0.9765
* Macro Specificity: 0.9873
* Weighted Sensitivity: 0.8327
* Macro Sensitivity: 0.7536
* F1 Micro: 0.8327
* F1 Macro: 0.7609
* F1 Weighted: 0.8291
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.35.2
* Pytorch 2.1.0+cu121
* Datasets 2.18.0
* Tokenizers 0.15.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] | [
"TAGS\n#tensorboard #safetensors #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
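In the absence of author-provided instructions, a minimal sketch for a Mistral-style chat checkpoint (the repository tags suggest 4-bit weights) might look like the following; the prompt format is an assumption and should be checked against how the model was actually fine-tuned:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "udit-k/MentalHealth-Mistral7B"  # repository id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
).eval()

prompt = "I have been feeling anxious lately. What can I do?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```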
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | udit-k/MentalHealth-Mistral7B | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-16T15:23:52+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers, tokenizers |
# Model Card for Model ID
This is a tokenizer that was trained on a large corpus in Azerbaijani.
## Model Details
### Model Description
This is a tokenizer trained on a text dataset of nearly 6 GB in Azerbaijani.
I trained a byte-fallback BPE tokenizer on this dataset, using parameters similar to those of Mistral's tokenizer. As in SentencePiece, "_" marks the beginning of pieces in the sub-words.
- **Developed by:** Javidan Aslanli
- **Language(s) (NLP):** Azerbaijani
- **License:** Apache license 2.0
## Bias, Risks, and Limitations
It could be improved further by training on a larger corpus, but in general it works very well.
## Training Details
### Training Data
The training data is a large Azerbaijani corpus of nearly 6 GB.
### Training:
This is a byte-fallback BPE tokenizer. The main settings were as follows (a rough code sketch follows the list):
- The normalizers are the same as those of Mistral's tokenizer.
- A Metaspace pre-tokenizer was used before training the BPE model.
- Training used the byte-fallback trick, and the other parameters are the same as Mistral's.
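As a rough illustration of this recipe, training a comparable byte-fallback BPE tokenizer with the 🤗 tokenizers library might look like the sketch below (corpus path, vocabulary size, and special tokens are assumptions; the author's exact settings are not published here):

```python
from tokenizers import Tokenizer, decoders, models, pre_tokenizers, trainers

# Byte-fallback BPE, roughly mirroring the Mistral/SentencePiece-style setup described above.
tokenizer = Tokenizer(models.BPE(byte_fallback=True))
tokenizer.pre_tokenizer = pre_tokenizers.Metaspace()  # "▁" marks the beginning of pieces
tokenizer.decoder = decoders.Metaspace()

trainer = trainers.BpeTrainer(
    vocab_size=32_000,  # assumption: the card does not state the vocabulary size
    special_tokens=["<unk>", "<s>", "</s>"],
)
# "az_corpus.txt" is a placeholder for the ~6 GB Azerbaijani corpus described above.
tokenizer.train(["az_corpus.txt"], trainer)
# For byte fallback to work at inference time, the 256 <0x00>..<0xFF> byte pieces must
# also be present in the vocabulary (as in Mistral's tokenizer); add them if the trainer
# did not include them.
tokenizer.save("az-tokenizer.json")
```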
| {"language": ["az"], "license": "apache-2.0", "library_name": "transformers, tokenizers"} | javidanaslanli/az-tokenizer | null | [
"transformers, tokenizers",
"az",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T15:25:42+00:00 | [] | [
"az"
] | TAGS
#transformers, tokenizers #az #license-apache-2.0 #region-us
|
# Model Card for Model ID
This is a tokenizer that trained on a large corpus in Azerbaijani
## Model Details
### Model Description
This is a tokenizer trained on nearly 6gb text dataset in Azerbaijani.
I trained Byte-fallback BPE tokenizer on this dataset and
I used the similar parameters that used in tokenizer of Mistral. Like sentencepiece "_" used for the beginning of the pieces in the sub-words.
- Developed by: Javidan Aslanli
- Language(s) (NLP): Azerbaijani
- License: Apache license 2.0
## Bias, Risks, and Limitations
It can be developed with training on larger corpus. But generally it works very well.
## Training Details
### Training Data
Training data is a large corpus in Azerbaijani which its nearly 6gb.
### Training:
This is a Byte-fallback BPE tokenizer. What I used in tokenizer is:
- Normalizers are same with the tokenizer of Mistral's normalizers
- I used Meta-Space pre-tokenizer before training BPE.
- For training I used Byte-fallback trick and other parameters are same with Mistral's.
| [
"# Model Card for Model ID\n\n This is a tokenizer that trained on a large corpus in Azerbaijani",
"## Model Details",
"### Model Description\n\nThis is a tokenizer trained on nearly 6gb text dataset in Azerbaijani. \nI trained Byte-fallback BPE tokenizer on this dataset and \nI used the similar parameters that used in tokenizer of Mistral. Like sentencepiece \"_\" used for the beginning of the pieces in the sub-words.\n\n\n\n- Developed by: Javidan Aslanli\n- Language(s) (NLP): Azerbaijani\n- License: Apache license 2.0",
"## Bias, Risks, and Limitations\n\nIt can be developed with training on larger corpus. But generally it works very well.",
"## Training Details",
"### Training Data\n\n Training data is a large corpus in Azerbaijani which its nearly 6gb.",
"### Training:\nThis is a Byte-fallback BPE tokenizer. What I used in tokenizer is:\n - Normalizers are same with the tokenizer of Mistral's normalizers\n - I used Meta-Space pre-tokenizer before training BPE.\n - For training I used Byte-fallback trick and other parameters are same with Mistral's."
] | [
"TAGS\n#transformers, tokenizers #az #license-apache-2.0 #region-us \n",
"# Model Card for Model ID\n\n This is a tokenizer that trained on a large corpus in Azerbaijani",
"## Model Details",
"### Model Description\n\nThis is a tokenizer trained on nearly 6gb text dataset in Azerbaijani. \nI trained Byte-fallback BPE tokenizer on this dataset and \nI used the similar parameters that used in tokenizer of Mistral. Like sentencepiece \"_\" used for the beginning of the pieces in the sub-words.\n\n\n\n- Developed by: Javidan Aslanli\n- Language(s) (NLP): Azerbaijani\n- License: Apache license 2.0",
"## Bias, Risks, and Limitations\n\nIt can be developed with training on larger corpus. But generally it works very well.",
"## Training Details",
"### Training Data\n\n Training data is a large corpus in Azerbaijani which its nearly 6gb.",
"### Training:\nThis is a Byte-fallback BPE tokenizer. What I used in tokenizer is:\n - Normalizers are same with the tokenizer of Mistral's normalizers\n - I used Meta-Space pre-tokenizer before training BPE.\n - For training I used Byte-fallback trick and other parameters are same with Mistral's."
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
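No usage code is provided yet; since the checkpoint is a LayoutLM sequence classifier, a hedged sketch of running it on OCR words with their bounding boxes (normalized to a 0-1000 scale) might look like the code below. The words, boxes, and labels here are purely illustrative, and the tokenizer is assumed to be available in the same repository:

```python
import torch
from transformers import AutoTokenizer, LayoutLMForSequenceClassification

model_id = "NurbekKwarto/layoutlm-base-uncased_classifier_kwarto"  # repository id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LayoutLMForSequenceClassification.from_pretrained(model_id).eval()

# Illustrative input: OCR words plus their boxes normalized to the 0-1000 range.
words = ["Invoice", "number", "12345"]
boxes = [[60, 50, 200, 80], [210, 50, 330, 80], [340, 50, 420, 80]]

encoding = tokenizer(" ".join(words), return_tensors="pt")
# LayoutLM expects one box per token, so each word's box is repeated for its sub-tokens.
token_boxes = []
for word, box in zip(words, boxes):
    token_boxes.extend([box] * len(tokenizer.tokenize(word)))
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]  # [CLS] and [SEP]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits
print(model.config.id2label[logits.argmax(-1).item()])
```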
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | NurbekKwarto/layoutlm-base-uncased_classifier_kwarto | null | [
"transformers",
"safetensors",
"layoutlm",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:26:02+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #layoutlm #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #layoutlm #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes and stop as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo EleutherAI/pythia-2.8b-deduped are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/EleutherAI-pythia-2.8b-deduped-HQQ-2bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/EleutherAI-pythia-2.8b-deduped-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b-deduped")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model EleutherAI/pythia-2.8b-deduped before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/EleutherAI-pythia-2.8b-deduped-HQQ-2bit-smashed | null | [
"transformers",
"gpt_neox",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:26:02+00:00 | [] | [] | TAGS
#transformers #gpt_neox #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo EleutherAI/pythia-2.8b-deduped installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info are in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model EleutherAI/pythia-2.8b-deduped before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo EleutherAI/pythia-2.8b-deduped installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model EleutherAI/pythia-2.8b-deduped before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #gpt_neox #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo EleutherAI/pythia-2.8b-deduped installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model EleutherAI/pythia-2.8b-deduped before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
  This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the environment API; older setups may use `import gym` instead

# `load_from_hub` is the helper from the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="rwr20/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
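
With the environment created, the Q-table stored in the pickled dict can be used to act greedily. A minimal evaluation sketch, assuming the Gymnasium reset/step API and that the table is exposed under a `qtable` key (as in the Deep RL course utilities — check `model.keys()` if unsure):

```python
import numpy as np

def evaluate(model, n_episodes=10):
    env = gym.make(model["env_id"])
    qtable = np.array(model["qtable"])  # assumed key; verify with model.keys()
    rewards = []
    for _ in range(n_episodes):
        state, _ = env.reset()
        done, total = False, 0.0
        while not done:
            action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
            state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            total += reward
        rewards.append(total)
    return sum(rewards) / len(rewards)
```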
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | rwr20/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-16T15:26:41+00:00 | [] | [] | TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
 This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
| [
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] | [
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes and stop as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo Cheng98/llama-160m are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    # Preferred path: load the quantized checkpoint through the HQQ HF engine
    model = HQQModelForCausalLM.from_quantized("PrunaAI/Cheng98-llama-160m-HQQ-2bit-smashed", device_map='auto')
except:
    # Fallback loader for hqq versions without the engine wrapper
    model = AutoHQQHFModel.from_quantized("PrunaAI/Cheng98-llama-160m-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("Cheng98/llama-160m")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Cheng98/llama-160m before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/Cheng98-llama-160m-HQQ-2bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:26:58+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend running them directly in the use-case conditions to know whether the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes and stop as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo Cheng98/llama-160m are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info is in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Cheng98/llama-160m before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo Cheng98/llama-160m installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model Cheng98/llama-160m before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo Cheng98/llama-160m installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model Cheng98/llama-160m before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-26-layer](https://huggingface.co/Citaman/command-r-26-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Citaman/command-r-26-layer
        layer_range: [0, 25]
      - model: Citaman/command-r-26-layer
        layer_range: [1, 26]
merge_method: slerp
base_model: Citaman/command-r-26-layer
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
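
The uploaded merge can be loaded like any other causal LM. A minimal inference sketch with 🤗 Transformers (the prompt is illustrative; `add_generation_prompt=True` assumes the tokenizer ships a chat template, as Command-R tokenizers normally do):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Citaman/command-r-25-layer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "In one sentence, what does a SLERP merge do?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```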
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-26-layer"]} | Citaman/command-r-25-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-26-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:27:44+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-26-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-26-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-26-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-26-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-26-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# healthinsurance_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1759
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 5.4556 |
| No log | 2.0 | 4 | 5.2598 |
| No log | 3.0 | 6 | 5.1759 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
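
For inference, the fine-tuned checkpoint can be used with the `question-answering` pipeline. A minimal sketch (the question/context pair below is illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="vraman54/healthinsurance_qa_model")
result = qa(
    question="What is a deductible?",
    context="A deductible is the amount you pay for covered health care services before your insurance plan starts to pay.",
)
print(result["answer"], result["score"])
```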
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "healthinsurance_qa_model", "results": []}]} | vraman54/healthinsurance_qa_model | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:27:45+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
| healthinsurance\_qa\_model
==========================
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 5.1759
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cpu
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-vietnamese-correction
This model is a fine-tuned version of [bmd1905/vietnamese-correction](https://huggingface.co/bmd1905/vietnamese-correction) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7591
- Cer: 0.1234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
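
For inference, the checkpoint can be used through the `text2text-generation` pipeline. A minimal sketch (the input sentence is illustrative):

```python
from transformers import pipeline

corrector = pipeline("text2text-generation", model="Zero914/spell-check")
output = corrector("toi dang hoc tieng viet o ha noi", max_length=128)
print(output[0]["generated_text"])
```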
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "bmd1905/vietnamese-correction", "model-index": [{"name": "my-vietnamese-correction", "results": []}]} | Zero914/spell-check | null | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:bmd1905/vietnamese-correction",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:27:54+00:00 | [] | [] | TAGS
#transformers #safetensors #mbart #text2text-generation #generated_from_trainer #base_model-bmd1905/vietnamese-correction #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# my-vietnamese-correction
This model is a fine-tuned version of bmd1905/vietnamese-correction on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7591
- Cer: 0.1234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# my-vietnamese-correction\n\nThis model is a fine-tuned version of bmd1905/vietnamese-correction on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7591\n- Cer: 0.1234",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mbart #text2text-generation #generated_from_trainer #base_model-bmd1905/vietnamese-correction #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# my-vietnamese-correction\n\nThis model is a fine-tuned version of bmd1905/vietnamese-correction on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7591\n- Cer: 0.1234",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
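
A minimal loading sketch. If this repository holds only LoRA adapters (common for Unsloth fine-tunes), attach them to the 4-bit base with PEFT as below; if it contains merged weights, load the repository directly with `AutoModelForCausalLM` instead. The prompt is illustrative and `bitsandbytes` is required for the 4-bit base:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-bnb-4bit"  # 4-bit base; needs bitsandbytes
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "codesagar/prompt-guard-classification-v7")
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = "Classify the following prompt as safe or unsafe: ..."  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```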
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | codesagar/prompt-guard-classification-v7 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:27:54+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
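
The metadata lists `mistralai/Mistral-7B-Instruct-v0.2` as the base model and PEFT as the library. A minimal loading sketch under that assumption (the adapter repo id comes from this card's metadata; the example query is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "vkdhiman93/faq_Instruct")

messages = [{"role": "user", "content": "What is your refund policy?"}]  # illustrative FAQ-style query
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```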
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"} | vkdhiman93/faq_Instruct | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-04-16T15:34:36+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DialogLED-base-16384-tms-finetuned-10epochs
This model is a fine-tuned version of [MingZhong/DialogLED-base-16384](https://huggingface.co/MingZhong/DialogLED-base-16384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.735 | 1.69 | 500 | 2.6474 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
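
For inference, the checkpoint is an LED-based sequence-to-sequence model for long dialogue summarization. A minimal sketch (the dialogue snippet is illustrative; real transcripts can run up to 16,384 tokens):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "StDestiny/DialogLED-base-16384-tms-finetuned-10epochs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

dialogue = "Agent: How can I help you today?\nCustomer: My internet has been down since this morning."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True, max_length=16384)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```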
| {"tags": ["generated_from_trainer"], "base_model": "MingZhong/DialogLED-base-16384", "model-index": [{"name": "DialogLED-base-16384-tms-finetuned-10epochs", "results": []}]} | StDestiny/DialogLED-base-16384-tms-finetuned-10epochs | null | [
"transformers",
"tensorboard",
"safetensors",
"led",
"text2text-generation",
"generated_from_trainer",
"base_model:MingZhong/DialogLED-base-16384",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:35:07+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #led #text2text-generation #generated_from_trainer #base_model-MingZhong/DialogLED-base-16384 #autotrain_compatible #endpoints_compatible #region-us
| DialogLED-base-16384-tms-finetuned-10epochs
===========================================
This model is a fine-tuned version of MingZhong/DialogLED-base-16384 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.6474
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #led #text2text-generation #generated_from_trainer #base_model-MingZhong/DialogLED-base-16384 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6684
- F1 Score: 0.5995
- Accuracy: 0.6039
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6761 | 15.38 | 200 | 0.6788 | 0.5879 | 0.5919 |
| 0.5962 | 30.77 | 400 | 0.7458 | 0.5803 | 0.5802 |
| 0.5426 | 46.15 | 600 | 0.7792 | 0.5755 | 0.5754 |
| 0.4982 | 61.54 | 800 | 0.8422 | 0.5840 | 0.5859 |
| 0.4621 | 76.92 | 1000 | 0.8761 | 0.5766 | 0.5799 |
| 0.4339 | 92.31 | 1200 | 0.9602 | 0.5738 | 0.5739 |
| 0.4129 | 107.69 | 1400 | 0.9920 | 0.5870 | 0.5893 |
| 0.39 | 123.08 | 1600 | 0.9777 | 0.5780 | 0.5865 |
| 0.3676 | 138.46 | 1800 | 1.0131 | 0.5851 | 0.5890 |
| 0.3549 | 153.85 | 2000 | 0.9978 | 0.5792 | 0.5821 |
| 0.333 | 169.23 | 2200 | 1.0241 | 0.5804 | 0.5833 |
| 0.3163 | 184.62 | 2400 | 1.0588 | 0.5839 | 0.5849 |
| 0.2999 | 200.0 | 2600 | 1.0728 | 0.5785 | 0.5789 |
| 0.2872 | 215.38 | 2800 | 1.1023 | 0.5791 | 0.5808 |
| 0.274 | 230.77 | 3000 | 1.1682 | 0.5712 | 0.5789 |
| 0.262 | 246.15 | 3200 | 1.1547 | 0.5761 | 0.5792 |
| 0.2492 | 261.54 | 3400 | 1.1841 | 0.5799 | 0.5868 |
| 0.2386 | 276.92 | 3600 | 1.1781 | 0.5760 | 0.5777 |
| 0.2287 | 292.31 | 3800 | 1.2014 | 0.5792 | 0.5833 |
| 0.2214 | 307.69 | 4000 | 1.2274 | 0.5835 | 0.5859 |
| 0.2126 | 323.08 | 4200 | 1.1768 | 0.5769 | 0.5789 |
| 0.2028 | 338.46 | 4400 | 1.2579 | 0.5782 | 0.5818 |
| 0.1969 | 353.85 | 4600 | 1.2400 | 0.5733 | 0.5742 |
| 0.1896 | 369.23 | 4800 | 1.2533 | 0.5730 | 0.5745 |
| 0.184 | 384.62 | 5000 | 1.2483 | 0.5829 | 0.5830 |
| 0.1768 | 400.0 | 5200 | 1.3081 | 0.5772 | 0.5789 |
| 0.1715 | 415.38 | 5400 | 1.2958 | 0.5745 | 0.5751 |
| 0.1661 | 430.77 | 5600 | 1.3166 | 0.5800 | 0.5814 |
| 0.1615 | 446.15 | 5800 | 1.3131 | 0.5789 | 0.5789 |
| 0.1572 | 461.54 | 6000 | 1.3796 | 0.5763 | 0.5814 |
| 0.1524 | 476.92 | 6200 | 1.3631 | 0.5753 | 0.5767 |
| 0.1477 | 492.31 | 6400 | 1.4063 | 0.5782 | 0.5818 |
| 0.145 | 507.69 | 6600 | 1.3589 | 0.5779 | 0.5792 |
| 0.1429 | 523.08 | 6800 | 1.4140 | 0.5828 | 0.5868 |
| 0.1393 | 538.46 | 7000 | 1.4078 | 0.5756 | 0.5761 |
| 0.1362 | 553.85 | 7200 | 1.3951 | 0.5827 | 0.5862 |
| 0.1327 | 569.23 | 7400 | 1.3995 | 0.5746 | 0.5754 |
| 0.1298 | 584.62 | 7600 | 1.4096 | 0.5734 | 0.5742 |
| 0.1279 | 600.0 | 7800 | 1.4686 | 0.5778 | 0.5799 |
| 0.1256 | 615.38 | 8000 | 1.4455 | 0.5748 | 0.5764 |
| 0.1227 | 630.77 | 8200 | 1.4572 | 0.5755 | 0.5777 |
| 0.1212 | 646.15 | 8400 | 1.4881 | 0.5785 | 0.5814 |
| 0.1201 | 661.54 | 8600 | 1.4542 | 0.5799 | 0.5827 |
| 0.1177 | 676.92 | 8800 | 1.4841 | 0.5779 | 0.5818 |
| 0.116 | 692.31 | 9000 | 1.4624 | 0.5773 | 0.5789 |
| 0.1159 | 707.69 | 9200 | 1.4756 | 0.5768 | 0.5792 |
| 0.1147 | 723.08 | 9400 | 1.4693 | 0.5800 | 0.5821 |
| 0.1142 | 738.46 | 9600 | 1.4618 | 0.5789 | 0.5805 |
| 0.113 | 753.85 | 9800 | 1.4715 | 0.5789 | 0.5808 |
| 0.1125 | 769.23 | 10000 | 1.4660 | 0.5763 | 0.5783 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-16T15:36:12+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_16384\_512\_56M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6684
* F1 Score: 0.5995
* Accuracy: 0.6039
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
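
The repository tags indicate a StableLM-architecture causal LM, but the card gives no task details. A generic loading sketch under that assumption (older `transformers` releases may additionally need `trust_remote_code=True`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cilantro9246/6a2buhd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```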
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cilantro9246/6a2buhd | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:38:55+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
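While detailed usage notes are still to be added, a minimal inference sketch is given below. This is a hedged illustration, not an official snippet: the repository id comes from this card's metadata, the image path is a placeholder, and the label names depend on the imagefolder dataset used for fine-tuning.

```python
from transformers import pipeline

# Hedged sketch: "example_flower.jpg" is a placeholder path; replace it with any RGB image.
classifier = pipeline(
    "image-classification",
    model="Randy2000/vit-base-patch16-224-finetuned-flower",
)

predictions = classifier("example_flower.jpg")
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```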
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.2.1+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "model-index": [{"name": "vit-base-patch16-224-finetuned-flower", "results": []}]} | Randy2000/vit-base-patch16-224-finetuned-flower | null | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-04-16T15:39:03+00:00 | [] | [] | TAGS
#transformers #pytorch #vit #image-classification #generated_from_trainer #dataset-imagefolder #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of google/vit-base-patch16-224 on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.2.1+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
| [
"# vit-base-patch16-224-finetuned-flower\n\nThis model is a fine-tuned version of google/vit-base-patch16-224 on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.24.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.7.1\n- Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #vit #image-classification #generated_from_trainer #dataset-imagefolder #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# vit-base-patch16-224-finetuned-flower\n\nThis model is a fine-tuned version of google/vit-base-patch16-224 on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.24.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.7.1\n- Tokenizers 0.13.3"
] |
text-classification | transformers | # Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
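As an illustrative stop-gap while the official snippet is missing, the hedged sketch below loads the classifier through the `pipeline` API. The model id and the Vietnamese example sentence are taken from this card's metadata; the set of disease labels returned depends on the training data, which is not documented here.

```python
from transformers import pipeline

# Hedged sketch: model id and example text come from this card's metadata.
classifier = pipeline("text-classification", model="PB3002/ViMedical_Diseases")

text = "Tôi hiện đang có các triệu chứng như mỏi gối, tê tay, đau khớp. Tôi có thể bị bệnh gì?"
print(classifier(text))  # e.g. [{'label': '<disease name>', 'score': 0.93}]
```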
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"language": ["vi"], "license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "pipeline_tag": "text-classification", "widget": [{"text": "T\u00f4i hi\u1ec7n \u0111ang c\u00f3 c\u00e1c tri\u1ec7u ch\u1ee9ng nh\u01b0 m\u1ecfi g\u1ed1i, t\u00ea tay, \u0111au kh\u1edbp. T\u00f4i c\u00f3 th\u1ec3 b\u1ecb b\u1ec7nh g\u00ec?"}], "base_model": "FacebookAI/xlm-roberta-base"} | PB3002/ViMedical_Diseases | null | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"vi",
"arxiv:1910.09700",
"base_model:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:39:05+00:00 | [
"1910.09700"
] | [
"vi"
] | TAGS
#transformers #safetensors #xlm-roberta #text-classification #generated_from_trainer #vi #arxiv-1910.09700 #base_model-FacebookAI/xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| # Model Card for Model ID
This modelcard aims to be a base template for new models. It has been generated using this raw template.
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #xlm-roberta #text-classification #generated_from_trainer #vi #arxiv-1910.09700 #base_model-FacebookAI/xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
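No official snippet is provided yet, so the following is a minimal, hedged generation sketch using plain `transformers`. Only the repository id is taken from this card; the prompt, dtype, and decoding settings are illustrative placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kai-oh/mistral-7b-dpo-best-hf-back"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Explain direct preference optimization (DPO) in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```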
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "dpo"]} | kai-oh/mistral-7b-dpo-best-hf-back | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:39:09+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #trl #dpo #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #trl #dpo #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-25-layer](https://huggingface.co/Citaman/command-r-25-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-25-layer
layer_range: [0, 24]
- model: Citaman/command-r-25-layer
layer_range: [1, 25]
merge_method: slerp
base_model: Citaman/command-r-25-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
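To reproduce a merge like this, the configuration above is normally passed to mergekit, either through the `mergekit-yaml` command line tool or through its Python entry point. The sketch below uses the Python API; the import paths and option names follow recent mergekit releases and may differ in other versions, and the config/output paths are placeholders.

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Hedged sketch: "slerp-config.yaml" holds the YAML shown above; the output path is arbitrary.
with open("slerp-config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./command-r-24-layer",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```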
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-25-layer"]} | Citaman/command-r-24-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-25-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:40:30+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-25-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-25-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-25-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-25-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-25-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_hh_shp3_dpo1
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5757
- Rewards/chosen: -7.8281
- Rewards/rejected: -8.8033
- Rewards/accuracies: 0.5900
- Rewards/margins: 0.9752
- Logps/rejected: -333.4745
- Logps/chosen: -309.6530
- Logits/rejected: -1.1752
- Logits/chosen: -1.1933
## Model description
More information needed
## Intended uses & limitations
More information needed
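Although usage is not documented yet, this repository appears to hold a PEFT adapter for the base chat model named above, so a hedged loading sketch is included here. The base and adapter ids come from this card; the prompt and generation settings are placeholders, and access to the gated base weights must be requested separately.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"      # gated base model
adapter_id = "guoyu-zhang/model_hh_shp3_dpo1"  # this repository (DPO-trained adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "How do I politely decline a meeting invitation?"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```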
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0516 | 2.67 | 100 | 1.3486 | -7.4806 | -8.2143 | 0.5600 | 0.7337 | -327.5847 | -306.1782 | -0.9599 | -0.9344 |
| 0.0009 | 5.33 | 200 | 1.4038 | -6.2184 | -6.9730 | 0.6000 | 0.7545 | -315.1709 | -293.5561 | -1.1603 | -1.1661 |
| 0.0001 | 8.0 | 300 | 1.4280 | -6.9990 | -7.9051 | 0.5800 | 0.9061 | -324.4926 | -301.3619 | -1.1942 | -1.2090 |
| 0.0001 | 10.67 | 400 | 1.5040 | -7.4341 | -8.3860 | 0.5900 | 0.9519 | -329.3012 | -305.7131 | -1.1850 | -1.2017 |
| 0.0001 | 13.33 | 500 | 1.5476 | -7.6461 | -8.6008 | 0.6000 | 0.9548 | -331.4495 | -307.8324 | -1.1804 | -1.1981 |
| 0.0001 | 16.0 | 600 | 1.5649 | -7.7575 | -8.7226 | 0.5900 | 0.9651 | -332.6667 | -308.9463 | -1.1773 | -1.1949 |
| 0.0001 | 18.67 | 700 | 1.5789 | -7.8106 | -8.7769 | 0.6000 | 0.9663 | -333.2098 | -309.4775 | -1.1762 | -1.1945 |
| 0.0001 | 21.33 | 800 | 1.5773 | -7.8302 | -8.8009 | 0.5900 | 0.9706 | -333.4498 | -309.6743 | -1.1754 | -1.1931 |
| 0.0001 | 24.0 | 900 | 1.5810 | -7.8322 | -8.8002 | 0.5900 | 0.9680 | -333.4431 | -309.6937 | -1.1751 | -1.1931 |
| 0.0001 | 26.67 | 1000 | 1.5757 | -7.8281 | -8.8033 | 0.5900 | 0.9752 | -333.4745 | -309.6530 | -1.1752 | -1.1933 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_shp3_dpo1", "results": []}]} | guoyu-zhang/model_hh_shp3_dpo1 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-16T15:40:44+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
| model\_hh\_shp3\_dpo1
=====================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5757
* Rewards/chosen: -7.8281
* Rewards/rejected: -8.8033
* Rewards/accuracies: 0.5900
* Rewards/margins: 0.9752
* Logps/rejected: -333.4745
* Logps/chosen: -309.6530
* Logits/rejected: -1.1752
* Logits/chosen: -1.1933
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.1
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
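For completeness, a hedged loading sketch with Unsloth is shown below. The keyword arguments mirror common Unsloth examples rather than anything documented in this repository: `max_seq_length` is an assumed value, and a CUDA device is assumed for generation.

```python
from unsloth import FastLanguageModel

# Hedged sketch: settings below are illustrative, not documented values for this repo.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="codesagar/prompt-guard-reasoning-v7",
    max_seq_length=2048,   # assumed
    dtype=None,            # auto-detect
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Example prompt to analyse:", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```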
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | codesagar/prompt-guard-reasoning-v7 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:41:12+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
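Since the card only links to the course, here is a minimal, hedged sketch of a REINFORCE-style policy network and a greedy evaluation loop in the spirit of that unit. The layer sizes and the checkpoint file name are assumptions, not facts about this repository; adapt them to the artefact actually stored here.

```python
import gymnasium as gym
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    """Small MLP policy for CartPole-v1 (4 observations, 2 discrete actions)."""
    def __init__(self, state_size=4, action_size=2, hidden_size=16):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, action_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)

def evaluate(policy, n_episodes=10):
    env = gym.make("CartPole-v1")
    returns = []
    for _ in range(n_episodes):
        state, _ = env.reset()
        done, total = False, 0.0
        while not done:
            with torch.no_grad():
                probs = policy(torch.from_numpy(state).float().unsqueeze(0))
            action = torch.argmax(probs, dim=1).item()  # greedy action at evaluation time
            state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            total += reward
        returns.append(total)
    env.close()
    return sum(returns) / len(returns)

policy = Policy()
# policy.load_state_dict(torch.load("model.pt"))  # assumed checkpoint name in this repo
print(evaluate(policy))
```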
| {"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinfroce-CartPole-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "500.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | lacknerm/Reinfroce-CartPole-v1 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-04-16T15:41:54+00:00 | [] | [] | TAGS
#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing CartPole-v1
This is a trained model of a Reinforce agent playing CartPole-v1 .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ROCO_llava-v1.6-mistral-pmc_PMC
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.9656
- eval_runtime: 4067.2626
- eval_samples_per_second: 2.01
- eval_steps_per_second: 1.005
- epoch: 0.73
- step: 3000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "ROCO_llava-v1.6-mistral-pmc_PMC", "results": []}]} | Theon1130/ROCO_llava-v1.6-mistral-pmc_PMC | null | [
"transformers",
"safetensors",
"llava_mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:42:20+00:00 | [] | [] | TAGS
#transformers #safetensors #llava_mistral #text-generation #generated_from_trainer #conversational #autotrain_compatible #endpoints_compatible #region-us
|
# ROCO_llava-v1.6-mistral-pmc_PMC
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.9656
- eval_runtime: 4067.2626
- eval_samples_per_second: 2.01
- eval_steps_per_second: 1.005
- epoch: 0.73
- step: 3000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
| [
"# ROCO_llava-v1.6-mistral-pmc_PMC\n\nThis model was trained from scratch on the None dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 1.9656\n- eval_runtime: 4067.2626\n- eval_samples_per_second: 2.01\n- eval_steps_per_second: 1.005\n- epoch: 0.73\n- step: 3000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Framework versions\n\n- Transformers 4.37.2\n- Pytorch 2.1.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.1"
] | [
"TAGS\n#transformers #safetensors #llava_mistral #text-generation #generated_from_trainer #conversational #autotrain_compatible #endpoints_compatible #region-us \n",
"# ROCO_llava-v1.6-mistral-pmc_PMC\n\nThis model was trained from scratch on the None dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 1.9656\n- eval_runtime: 4067.2626\n- eval_samples_per_second: 2.01\n- eval_steps_per_second: 1.005\n- epoch: 0.73\n- step: 3000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Framework versions\n\n- Transformers 4.37.2\n- Pytorch 2.1.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.1"
] |
text-generation | transformers |
# arcee-ai/Saul-Instruct-Mistral-7B-Instruct-v0.2-Slerp
arcee-ai/Saul-Instruct-Mistral-7B-Instruct-v0.2-Slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [Equall/Saul-Instruct-v1](https://huggingface.co/Equall/Saul-Instruct-v1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: Equall/Saul-Instruct-v1
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Equall/Saul-Instruct-v1"]} | arcee-ai/Saul-Instruct-Mistral-7B-Instruct-v0.2-Slerp | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"Equall/Saul-Instruct-v1",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:42:50+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #mistralai/Mistral-7B-Instruct-v0.2 #Equall/Saul-Instruct-v1 #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# arcee-ai/Saul-Instruct-Mistral-7B-Instruct-v0.2-Slerp
arcee-ai/Saul-Instruct-Mistral-7B-Instruct-v0.2-Slerp is a merge of the following models using mergekit:
* mistralai/Mistral-7B-Instruct-v0.2
* Equall/Saul-Instruct-v1
## Configuration
| [
"# arcee-ai/Saul-Instruct-Mistral-7B-Instruct-v0.2-Slerp\n\narcee-ai/Saul-Instruct-Mistral-7B-Instruct-v0.2-Slerp is a merge of the following models using mergekit:\n* mistralai/Mistral-7B-Instruct-v0.2\n* Equall/Saul-Instruct-v1",
"## Configuration"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #mistralai/Mistral-7B-Instruct-v0.2 #Equall/Saul-Instruct-v1 #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# arcee-ai/Saul-Instruct-Mistral-7B-Instruct-v0.2-Slerp\n\narcee-ai/Saul-Instruct-Mistral-7B-Instruct-v0.2-Slerp is a merge of the following models using mergekit:\n* mistralai/Mistral-7B-Instruct-v0.2\n* Equall/Saul-Instruct-v1",
"## Configuration"
] |
text-generation | transformers |
# Spaetzle-v62-7b
Spaetzle-v62-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [flemmingmiguel/NeuDist-Ro-7B](https://huggingface.co/flemmingmiguel/NeuDist-Ro-7B)
* [cstr/Spaetzle-v61-7b](https://huggingface.co/cstr/Spaetzle-v61-7b)
* [ResplendentAI/Flora_DPO_7B](https://huggingface.co/ResplendentAI/Flora_DPO_7B)
## 🧩 Configuration
```yaml
models:
- model: mayflowergmbh/Wiedervereinigung-7b-dpo
# no parameters necessary for base model
- model: flemmingmiguel/NeuDist-Ro-7B
parameters:
density: 0.60
weight: 0.30
- model: cstr/Spaetzle-v61-7b
parameters:
density: 0.65
weight: 0.40
- model: ResplendentAI/Flora_DPO_7B
parameters:
density: 0.6
weight: 0.3
merge_method: dare_ties
base_model: mayflowergmbh/Wiedervereinigung-7b-dpo
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "cstr/Spaetzle-v62-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "flemmingmiguel/NeuDist-Ro-7B", "cstr/Spaetzle-v61-7b", "ResplendentAI/Flora_DPO_7B"], "base_model": ["flemmingmiguel/NeuDist-Ro-7B", "cstr/Spaetzle-v61-7b", "ResplendentAI/Flora_DPO_7B"]} | cstr/Spaetzle-v62-7b | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"flemmingmiguel/NeuDist-Ro-7B",
"cstr/Spaetzle-v61-7b",
"ResplendentAI/Flora_DPO_7B",
"conversational",
"base_model:flemmingmiguel/NeuDist-Ro-7B",
"base_model:cstr/Spaetzle-v61-7b",
"base_model:ResplendentAI/Flora_DPO_7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:42:50+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #flemmingmiguel/NeuDist-Ro-7B #cstr/Spaetzle-v61-7b #ResplendentAI/Flora_DPO_7B #conversational #base_model-flemmingmiguel/NeuDist-Ro-7B #base_model-cstr/Spaetzle-v61-7b #base_model-ResplendentAI/Flora_DPO_7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Spaetzle-v62-7b
Spaetzle-v62-7b is a merge of the following models using LazyMergekit:
* flemmingmiguel/NeuDist-Ro-7B
* cstr/Spaetzle-v61-7b
* ResplendentAI/Flora_DPO_7B
## Configuration
## Usage
| [
"# Spaetzle-v62-7b\n\nSpaetzle-v62-7b is a merge of the following models using LazyMergekit:\n* flemmingmiguel/NeuDist-Ro-7B\n* cstr/Spaetzle-v61-7b\n* ResplendentAI/Flora_DPO_7B",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #flemmingmiguel/NeuDist-Ro-7B #cstr/Spaetzle-v61-7b #ResplendentAI/Flora_DPO_7B #conversational #base_model-flemmingmiguel/NeuDist-Ro-7B #base_model-cstr/Spaetzle-v61-7b #base_model-ResplendentAI/Flora_DPO_7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Spaetzle-v62-7b\n\nSpaetzle-v62-7b is a merge of the following models using LazyMergekit:\n* flemmingmiguel/NeuDist-Ro-7B\n* cstr/Spaetzle-v61-7b\n* ResplendentAI/Flora_DPO_7B",
"## Configuration",
"## Usage"
] |
text-generation | transformers |
### CodeRosa-70B-AB1
ExLlamav2 4.9 bpw 8 h quants of https://huggingface.co/altomek/CodeRosa-70B-AB1
| {"language": ["en"], "license": "other", "inference": false} | altomek/CodeRosa-70B-AB1-4.9bpw-EXL2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:43:29+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #en #license-other #autotrain_compatible #text-generation-inference #region-us
|
### CodeRosa-70B-AB1
ExLlamav2 4.9 bpw 8 h quants of URL
| [
"### CodeRosa-70B-AB1\n\nExLlamav2 4.9 bpw 8 h quants of URL"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #en #license-other #autotrain_compatible #text-generation-inference #region-us \n",
"### CodeRosa-70B-AB1\n\nExLlamav2 4.9 bpw 8 h quants of URL"
] |
sentence-similarity | transformers |
# AviLaBSE
## Model description
This is a unified model trained on top of Google's [LaBSE](https://tfhub.dev/google/LaBSE/2) to add dimensions for additional low-resourced languages, then converted to PyTorch. It can be used to map more than 250 languages to a shared vector space. The pre-training process combines masked language modeling with translation language modeling. The model is useful for getting multilingual sentence embeddings and for bi-text retrieval.
- **Model**: [HuggingFace's model hub](https://huggingface.co/sartifyllc/AviLaBSE).
- **Paper**: [arXiv](https://arxiv.org/abs/2007.01852).
- **Original TF model**: [TensorFlow Hub](https://tfhub.dev/google/LaBSE/2).
- **Blog post**: [Google AI Blog](https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html).
- **Developed by:** [Sartify LLC](https://huggingface.co/sartifyllc/)
## Usage
Using the model:
```python
import torch
from transformers import BertModel, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("sartifyllc/AviLaBSE")
model = BertModel.from_pretrained("sartifyllc/AviLaBSE")
model = model.eval()
english_sentences = [
"dog",
"Puppies are nice.",
"I enjoy taking long walks along the beach with my dog.",
]
english_inputs = tokenizer(english_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
english_outputs = model(**english_inputs)
```
To get the sentence embeddings, use the pooler output:
```python
english_embeddings = english_outputs.pooler_output
```
Output for other low-resourced languages:
```python
swahili_sentences = [
"mbwa",
"Mbwa ni mzuri.",
"Ninafurahia kutembea kwa muda mrefu kando ya pwani na mbwa wangu.",
]
zulu_sentences = [
"inja",
"Inja iyavuma.",
"Ngithanda ukubhema izinyawo ezidlula emanzini nabanye nomfana wami.",
]
igbo_sentences = [
"nwa nkịta",
"Nwa nkịta dị ọma.",
"Achọrọ m gaa n'okirikiri na ụzọ nke oke na mgbidi na nwa nkịta m."
]
swahili_inputs = tokenizer(swahili_sentences, return_tensors="pt", padding=True)
zulu_inputs = tokenizer(zulu_sentences, return_tensors="pt", padding=True)
igbo_inputs=tokenizer(igbo_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
swahili_outputs = model(**swahili_inputs)
zulu_outputs = model(**zulu_inputs)
igbo_outputs =model(**igbo_inputs)
swahili_embeddings = swahili_outputs.pooler_output
zulu_embeddings = zulu_outputs.pooler_output
igbo_embeddings=igbo_outputs.pooler_output
```
For similarity between sentences, an L2-norm is recommended before calculating the similarity:
```python
import torch.nn.functional as F
def similarity(embeddings_1, embeddings_2):
normalized_embeddings_1 = F.normalize(embeddings_1, p=2)
normalized_embeddings_2 = F.normalize(embeddings_2, p=2)
return torch.matmul(
normalized_embeddings_1, normalized_embeddings_2.transpose(0, 1)
)
print(similarity(english_embeddings, swahili_embeddings))
print(similarity(english_embeddings, zulu_embeddings))
print(similarity(swahili_embeddings, igbo_embeddings))
```
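
The same embeddings can also be obtained through the `sentence-transformers` library, which wraps the pooling and normalization steps listed under Full Model Architecture below. This is a minimal sketch and assumes the repository loads directly with `SentenceTransformer`:

```python
# Minimal sketch (assumption: the repo is loadable with sentence-transformers).
from sentence_transformers import SentenceTransformer

st_model = SentenceTransformer("sartifyllc/AviLaBSE")
sentences = ["dog", "mbwa", "inja", "nwa nkịta"]
# encode() applies pooling; normalize_embeddings=True mirrors the L2-norm step above
embeddings = st_model.encode(sentences, normalize_embeddings=True)
print(embeddings @ embeddings.T)  # pairwise cosine similarities
```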
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
``` | {"language": ["multilingual", "af", "sq", "am", "ar", "hy", "as", "az", "eu", "be", "bn", "bs", "bg", "my", "ca", "ceb", "zh", "co", "hr", "cs", "da", "nl", "en", "eo", "et", "fi", "fr", "fy", "gl", "ka", "de", "el", "gu", "ht", "ha", "haw", "he", "hi", "hmn", "hu", "is", "ig", "id", "ga", "it", "ja", "jv", "kn", "kk", "km", "rw", "ko", "ku", "ky", "lo", "la", "lv", "lt", "lb", "mk", "mg", "ms", "ml", "mt", "mi", "mr", "mn", "ne", false, "ny", "or", "fa", "pl", "pt", "pa", "ro", "ru", "sm", "gd", "sr", "st", "sn", "si", "sk", "sl", "so", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "th", "bo", "tr", "tk", "ug", "uk", "ur", "uz", "vi", "cy", "wo", "gd", "sr", "st", "sn", "si", "sk", "sl", "so", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "th", "bo", "tr", "tk", "ug", "uk", "ur", "gd", "sr", "st", "sn", "si", "sk", "sl", "so", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "th", "bo", "tr", "tk", "ug", "uk", "ur", "uz", "vi", "cy", "wo", "gd", "sr", "st", "sn", "si", "sk", "sl", "so", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "th", "bo", "tr", "tk", "ug", "uk", "ur", "uz", "vi", "uz", "vi", "cy", "wo", "xh", "xh", "yi", "yo", "zu"], "license": "apache-2.0", "tags": ["bert", "sentence_embedding", "multilingual", "sartify", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | sartifyllc/AviLaBSE | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"sentence_embedding",
"multilingual",
"sartify",
"sentence-similarity",
"af",
"sq",
"am",
"ar",
"hy",
"as",
"az",
"eu",
"be",
"bn",
"bs",
"bg",
"my",
"ca",
"ceb",
"zh",
"co",
"hr",
"cs",
"da",
"nl",
"en",
"eo",
"et",
"fi",
"fr",
"fy",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"ha",
"haw",
"he",
"hi",
"hmn",
"hu",
"is",
"ig",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"km",
"rw",
"ko",
"ku",
"ky",
"lo",
"la",
"lv",
"lt",
"lb",
"mk",
"mg",
"ms",
"ml",
"mt",
"mi",
"mr",
"mn",
"ne",
"no",
"ny",
"or",
"fa",
"pl",
"pt",
"pa",
"ro",
"ru",
"sm",
"gd",
"sr",
"st",
"sn",
"si",
"sk",
"sl",
"so",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"th",
"bo",
"tr",
"tk",
"ug",
"uk",
"ur",
"uz",
"vi",
"cy",
"wo",
"xh",
"yi",
"yo",
"zu",
"arxiv:2007.01852",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:45:12+00:00 | [
"2007.01852"
] | [
"multilingual",
"af",
"sq",
"am",
"ar",
"hy",
"as",
"az",
"eu",
"be",
"bn",
"bs",
"bg",
"my",
"ca",
"ceb",
"zh",
"co",
"hr",
"cs",
"da",
"nl",
"en",
"eo",
"et",
"fi",
"fr",
"fy",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"ha",
"haw",
"he",
"hi",
"hmn",
"hu",
"is",
"ig",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"km",
"rw",
"ko",
"ku",
"ky",
"lo",
"la",
"lv",
"lt",
"lb",
"mk",
"mg",
"ms",
"ml",
"mt",
"mi",
"mr",
"mn",
"ne",
"no",
"ny",
"or",
"fa",
"pl",
"pt",
"pa",
"ro",
"ru",
"sm",
"gd",
"sr",
"st",
"sn",
"si",
"sk",
"sl",
"so",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"th",
"bo",
"tr",
"tk",
"ug",
"uk",
"ur",
"uz",
"vi",
"cy",
"wo",
"gd",
"sr",
"st",
"sn",
"si",
"sk",
"sl",
"so",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"th",
"bo",
"tr",
"tk",
"ug",
"uk",
"ur",
"gd",
"sr",
"st",
"sn",
"si",
"sk",
"sl",
"so",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"th",
"bo",
"tr",
"tk",
"ug",
"uk",
"ur",
"uz",
"vi",
"cy",
"wo",
"gd",
"sr",
"st",
"sn",
"si",
"sk",
"sl",
"so",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"th",
"bo",
"tr",
"tk",
"ug",
"uk",
"ur",
"uz",
"vi",
"uz",
"vi",
"cy",
"wo",
"xh",
"xh",
"yi",
"yo",
"zu"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #feature-extraction #sentence_embedding #multilingual #sartify #sentence-similarity #af #sq #am #ar #hy #as #az #eu #be #bn #bs #bg #my #ca #ceb #zh #co #hr #cs #da #nl #en #eo #et #fi #fr #fy #gl #ka #de #el #gu #ht #ha #haw #he #hi #hmn #hu #is #ig #id #ga #it #ja #jv #kn #kk #km #rw #ko #ku #ky #lo #la #lv #lt #lb #mk #mg #ms #ml #mt #mi #mr #mn #ne #no #ny #or #fa #pl #pt #pa #ro #ru #sm #gd #sr #st #sn #si #sk #sl #so #es #su #sw #sv #tl #tg #ta #tt #te #th #bo #tr #tk #ug #uk #ur #uz #vi #cy #wo #xh #yi #yo #zu #arxiv-2007.01852 #license-apache-2.0 #endpoints_compatible #region-us
|
# AviLaBSE
## Model description
This is a unified model trained on top of Google's LaBSE to add dimensions for additional low-resourced languages, then converted to PyTorch. It can be used to map more than 250 languages to a shared vector space. The pre-training process combines masked language modeling with translation language modeling. The model is useful for getting multilingual sentence embeddings and for bi-text retrieval.
- Model: HuggingFace's model hub.
- Paper: arXiv.
- Original TF model: TensorFlow Hub.
- Blog post: Google AI Blog.
- Developed by: Sartify LLC
## Usage
Using the model:
To get the sentence embeddings, use the pooler output:
Output for other low-resourced languages:
For similarity between sentences, an L2-norm is recommended before calculating the similarity:
## Full Model Architecture
| [
"# AviLaBSE",
"## Model description\n\nThis is a unified model trained over LaBSE by google LaBSE to add other row resourced language dimensions and then convereted to PyTorch. It can be used to map more than 250 languages to a shared vector space. The pre-training process combines masked language modeling with translation language modeling. The model is useful for getting multilingual sentence embeddings and for bi-text retrieval.\n\n- Model: HuggingFace's model hub.\n- Paper: arXiv.\n- Original TF model: TensorFlow Hub.\n- Blog post: Google AI Blog.\n- Developed by: Sartify LLC",
"## Usage\n\nUsing the model:\n\n\n\nTo get the sentence embeddings, use the pooler output:\n\n\n\nOutput for other row resourced languages:\n\n\n\nFor similarity between sentences, an L2-norm is recommended before calculating the similarity:",
"## Full Model Architecture"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #feature-extraction #sentence_embedding #multilingual #sartify #sentence-similarity #af #sq #am #ar #hy #as #az #eu #be #bn #bs #bg #my #ca #ceb #zh #co #hr #cs #da #nl #en #eo #et #fi #fr #fy #gl #ka #de #el #gu #ht #ha #haw #he #hi #hmn #hu #is #ig #id #ga #it #ja #jv #kn #kk #km #rw #ko #ku #ky #lo #la #lv #lt #lb #mk #mg #ms #ml #mt #mi #mr #mn #ne #no #ny #or #fa #pl #pt #pa #ro #ru #sm #gd #sr #st #sn #si #sk #sl #so #es #su #sw #sv #tl #tg #ta #tt #te #th #bo #tr #tk #ug #uk #ur #uz #vi #cy #wo #xh #yi #yo #zu #arxiv-2007.01852 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# AviLaBSE",
"## Model description\n\nThis is a unified model trained over LaBSE by google LaBSE to add other row resourced language dimensions and then convereted to PyTorch. It can be used to map more than 250 languages to a shared vector space. The pre-training process combines masked language modeling with translation language modeling. The model is useful for getting multilingual sentence embeddings and for bi-text retrieval.\n\n- Model: HuggingFace's model hub.\n- Paper: arXiv.\n- Original TF model: TensorFlow Hub.\n- Blog post: Google AI Blog.\n- Developed by: Sartify LLC",
"## Usage\n\nUsing the model:\n\n\n\nTo get the sentence embeddings, use the pooler output:\n\n\n\nOutput for other row resourced languages:\n\n\n\nFor similarity between sentences, an L2-norm is recommended before calculating the similarity:",
"## Full Model Architecture"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-small-finetuned-ner-2048
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0036
- Recall: 0.9920
- Precision: 0.9866
- Fbeta Score: 0.9918
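
As a starting point for inference, the checkpoint can be used with the standard token-classification pipeline; the example below is a hedged sketch, since the card does not document the label set or training data:

```python
# Hedged sketch: standard token-classification inference with this checkpoint.
# The entity labels returned depend on the (undocumented) training data.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="marksusol/deberta-v3-small-finetuned-ner-2048",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```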
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Recall | Precision | Fbeta Score |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:---------:|:-----------:|
| 0.0085 | 1.0 | 3186 | 0.0058 | 0.9786 | 0.9693 | 0.9782 |
| 0.0032 | 2.0 | 6373 | 0.0036 | 0.9869 | 0.9764 | 0.9865 |
| 0.004 | 3.0 | 9559 | 0.0037 | 0.9791 | 0.9892 | 0.9795 |
| 0.0014 | 4.0 | 12746 | 0.0035 | 0.9908 | 0.9817 | 0.9905 |
| 0.0019 | 5.0 | 15932 | 0.0038 | 0.9903 | 0.9806 | 0.9899 |
| 0.0022 | 6.0 | 19119 | 0.0032 | 0.9929 | 0.9861 | 0.9927 |
| 0.0005 | 7.0 | 22305 | 0.0031 | 0.9906 | 0.9894 | 0.9905 |
| 0.0003 | 8.0 | 25492 | 0.0033 | 0.9915 | 0.9855 | 0.9913 |
| 0.0001 | 9.0 | 28678 | 0.0036 | 0.9920 | 0.9866 | 0.9918 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["recall", "precision"], "base_model": "microsoft/deberta-v3-small", "model-index": [{"name": "deberta-v3-small-finetuned-ner-2048", "results": []}]} | marksusol/deberta-v3-small-finetuned-ner-2048 | null | [
"transformers",
"safetensors",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-small",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:45:43+00:00 | [] | [] | TAGS
#transformers #safetensors #deberta-v2 #token-classification #generated_from_trainer #base_model-microsoft/deberta-v3-small #license-mit #autotrain_compatible #endpoints_compatible #region-us
| deberta-v3-small-finetuned-ner-2048
===================================
This model is a fine-tuned version of microsoft/deberta-v3-small on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0036
* Recall: 0.9920
* Precision: 0.9866
* Fbeta Score: 0.9918
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 4
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #token-classification #generated_from_trainer #base_model-microsoft/deberta-v3-small #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hitesh-s0lanki/my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4955
- Validation Loss: 1.6800
- Epoch: 2
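
Since the repository ships TensorFlow weights, extractive question answering can be run with the standard pipeline; the snippet below is a hedged sketch with an illustrative question and context:

```python
# Hedged sketch: extractive QA with the TensorFlow weights of this checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="hitesh-s0lanki/my_awesome_qa_model",
    framework="tf",  # the repository contains TensorFlow weights
)
result = qa(
    question="Which base model was fine-tuned?",
    context="This question-answering model was fine-tuned from distilbert-base-uncased using Keras.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```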
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.3511 | 1.9197 | 0 |
| 1.7495 | 1.6800 | 1 |
| 1.4955 | 1.6800 | 2 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "hitesh-s0lanki/my_awesome_qa_model", "results": []}]} | hitesh-s0lanki/my_awesome_qa_model | null | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:46:15+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
| hitesh-s0lanki/my\_awesome\_qa\_model
=====================================
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 1.4955
* Validation Loss: 1.6800
* Epoch: 2
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 500, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.38.2
* TensorFlow 2.15.0
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Sao10K/Euryale-L2-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Euryale-L2-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
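
As one illustration, a single-file quant from the table below can be loaded with `llama-cpp-python`; the file name and settings are assumptions to adapt:

```python
# Hedged sketch: load one of the single-file quants with llama-cpp-python.
# Multi-part quants (Q6_K, Q8_0) must first be concatenated into a single .gguf file.
from llama_cpp import Llama

llm = Llama(
    model_path="Euryale-L2-70B.Q4_K_M.gguf",  # path to the downloaded quant (assumption)
    n_ctx=4096,       # context window; pick what fits your hardware
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)
out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```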
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Euryale-L2-70B-GGUF/resolve/main/Euryale-L2-70B.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "base_model": "Sao10K/Euryale-L2-70B", "quantized_by": "mradermacher"} | mradermacher/Euryale-L2-70B-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/Euryale-L2-70B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:47:49+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-Sao10K/Euryale-L2-70B #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-Sao10K/Euryale-L2-70B #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` | {"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]} | ikerm11/FTmistral7bTest03 | null | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-04-16T15:48:16+00:00 | [] | [] | TAGS
#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #has_space #region-us
|
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit AutoTrain.
# Usage
| [
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] | [
"TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #has_space #region-us \n",
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.01-len_4-filtered-negative
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
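
Because the card does not state which task head the adapter was trained for, a safe first step is to inspect the adapter configuration; this is a minimal sketch:

```python
# Hedged sketch: inspect the adapter before deciding how to attach it to the base model.
from peft import PeftConfig

config = PeftConfig.from_pretrained("Shalazary/ruBert-base-sberquad-0.01-len_4-filtered-negative")
print(config.base_model_name_or_path)  # should point at ai-forever/ruBert-base
print(config.task_type)                # reveals which model head the adapter expects
```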
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.01-len_4-filtered-negative", "results": []}]} | Shalazary/ruBert-base-sberquad-0.01-len_4-filtered-negative | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T15:51:18+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.01-len_4-filtered-negative
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# ruBert-base-sberquad-0.01-len_4-filtered-negative\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.01-len_4-filtered-negative\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-24-layer](https://huggingface.co/Citaman/command-r-24-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-24-layer
layer_range: [0, 23]
- model: Citaman/command-r-24-layer
layer_range: [1, 24]
merge_method: slerp
base_model: Citaman/command-r-24-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-24-layer"]} | Citaman/command-r-23-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-24-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:51:40+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-24-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-24-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-24-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-24-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-24-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2220
- Accuracy: 0.926
- F1: 0.9260
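
For inference, the checkpoint works with the standard text-classification pipeline; the sentence below is only illustrative:

```python
# Hedged sketch: score a sentence against all emotion labels with this checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cogsci13/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for every label instead of just the top one
)
print(classifier("I can't wait to see the results of this experiment!"))
```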
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8881 | 1.0 | 250 | 0.3296 | 0.9075 | 0.9068 |
| 0.2673 | 2.0 | 500 | 0.2220 | 0.926 | 0.9260 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.926, "name": "Accuracy"}, {"type": "f1", "value": 0.9260176788678343, "name": "F1"}]}]}]} | cogsci13/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T15:52:03+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2220
* Accuracy: 0.926
* F1: 0.9260
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Lumina-3.5
Lumina-3.5 is a Mixture of Experts (MoE) model made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing), with improved reasoning over [Lumina-3](https://huggingface.co/Ppoyaa/Lumina-3). This model uses a context window of up to 32k tokens.
# Response Examples
``` Math
Input: 25 - 4 * 2 + 3 = ?
------------------------------------
Output: To solve this equation, we will follow the order of operations (PEMDAS/BODMAS). This means performing any multiplications and divisions first from left to right, and then additions and subtractions from left to right.
The given equation is: 25 - 4 * 2 + 3
First, let's perform the multiplication:
4 * 2 = 8
Now, our equation looks like: 25 - 8 + 3
Next, perform the subtraction:
25 - 8 = 17
Finally, do the addition:
17 + 3 = 20
So, the result of the equation 25 - 4 * 2 + 3 is 20.
```
``` Reasoning
Input: Jane is faster than Joe. Joe is faster than Sam. Is Sam faster than Jane? Explain your reasoning step by step.
------------------------------------
Output: To answer this question, let's analyze the given information step by step:
We are provided with a comparison between three individuals - Jane, Joe, and Sam - in terms of their relative speeds. The statements tell us the following about their ranks:
Jane > Joe (Jane is faster than Joe)
Joe > Sam (Joe is faster than Sam)
Now we need to determine if the opposite relationship holds true: whether Sam is faster than Jane or not. Based on the given comparisons, we know that both Jane and Joe have faster speeds compared to Sam. There isn't any piece of information suggesting otherwise.
Since the original premise established an ordered ranking among these individuals (Jane > Joe > Sam), and the direct opposite of the relationship between Sam and the other two is not present, we cannot conclude that Sam is faster than Jane.
Thus, based on the provided information, our reasoning confirms that Sam is not faster than Jane.
```
## 🏆 Open LLM Leaderboard Evaluation Results
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.41|
|AI2 Reasoning Challenge (25-Shot)|71.59|
|HellaSwag (10-Shot) |88.82|
|MMLU (5-Shot) |64.48|
|TruthfulQA (0-shot) |75.66|
|Winogrande (5-shot) |83.98|
|GSM8k (5-shot) |67.93|
# Quants
Special thanks to [mradermacher](https://huggingface.co/mradermacher) for the GGUF quants:
* [mradermacher/Lumina-3.5-GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF)
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/Lumina-3.5"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Ppoyaa__Lumina-3.5)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.41|
|AI2 Reasoning Challenge (25-Shot)|71.59|
|HellaSwag (10-Shot) |88.82|
|MMLU (5-Shot) |64.48|
|TruthfulQA (0-shot) |75.66|
|Winogrande (5-shot) |83.98|
|GSM8k (5-shot) |67.93|
| {"license": "apache-2.0", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit"], "model-index": [{"name": "Lumina-3.5", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 71.59, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ppoyaa/Lumina-3.5", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 88.82, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ppoyaa/Lumina-3.5", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 64.48, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ppoyaa/Lumina-3.5", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 75.66}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ppoyaa/Lumina-3.5", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 83.98, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ppoyaa/Lumina-3.5", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 67.93, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ppoyaa/Lumina-3.5", "name": "Open LLM Leaderboard"}}]}]} | Ppoyaa/Lumina-3.5 | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T15:54:59+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Lumina-3.5
==========
Lumina-3.5 is a Mixture of Experts (MoE) model made with LazyMergekit, with improved reasoning over Lumina-3. This model uses a context window of up to 32k tokens.
Response Examples
=================
Open LLM Leaderboard Evaluation Results
---------------------------------------
Quants
======
Special thanks to mradermacher for the GGUF quants:
* mradermacher/Lumina-3.5-GGUF
Usage
-----
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-finetuned-intentv3.0
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
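
Only the LoRA adapter is stored in this repository, so it has to be attached to the `microsoft/phi-2` base model; the sketch below is hedged, and the prompt format for intent prediction is an assumption:

```python
# Hedged sketch: attach this LoRA adapter to the phi-2 base model.
# The "Utterance:/Intent:" prompt format is an assumption; the card does not document it.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "mohits01/phi-2-finetuned-intentv3.0",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

inputs = tokenizer("Utterance: book me a flight to Paris\nIntent:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```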
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi-2-finetuned-intentv3.0", "results": []}]} | mohits01/phi-2-finetuned-intentv3.0 | null | [
"peft",
"tensorboard",
"safetensors",
"phi",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-16T15:55:48+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #phi #generated_from_trainer #custom_code #base_model-microsoft/phi-2 #license-mit #region-us
|
# phi-2-finetuned-intentv3.0
This model is a fine-tuned version of microsoft/phi-2 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# phi-2-finetuned-intentv3.0\n\nThis model is a fine-tuned version of microsoft/phi-2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 6\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- num_epochs: 50\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #phi #generated_from_trainer #custom_code #base_model-microsoft/phi-2 #license-mit #region-us \n",
"# phi-2-finetuned-intentv3.0\n\nThis model is a fine-tuned version of microsoft/phi-2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 6\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- num_epochs: 50\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
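
A minimal sketch of the loading and evaluation code; the checkpoint file name inside the repository is an assumption:

```python
# Hedged sketch: download the PPO checkpoint from the Hub and evaluate it.
# The file name "ppo-LunarLander-v2.zip" is an assumption about this repository's contents.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("KrisSommer/Moonlander_DRL_course", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```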
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "263.70 +/- 22.32", "name": "mean_reward", "verified": false}]}]}]} | KrisSommer/Moonlander_DRL_course | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-16T15:57:14+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-peft-cross-fyp
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2277
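
This repository holds a PEFT adapter rather than full model weights, so it has to be attached to `openai/whisper-small`; the transcription sketch below is hedged, and the audio file name is a placeholder:

```python
# Hedged sketch: attach the adapter to whisper-small and transcribe a single audio file.
# "sample.wav" is a placeholder; the target language of the fine-tune is not documented.
import torch
import librosa
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "yaygomii/whisper-small-peft-cross-fyp")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

audio, sr = librosa.load("sample.wav", sr=16_000)
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(ids, skip_special_tokens=True))
```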
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1887 | 1.0 | 1984 | 0.2277 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "openai/whisper-small", "model-index": [{"name": "whisper-small-peft-cross-fyp", "results": []}]} | yaygomii/whisper-small-peft-cross-fyp | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T15:57:50+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-openai/whisper-small #license-apache-2.0 #region-us
| whisper-small-peft-cross-fyp
============================
This model is a fine-tuned version of openai/whisper-small on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2277
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 50
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.1.dev0
* Transformers 4.40.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-openai/whisper-small #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7b-sft-lora-ultrachat
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
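
In the absence of documented usage, a reasonable starting point is to load the LoRA adapter on top of the base instruct model with `peft`. This is a sketch only; the repository id and the chat prompt are assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "APaul1/Mistral-7b-sft-lora-ultrachat"  # assumed adapter repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id).eval()

messages = [{"role": "user", "content": "Explain in two sentences what supervised fine-tuning does."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```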
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 1.3808 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "Mistral-7b-sft-lora-ultrachat", "results": []}]} | APaul1/Mistral-7b-sft-lora-ultrachat | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T16:00:47+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
| Mistral-7b-sft-lora-ultrachat
=============================
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 128
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* num\_epochs: 1
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 128\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 128\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tapt_helpfulness_pretraining_model_final
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5048
## Model description
More information needed
## Intended uses & limitations
More information needed
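
Since this is a task-adaptively pretrained masked language model rather than a classifier, direct use is mostly limited to fill-mask style probing before further fine-tuning. A minimal sketch, assuming the checkpoint is saved in the standard RoBERTa masked-LM format; the repository id and example sentence are illustrative:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="BigTMiami/tapt_helpfulness_pretraining_model_final")

for prediction in fill_mask("The reviewer found this product <mask> for everyday use."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```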
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 21
- eval_batch_size: 21
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.7974 | 1.0 | 232 | 2.5728 |
| 2.4546 | 2.0 | 465 | 2.1447 |
| 2.1813 | 3.0 | 697 | 1.9812 |
| 2.0532 | 4.0 | 930 | 1.8776 |
| 1.9775 | 5.0 | 1162 | 1.8295 |
| 1.9073 | 6.0 | 1395 | 1.7695 |
| 1.88 | 7.0 | 1627 | 1.7444 |
| 1.8393 | 8.0 | 1860 | 1.7227 |
| 1.8182 | 9.0 | 2092 | 1.6952 |
| 1.7903 | 10.0 | 2325 | 1.6860 |
| 1.7805 | 11.0 | 2557 | 1.6647 |
| 1.7604 | 12.0 | 2790 | 1.6551 |
| 1.7472 | 13.0 | 3022 | 1.6403 |
| 1.7328 | 14.0 | 3255 | 1.6328 |
| 1.7251 | 15.0 | 3487 | 1.6339 |
| 1.704 | 16.0 | 3720 | 1.6132 |
| 1.7113 | 17.0 | 3952 | 1.6026 |
| 1.6898 | 18.0 | 4185 | 1.5934 |
| 1.692 | 19.0 | 4417 | 1.6081 |
| 1.6787 | 20.0 | 4650 | 1.5891 |
| 1.679 | 21.0 | 4882 | 1.5877 |
| 1.6632 | 22.0 | 5115 | 1.5764 |
| 1.6674 | 23.0 | 5347 | 1.5962 |
| 1.6627 | 24.0 | 5580 | 1.5759 |
| 1.6613 | 25.0 | 5812 | 1.5627 |
| 1.6421 | 26.0 | 6045 | 1.5636 |
| 1.6495 | 27.0 | 6277 | 1.5589 |
| 1.632 | 28.0 | 6510 | 1.5722 |
| 1.6343 | 29.0 | 6742 | 1.5717 |
| 1.638 | 30.0 | 6975 | 1.5477 |
| 1.6327 | 31.0 | 7207 | 1.5498 |
| 1.6218 | 32.0 | 7440 | 1.5496 |
| 1.6258 | 33.0 | 7672 | 1.5408 |
| 1.6205 | 34.0 | 7905 | 1.5361 |
| 1.6208 | 35.0 | 8137 | 1.5435 |
| 1.6105 | 36.0 | 8370 | 1.5285 |
| 1.6168 | 37.0 | 8602 | 1.5367 |
| 1.605 | 38.0 | 8835 | 1.5390 |
| 1.6139 | 39.0 | 9067 | 1.5382 |
| 1.599 | 40.0 | 9300 | 1.5283 |
| 1.6045 | 41.0 | 9532 | 1.5415 |
| 1.5947 | 42.0 | 9765 | 1.5284 |
| 1.6015 | 43.0 | 9997 | 1.5302 |
| 1.5952 | 44.0 | 10230 | 1.5434 |
| 1.5996 | 45.0 | 10462 | 1.5372 |
| 1.5924 | 46.0 | 10695 | 1.5147 |
| 1.5924 | 47.0 | 10927 | 1.5154 |
| 1.5867 | 48.0 | 11160 | 1.5339 |
| 1.5872 | 49.0 | 11392 | 1.5285 |
| 1.5829 | 50.0 | 11625 | 1.5139 |
| 1.5907 | 51.0 | 11857 | 1.5197 |
| 1.5841 | 52.0 | 12090 | 1.5263 |
| 1.5862 | 53.0 | 12322 | 1.4993 |
| 1.5728 | 54.0 | 12555 | 1.5045 |
| 1.5825 | 55.0 | 12787 | 1.5078 |
| 1.5729 | 56.0 | 13020 | 1.5030 |
| 1.5826 | 57.0 | 13252 | 1.5112 |
| 1.5718 | 58.0 | 13485 | 1.5157 |
| 1.5807 | 59.0 | 13717 | 1.5248 |
| 1.5725 | 60.0 | 13950 | 1.5161 |
| 1.5735 | 61.0 | 14182 | 1.5059 |
| 1.5694 | 62.0 | 14415 | 1.5198 |
| 1.5709 | 63.0 | 14647 | 1.5046 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "roberta-base", "model-index": [{"name": "tapt_helpfulness_pretraining_model_final", "results": []}]} | BigTMiami/tapt_helpfulness_pretraining_model_final | null | [
"tensorboard",
"generated_from_trainer",
"base_model:roberta-base",
"license:mit",
"region:us"
] | null | 2024-04-16T16:01:13+00:00 | [] | [] | TAGS
#tensorboard #generated_from_trainer #base_model-roberta-base #license-mit #region-us
| tapt\_helpfulness\_pretraining\_model\_final
============================================
This model is a fine-tuned version of roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5048
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 21
* eval\_batch\_size: 21
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 42
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
* lr\_scheduler\_type: linear
* num\_epochs: 100
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 21\n* eval\\_batch\\_size: 21\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#tensorboard #generated_from_trainer #base_model-roberta-base #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 21\n* eval\\_batch\\_size: 21\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** rayyildiz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
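
No usage code is provided; if the upload contains PEFT LoRA adapter weights (as the repository name suggests), they can presumably be applied to the 4-bit base checkpoint as sketched below. The prompt and generation settings are placeholders.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/tinyllama-bnb-4bit"   # 4-bit base; requires bitsandbytes and a CUDA GPU
adapter_id = "rayyildiz/lora_tinyllama"  # this repository (assumed to be a LoRA adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id).eval()

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```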
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/tinyllama-bnb-4bit"} | rayyildiz/lora_tinyllama | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T16:02:08+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: rayyildiz
- License: apache-2.0
- Finetuned from model : unsloth/tinyllama-bnb-4bit
| [
"# Uploaded model\n\n- Developed by: rayyildiz\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: rayyildiz\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit"
] |
sentence-similarity | sentence-transformers |
# sbastola/muril-base-cased-sentence-transformer-snli
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sbastola/muril-base-cased-sentence-transformer-snli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sbastola/muril-base-cased-sentence-transformer-snli')
model = AutoModel.from_pretrained('sbastola/muril-base-cased-sentence-transformer-snli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sbastola/muril-base-cased-sentence-transformer-snli)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 430 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.LabelAccuracyEvaluator.LabelAccuracyEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 215,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["stanfordnlp/snli"], "pipeline_tag": "sentence-similarity"} | sbastola/muril-base-cased-sentence-transformer-snli | null | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:stanfordnlp/snli",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T16:02:11+00:00 | [] | [] | TAGS
#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #dataset-stanfordnlp/snli #endpoints_compatible #region-us
|
# sbastola/muril-base-cased-sentence-transformer-snli
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 430 with parameters:
Loss:
'sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# sbastola/muril-base-cased-sentence-transformer-snli\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 430 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #dataset-stanfordnlp/snli #endpoints_compatible #region-us \n",
"# sbastola/muril-base-cased-sentence-transformer-snli\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 430 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
null | transformers |
# Model Card for Starcoder2-15B Fine-Tuned with QLORA on Solidity Dataset
## Model Details
### Model Description
This model has been fine-tuned using QLORA to generate Solidity smart contracts from natural language instructions. Training was done using the same dataset used to train [starcoder2-3b-qlora-solidity](https://huggingface.co/yoniebans/starcoder2-3b-qlora-solidity), which should allow for a direct comparison between the two models. The `starcoder2-15b` base model benefits from having been trained on a lot more code, including Solidity code.
- **Developed by:** @yoniebans
- **Model type:** Transformer-based Causal Language Model
- **Language(s) (NLP):** English with programming language syntax (Solidity)
- **License:** [More Information Needed]
- **Finetuned from model** bigcode/starcoder2-15b
### Model Sources [optional]
- **Repository:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
### Direct Use
This model was created to demonstrate how fine-tuning a base model such as starcoder2-15b using qlora can increase the model's ability to create Solidity code.
### Out-of-Scope Use
This model is not intended for:
- Deployment in production systems without rigorous testing.
- Use in non-technical text generation or any context outside smart contract development.
## Bias, Risks, and Limitations
The training data consists of code from publicly sourced Solidity projects which may not encompass the full diversity of programming styles and techniques. The Solidity source code originates from mwritescode's Slither Audited Smart Contracts (https://huggingface.co/datasets/mwritescode/slither-audited-smart-contracts).
### Recommendations
Users are advised to use this model as a starting point for development and not as a definitive solution. Generated code should always be reviewed by experienced developers to ensure security and functionality.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import sys, torch, accelerate
from peft import PeftModel
from transformers import BitsAndBytesConfig, AutoTokenizer, AutoModelForCausalLM
use_4bit = True
bnb_4bit_compute_dtype = "float32"
bnb_4bit_quant_type = "nf4"
use_double_nested_quant = True
compute_dtype = getattr(torch, bnb_4bit_compute_dtype)
device_map = "auto"
max_memory = '75000MB'
n_gpus = torch.cuda.device_count()
max_memory = {i: max_memory for i in range(n_gpus)}
checkpoint = "bigcode/starcoder2-15b"
model = AutoModelForCausalLM.from_pretrained(
checkpoint,
cache_dir=None,
device_map=device_map,
max_memory=max_memory,
quantization_config=BitsAndBytesConfig(
load_in_4bit=use_4bit,
llm_int8_threshold=6.0,
llm_int8_has_fp16_weight=False,
bnb_4bit_compute_dtype=compute_dtype,
bnb_4bit_use_double_quant=use_double_nested_quant,
bnb_4bit_quant_type=bnb_4bit_quant_type
),
torch_dtype=torch.float32,
trust_remote_code=False
)
tokenizer = AutoTokenizer.from_pretrained(
checkpoint,
cache_dir=None,
padding_side="right",
use_fast=False,
tokenizer_type=None, # Needed for HF name change
trust_remote_code=False,
use_auth_token=False,
)
if tokenizer._pad_token is None:
num_new_tokens = tokenizer.add_special_tokens(dict(pad_token="[PAD]"))
model.resize_token_embeddings(len(tokenizer))
if num_new_tokens > 0:
input_embeddings_data = model.get_input_embeddings().weight.data
output_embeddings_data = model.get_output_embeddings().weight.data
input_embeddings_avg = input_embeddings_data[:-num_new_tokens].mean(dim=0, keepdim=True)
output_embeddings_avg = output_embeddings_data[:-num_new_tokens].mean(dim=0, keepdim=True)
input_embeddings_data[-num_new_tokens:] = input_embeddings_avg
output_embeddings_data[-num_new_tokens:] = output_embeddings_avg
local_adapter_weights = "yoniebans/starcoder2-15b-qlora-solidity"
model = PeftModel.from_pretrained(model, local_adapter_weights)
input = "Make a smart contract for a memecoin named 'LLMAI', adhering to the ERC20 standard. The contract should enforce a purchase limit where no individual wallet can acquire more than 1% of the total token supply, which is set at 10 billion tokens. This purchasing limit should be modifiable and can only be disabled by the contract owner at their discretion. Note that the interfaces for ERC20, Ownable, and any other dependencies should be assumed as already imported and do not need to be included in your code response."
prompt = f"""### Instruction:
Use the Task below and the Input given to write the Response, which is a programming code that can solve the following Task:
### Task:
{input}
### Solution:
"""
input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()
outputs = model.generate(
input_ids=input_ids,
max_new_tokens=2048,
do_sample=True,
top_p=0.9,
temperature=0.001,
pad_token_id=1
)
output_text = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]
output_text_without_prompt = output_text[len(prompt):]
file_path = './smart_contract.sol'
with open(file_path, 'w') as file:
file.write(output_text_without_prompt)
print(f"Output written to {file_path}")
```
## Training Details
### Training Data
The model was trained on a dataset consisting of pairs of natural language instructions and their corresponding Solidity code implementations. This dataset includes over 6,000 input and output pairs.
### Training Procedure
#### Training Hyperparameters
- Number of train epochs: 6
- Double quant: true
- Quant type: nf4
- Bits: 4
- Lora r: 64
- Lora alpha: 16
- Lora dropout: 0.0
- Per device train batch size: 1
- Gradient accumulation steps: 1
- Max steps: 2000
- Weight decay: 0.0 #use lora dropout instead for regularization if needed
- Learning rate: 2e-4
- Max gradient normal: 0.3
- Gradient checkpointing: true
- FP16: false
- FP16 option level: O1
- BF16: false
- Optimizer: paged AdamW 32-bit
- Learning rate scheduler type: constant
- Warmup ratio: 0.03
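
For reference, these values map roughly onto the standard `peft`/`transformers` configuration objects. The sketch below is an assumed reconstruction of the setup, not the exact training script that was used.

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig, TrainingArguments

bnb_config = BitsAndBytesConfig(      # 4-bit NF4 with double quantization
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float32,
)

lora_config = LoraConfig(             # r=64, alpha=16, dropout=0.0
    r=64,
    lora_alpha=16,
    lora_dropout=0.0,
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="starcoder2-15b-qlora-solidity",  # placeholder output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    max_steps=2000,
    learning_rate=2e-4,
    max_grad_norm=0.3,
    weight_decay=0.0,
    gradient_checkpointing=True,
    optim="paged_adamw_32bit",
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
)
```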
#### Speeds, Sizes, Times [optional]
| Epoch | Grad Norm | Loss | Step |
|-------|-----------|------|------|
| 0.03 | 0.299587 | 0.9556 | 10 |
| 0.30 | 0.496698 | 0.6772 | 100 |
| 0.59 | 0.249761 | 0.5784 | 200 |
| 0.89 | 0.233806 | 0.6166 | 300 |
| 1.18 | 0.141580 | 0.3541 | 400 |
| 1.48 | 0.129458 | 0.3517 | 500 |
| 1.78 | 0.114603 | 0.3793 | 600 |
| 2.07 | 0.085970 | 0.3937 | 700 |
| 2.37 | 0.085016 | 0.3209 | 800 |
| 2.67 | 0.097650 | 0.3716 | 900 |
| 2.96 | 0.093905 | 0.3437 | 1000 |
| 3.26 | 0.137684 | 0.3026 | 1100 |
| 3.55 | 0.137903 | 0.3432 | 1200 |
| 3.85 | 0.103362 | 0.3310 | 1300 |
| 4.15 | 0.174095 | 0.3567 | 1400 |
| 4.44 | 0.203337 | 0.3279 | 1500 |
| 4.74 | 0.229325 | 0.4026 | 1600 |
| 5.04 | 0.134137 | 0.1737 | 1700 |
| 5.33 | 0.113009 | 0.2132 | 1800 |
| 5.63 | 0.066551 | 0.2207 | 1900 |
| 5.92 | 0.136091 | 0.2193 | 2000 |

### Results
#### Summary
Initial evaluations show promising results in generating Solidity code. More work is required to understand the effectiveness of the input prompt and max_length for training and inference.
## Environmental Impact
- **Hardware Type:** NVIDIA A100 • 80GB (80 GB VRAM) • 117 GB RAM • 8 vCPU
- **Hours used:** 8
- **Cloud Provider:** https://www.runpod.io
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Citation [optional]
**BibTeX:**
```
@misc{lozhkov2024starcoder,
title={StarCoder 2 and The Stack v2: The Next Generation},
author={Anton Lozhkov and Raymond Li and Loubna Ben Allal and Federico Cassano and Joel Lamy-Poirier and Nouamane Tazi and Ao Tang and Dmytro Pykhtar and Jiawei Liu and Yuxiang Wei and Tianyang Liu and Max Tian and Denis Kocetkov and Arthur Zucker and Younes Belkada and Zijian Wang and Qian Liu and Dmitry Abulkhanov and Indraneil Paul and Zhuang Li and Wen-Ding Li and Megan Risdal and Jia Li and Jian Zhu and Terry Yue Zhuo and Evgenii Zheltonozhskii and Nii Osae Osae Dade and Wenhao Yu and Lucas Krauß and Naman Jain and Yixuan Su and Xuanli He and Manan Dey and Edoardo Abati and Yekun Chai and Niklas Muennighoff and Xiangru Tang and Muhtasham Oblokulov and Christopher Akiki and Marc Marone and Chenghao Mou and Mayank Mishra and Alex Gu and Binyuan Hui and Tri Dao and Armel Zebaze and Olivier Dehaene and Nicolas Patry and Canwen Xu and Julian McAuley and Han Hu and Torsten Scholak and Sebastien Paquet and Jennifer Robinson and Carolyn Jane Anderson and Nicolas Chapados and Mostofa Patwary and Nima Tajbakhsh and Yacine Jernite and Carlos Muñoz Ferrandis and Lingming Zhang and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
year={2024},
eprint={2402.19173},
archivePrefix={arXiv},
primaryClass={cs.SE}
}
```
**APA:**
Starcoder2-15B: BigCode Team. (2024). Starcoder2-15B [Software]. Available from https://huggingface.co/bigcode/starcoder2-15b
AlfredPros Smart Contracts Instructions: AlfredPros. (2023). Smart Contracts Instructions [Data set]. Available from https://huggingface.co/datasets/AlfredPros/smart-contracts-instructions
## Model Card Contact
[email protected] | {"language": ["en"], "license": "bigcode-openrail-m", "library_name": "transformers", "datasets": ["AlfredPros/smart-contracts-instructions"]} | yoniebans/starcoder2-15b-qlora-solidity | null | [
"transformers",
"safetensors",
"en",
"dataset:AlfredPros/smart-contracts-instructions",
"arxiv:2402.19173",
"license:bigcode-openrail-m",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T16:02:51+00:00 | [
"2402.19173"
] | [
"en"
] | TAGS
#transformers #safetensors #en #dataset-AlfredPros/smart-contracts-instructions #arxiv-2402.19173 #license-bigcode-openrail-m #endpoints_compatible #region-us
| Model Card for Starcoder2-15B Fine-Tuned with QLORA on Solidity Dataset
=======================================================================
Model Details
-------------
### Model Description
This model has been fine-tuned using QLORA to generate Solidity smart contracts from natural language instructions. Training was done using the same dataset used to train URL, which should allow for a direct comparison between the two models. The 'starcoder2-15b' base model benefits from having been trained on a lot more code, including Solidity code.
* Developed by: @yoniebans
* Model type: Transformer-based Causal Language Model
* Language(s) (NLP): English with programming language syntax (Solidity)
* License:
* Finetuned from model bigcode/starcoder2-15b
### Model Sources [optional]
* Repository:
* Demo [optional]:
Uses
----
### Direct Use
This model was created to demonstrate how fine-tuning a base model such as starcoder2-15b using qlora can increase the model's ability to create Solidity code.
### Out-of-Scope Use
This model is not intended for:
* Deployment in production systems without rigorous testing.
* Use in non-technical text generation or any context outside smart contract development.
Bias, Risks, and Limitations
----------------------------
The training data consists of code from publicly sourced Solidity projects which may not encompass the full diversity of programming styles and techniques. The Solidity source code originates from mwritescode's Slither Audited Smart Contracts (URL
### Recommendations
Users are advised to use this model as a starting point for development and not as a definitive solution. Generated code should always be reviewed by experienced developers to ensure security and functionality.
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model.
Training Details
----------------
### Training Data
The model was trained on a dataset consisting of pairs of natural language instructions and their corresponding Solidity code implementations. This dataset includes over 6,000 input and output pairs.
### Training Procedure
#### Training Hyperparameters
* Number of train epochs: 6
* Double quant: true
* Quant type: nf4
* Bits: 4
* Lora r: 64
* Lora alpha: 16
* Lora dropout: 0.0
* Per device train batch size: 1
* Gradient accumulation steps: 1
* Max steps: 2000
* Weight decay: 0.0 #use lora dropout instead for regularization if needed
* Learning rate: 2e-4
* Max gradient normal: 0.3
* Gradient checkpointing: true
* FP16: false
* FP16 option level: O1
* BF16: false
* Optimizer: paged AdamW 32-bit
* Learning rate scheduler type: constant
* Warmup ratio: 0.03
#### Speeds, Sizes, Times [optional]
!image/png
### Results
#### Summary
Initial evaluations show promising results in generating Solidity code. More work is required to understand the effectiveness of the input prompt and max\_length for training and inference.
Environmental Impact
--------------------
* Hardware Type: NVIDIA A100 • 80GB (80 GB VRAM) • 117 GB RAM • 8 vCPU
* Hours used: 8
* Cloud Provider: URL
* Compute Region:
* Carbon Emitted:
[optional]
BibTeX:
APA:
Starcoder2-15B: BigCode Team. (2024). Starcoder2-15B [Software]. Available from URL
AlfredPros Smart Contracts Instructions: AlfredPros. (2023). Smart Contracts Instructions [Data set]. Available from URL
Model Card Contact
------------------
jonny@URL
| [
"### Model Description\n\n\nThis model has been fine-tuned using QLORA to generate Solidity smart contracts from natural language instructions. Training was done using the same dataset used to train URL This should allow for a direct comparison between the two models. The 'starcoder2-15b' base model benefits from having being trained on a lot more code that also included solidity code for reference.\n\n\n* Developed by: @yoniebans\n* Model type: Transformer-based Causal Language Model\n* Language(s) (NLP): English with programming language syntax (Solidity)\n* License:\n* Finetuned from model bigcode/starcoder2-15b",
"### Model Sources [optional]\n\n\n* Repository:\n* Demo [optional]:\n\n\nUses\n----",
"### Direct Use\n\n\nThis model was created to demonstrate how fine-tuning a base model such as starcoder2-15b using qlora can increase the model's ability to create Solidity code.",
"### Out-of-Scope Use\n\n\nThis model is not intended for:\n\n\n* Deployment in production systems without rigorous testing.\n* Use in non-technical text generation or any context outside smart contract development.\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nThe training data consists of code from publicly sourced Solidity projects which may not encompass the full diversity of programming styles and techniques. The Solidity source code originates from mwritescode's Slither Audited Smart Contracts (URL",
"### Recommendations\n\n\nUsers are advised to use this model as a starting point for development and not as a definitive solution. Generated code should always be reviewed by experienced developers to ensure security and functionality.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nUse the code below to get started with the model.\n\n\nTraining Details\n----------------",
"### Training Data\n\n\nThe model was trained on a dataset consisting of pairs of natural language instructions and their corresponding Solidity code implementations. This dataset includes over 6000 input and outputs.",
"### Training Procedure",
"#### Training Hyperparameters\n\n\n* Number of train epochs: 6\n* Double quant: true\n* Quant type: nf4\n* Bits: 4\n* Lora r: 64\n* Lora aplha: 16\n* Lora dropout: 0.0\n* Per device train batch size: 1\n* Gradient accumulation steps: 1\n* Max steps: 2000\n* Weight decay: 0.0 #use lora dropout instead for regularization if needed\n* Learning rate: 2e-4\n* Max gradient normal: 0.3\n* Gradient checkpointing: true\n* FP16: false\n* FP16 option level: O1\n* BF16: false\n* Optimizer: paged AdamW 32-bit\n* Learning rate scheduler type: constant\n* Warmup ratio: 0.03",
"#### Speeds, Sizes, Times [optional]\n\n\n\n!image/png",
"### Results",
"#### Summary\n\n\nInitial evaluations show promising results in generating Solidity code. More work required to understand effectiveness of input promt and max\\_length for training and inference.\n\n\nEnvironmental Impact\n--------------------\n\n\n* Hardware Type: NVIDIA A100 • 80GB (80 GB VRAM) • 117 GB RAM • 8 vCPU\n* Hours used: 8\n* Cloud Provider: URL\n* Compute Region:\n* Carbon Emitted:\n\n\n[optional]\n\n\nBibTeX:\n\n\nAPA:\n\n\nStarcoder2-15B: BigCode Team. (2024). Starcoder2-15B [Software]. Available from URL\n\n\nAlfredPros Smart Contracts Instructions: AlfredPros. (2023). Smart Contracts Instructions [Data set]. Available from URL\n\n\nModel Card Contact\n------------------\n\n\njonny@URL"
] | [
"TAGS\n#transformers #safetensors #en #dataset-AlfredPros/smart-contracts-instructions #arxiv-2402.19173 #license-bigcode-openrail-m #endpoints_compatible #region-us \n",
"### Model Description\n\n\nThis model has been fine-tuned using QLORA to generate Solidity smart contracts from natural language instructions. Training was done using the same dataset used to train URL This should allow for a direct comparison between the two models. The 'starcoder2-15b' base model benefits from having being trained on a lot more code that also included solidity code for reference.\n\n\n* Developed by: @yoniebans\n* Model type: Transformer-based Causal Language Model\n* Language(s) (NLP): English with programming language syntax (Solidity)\n* License:\n* Finetuned from model bigcode/starcoder2-15b",
"### Model Sources [optional]\n\n\n* Repository:\n* Demo [optional]:\n\n\nUses\n----",
"### Direct Use\n\n\nThis model was created to demonstrate how fine-tuning a base model such as starcoder2-15b using qlora can increase the model's ability to create Solidity code.",
"### Out-of-Scope Use\n\n\nThis model is not intended for:\n\n\n* Deployment in production systems without rigorous testing.\n* Use in non-technical text generation or any context outside smart contract development.\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nThe training data consists of code from publicly sourced Solidity projects which may not encompass the full diversity of programming styles and techniques. The Solidity source code originates from mwritescode's Slither Audited Smart Contracts (URL",
"### Recommendations\n\n\nUsers are advised to use this model as a starting point for development and not as a definitive solution. Generated code should always be reviewed by experienced developers to ensure security and functionality.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nUse the code below to get started with the model.\n\n\nTraining Details\n----------------",
"### Training Data\n\n\nThe model was trained on a dataset consisting of pairs of natural language instructions and their corresponding Solidity code implementations. This dataset includes over 6000 input and outputs.",
"### Training Procedure",
"#### Training Hyperparameters\n\n\n* Number of train epochs: 6\n* Double quant: true\n* Quant type: nf4\n* Bits: 4\n* Lora r: 64\n* Lora aplha: 16\n* Lora dropout: 0.0\n* Per device train batch size: 1\n* Gradient accumulation steps: 1\n* Max steps: 2000\n* Weight decay: 0.0 #use lora dropout instead for regularization if needed\n* Learning rate: 2e-4\n* Max gradient normal: 0.3\n* Gradient checkpointing: true\n* FP16: false\n* FP16 option level: O1\n* BF16: false\n* Optimizer: paged AdamW 32-bit\n* Learning rate scheduler type: constant\n* Warmup ratio: 0.03",
"#### Speeds, Sizes, Times [optional]\n\n\n\n!image/png",
"### Results",
"#### Summary\n\n\nInitial evaluations show promising results in generating Solidity code. More work required to understand effectiveness of input promt and max\\_length for training and inference.\n\n\nEnvironmental Impact\n--------------------\n\n\n* Hardware Type: NVIDIA A100 • 80GB (80 GB VRAM) • 117 GB RAM • 8 vCPU\n* Hours used: 8\n* Cloud Provider: URL\n* Compute Region:\n* Carbon Emitted:\n\n\n[optional]\n\n\nBibTeX:\n\n\nAPA:\n\n\nStarcoder2-15B: BigCode Team. (2024). Starcoder2-15B [Software]. Available from URL\n\n\nAlfredPros Smart Contracts Instructions: AlfredPros. (2023). Smart Contracts Instructions [Data set]. Available from URL\n\n\nModel Card Contact\n------------------\n\n\njonny@URL"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ner-demo
This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1205
- Precision: 0.9307
- Recall: 0.9389
- F1: 0.9348
- Accuracy: 0.9816
## Model description
More information needed
## Intended uses & limitations
More information needed
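
A typical way to try the model is through the token-classification pipeline. The snippet below is a minimal sketch; the example sentence is an illustrative Mongolian input, and the label set depends on the (undocumented) training data.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Gausar/roberta-base-ner-demo",
    aggregation_strategy="simple",
)

for entity in ner("Улаанбаатар хотод болсон уулзалтад Д.Баярсайхан оролцов."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```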
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3889 | 1.0 | 477 | 0.0832 | 0.8808 | 0.8987 | 0.8897 | 0.9743 |
| 0.0736 | 2.0 | 954 | 0.0703 | 0.9170 | 0.9226 | 0.9198 | 0.9796 |
| 0.0361 | 3.0 | 1431 | 0.0784 | 0.9227 | 0.9321 | 0.9274 | 0.9801 |
| 0.0216 | 4.0 | 1908 | 0.0863 | 0.9235 | 0.9328 | 0.9281 | 0.9801 |
| 0.0116 | 5.0 | 2385 | 0.0977 | 0.9292 | 0.9371 | 0.9332 | 0.9809 |
| 0.007 | 6.0 | 2862 | 0.1071 | 0.9270 | 0.9356 | 0.9313 | 0.9808 |
| 0.0046 | 7.0 | 3339 | 0.1123 | 0.9322 | 0.9378 | 0.9350 | 0.9818 |
| 0.0029 | 8.0 | 3816 | 0.1179 | 0.9310 | 0.9371 | 0.9340 | 0.9814 |
| 0.0021 | 9.0 | 4293 | 0.1187 | 0.9293 | 0.9375 | 0.9334 | 0.9812 |
| 0.0013 | 10.0 | 4770 | 0.1205 | 0.9307 | 0.9389 | 0.9348 | 0.9816 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"language": ["mn"], "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bayartsogt/mongolian-roberta-base", "model-index": [{"name": "roberta-base-ner-demo", "results": []}]} | Gausar/roberta-base-ner-demo | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"mn",
"base_model:bayartsogt/mongolian-roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T16:06:04+00:00 | [] | [
"mn"
] | TAGS
#transformers #tensorboard #safetensors #roberta #token-classification #generated_from_trainer #mn #base_model-bayartsogt/mongolian-roberta-base #autotrain_compatible #endpoints_compatible #region-us
| roberta-base-ner-demo
=====================
This model is a fine-tuned version of bayartsogt/mongolian-roberta-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1205
* Precision: 0.9307
* Recall: 0.9389
* F1: 0.9348
* Accuracy: 0.9816
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #token-classification #generated_from_trainer #mn #base_model-bayartsogt/mongolian-roberta-base #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# WizardLM-2-8x22B - EXL2 3.25bpw
This is a 3.25bpw EXL2 quant of [microsoft/WizardLM-2-8x22B](https://huggingface.co/microsoft/WizardLM-2-8x22B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
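
Outside of Text Generation WebUI, the quant can also be loaded directly with the exllamav2 Python API. The snippet below is a rough sketch based on the library's standard example code around version 0.0.18; the model path and sampling settings are placeholders.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/WizardLM-2-8x22B_exl2_3.25bpw"  # local download of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)           # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

print(generator.generate_simple("Write a haiku about quantization.", settings, 64))
```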
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 7.0 | 4.5859 |
| 6.0 | 4.6252 |
| 5.5 | 4.6493 |
| 5.0 | 4.6937 |
| 4.5 | 4.8029 |
| 4.0 | 4.9372 |
| 3.5 | 5.1336 |
| 3.25 | 5.3636 |
| 3.0 | 5.5468 |
| 2.75 | 5.8255 |
| 2.5 | 6.3362 |
| 2.25 | 7.7763 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
DATA_SET=/root/wikitext/wikitext-2-v1.parquet
# Set the model name and bit size
MODEL_NAME="WizardLM-2-8x22B"
BIT_PRECISIONS=(6.0 5.5 5.0 4.5 4.0 3.5 3.25 3.0 2.75 2.5 2.25)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
LOCAL_FOLDER="/root/models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
REMOTE_FOLDER="Dracones/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ ! -d "$LOCAL_FOLDER" ]; then
huggingface-cli download --local-dir-use-symlinks=False --local-dir "${LOCAL_FOLDER}" "${REMOTE_FOLDER}" >> /root/download.log 2>&1
fi
output=$(python test_inference.py -m "$LOCAL_FOLDER" -gs 40,40,40,40 -ed "$DATA_SET")
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
# rm -rf "${LOCAL_FOLDER}"
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="WizardLM-2-8x22B"
# Define variables
MODEL_DIR="/mnt/storage/models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["exl2"], "base_model": "microsoft/WizardLM-2-8x22B"} | Dracones/WizardLM-2-8x22B_exl2_3.25bpw | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"exl2",
"en",
"base_model:microsoft/WizardLM-2-8x22B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T16:07:03+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| WizardLM-2-8x22B - EXL2 3.25bpw
===============================
This is a 3.25bpw EXL2 quant of microsoft/WizardLM-2-8x22B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Ecommerce Product Description Translation Model (Simplified Chinese to English)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
A fine-tuned version of mistralai/Mistral-7B-v0.1 for Mandarin-to-English e-commerce product translations.
- **Developed by:** [Noah Statton]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [Adapter for Mistral-7B-v0.1]
- **Language(s) (NLP):** [English, 中文(简体)]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [Mistral-7B-v0.1]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
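
Until the official snippet is filled in, the adapter can presumably be loaded on top of the base model with `peft` as sketched below. The prompt template is an assumption (the exact format used during fine-tuning is not documented), and the example sentence mirrors the widget example in the model metadata.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "INoahGuy77/ProdFanYi"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id).eval()

# Prompt format is an assumption; match whatever template was used during fine-tuning.
prompt = (
    "Translate the following product description from Simplified Chinese to English:\n"
    "高品质的皮革结构在坚韧的凸耳鞋底上打造出一款结实的靴子\n"
    "English:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```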
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"language": ["en", "zh"], "license": "mit", "library_name": "peft", "datasets": ["CodeJesus77/FashionMMT-Translations"], "metrics": ["bleu"], "base_model": "mistralai/Mistral-7B-v0.1", "widget": [{"messages": [{"role": "user", "content": "\u9ad8\u54c1\u8d28\u7684\u76ae\u9769\u7ed3\u6784\u5728\u575a\u97e7\u7684\u51f8\u8033\u978b\u5e95\u4e0a\u6253\u9020\u51fa\u4e00\u6b3e\u7ed3\u5b9e\u7684\u9774\u5b50"}]}], "pipeline_tag": "text-generation"} | INoahGuy77/ProdFanYi | null | [
"peft",
"safetensors",
"mistral",
"text-generation",
"en",
"zh",
"dataset:CodeJesus77/FashionMMT-Translations",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"license:mit",
"region:us"
] | null | 2024-04-16T16:10:13+00:00 | [
"1910.09700"
] | [
"en",
"zh"
] | TAGS
#peft #safetensors #mistral #text-generation #en #zh #dataset-CodeJesus77/FashionMMT-Translations #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #license-mit #region-us
|
# Model Card for Model ID
Ecommerce Product Description Translation Model (Simplified Chinese to English)
## Model Details
### Model Description
A fine-tuned version of mistralai/Mistral-7B-v0.1 for Mandarin-to-English e-commerce product translations.
- Developed by: [Noah Statton]
- Funded by [optional]:
- Shared by [optional]:
- Model type: [Adapter for Mistral-7B-v0.1]
- Language(s) (NLP): [English, 中文(简体)]
- License:
- Finetuned from model [optional]: [Mistral-7B-v0.1]
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
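As a starting point, the minimal sketch below may help. It assumes that this repository (`INoahGuy77/ProdFanYi`) can be loaded as a PEFT adapter on top of the `mistralai/Mistral-7B-v0.1` base model named in this card, and it reuses the prompt from the widget example in the card metadata; the exact prompt format used during fine-tuning is not documented here.

```python
# Minimal sketch (not an official snippet): load the base model, attach this
# repository as a PEFT adapter, and translate one product description.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"   # base model named in this card
adapter_id = "INoahGuy77/ProdFanYi"     # this repository (assumed adapter weights)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Example input taken from the widget in this card's metadata.
prompt = "高品质的皮革结构在坚韧的凸耳鞋底上打造出一款结实的靴子"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```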
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID\n\n\nEcommerce Product Description Translation Model (Simplified Chinese to English)",
"## Model Details",
"### Model Description\n\n\n\nA finetuned version of mistralai/Mistral-7B-v0.1 for mandarin to english e-commerce product translations. \n\n- Developed by: [Noah Statton]\n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: [Adapter for Mistral-7B-v0.1]\n- Language(s) (NLP): [English, 中文(简体)]\n- License: \n- Finetuned from model [optional]: [Mistral-7B-v0.1]",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #mistral #text-generation #en #zh #dataset-CodeJesus77/FashionMMT-Translations #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #license-mit #region-us \n",
"# Model Card for Model ID\n\n\nEcommerce Product Description Translation Model (Simplified Chinese to English)",
"## Model Details",
"### Model Description\n\n\n\nA finetuned version of mistralai/Mistral-7B-v0.1 for mandarin to english e-commerce product translations. \n\n- Developed by: [Noah Statton]\n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: [Adapter for Mistral-7B-v0.1]\n- Language(s) (NLP): [English, 中文(简体)]\n- License: \n- Finetuned from model [optional]: [Mistral-7B-v0.1]",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-generation | transformers |
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` | {"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]} | Skieriya/winners | null | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T16:14:08+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #llama #text-generation #autotrain #text-generation-inference #peft #conversational #license-other #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit AutoTrain.
# Usage
| [
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] | [
"TAGS\n#transformers #pytorch #safetensors #llama #text-generation #autotrain #text-generation-inference #peft #conversational #license-other #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] |
text-to-audio | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5027
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
- mixed_precision_training: Native AMP
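For reference, the sketch below shows how these values would map onto `Seq2SeqTrainingArguments`; it is only an illustration, not the exact script used to produce this checkpoint.

```python
# Illustrative only: the hyperparameters above expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_voxpopuli_nl",
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # effective batch size of 32
    warmup_steps=500,
    max_steps=500,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # mixed precision (native AMP)
)
```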
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7673 | 0.62 | 100 | 0.6592 |
| 0.6772 | 1.25 | 200 | 0.5996 |
| 0.6158 | 1.87 | 300 | 0.5422 |
| 0.5686 | 2.5 | 400 | 0.5117 |
| 0.551 | 3.12 | 500 | 0.5027 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["voxpopuli"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "speecht5_finetuned_voxpopuli_nl", "results": []}]} | snivi1411/speecht5_finetuned_voxpopuli_nl | null | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T16:16:10+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #dataset-voxpopuli #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us
| speecht5\_finetuned\_voxpopuli\_nl
==================================
This model is a fine-tuned version of microsoft/speecht5\_tts on the voxpopuli dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5027
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 4
* eval\_batch\_size: 2
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 500
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 500\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #dataset-voxpopuli #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 500\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Wiz2Beagle-7b-v1
Hey there! 👋 Welcome to the Wiz2Beagle-7b-v1! This is a merge of multiple models brought together using the awesome [VortexMerge kit](https://colab.research.google.com/drive/1YjcvCLuNG1PK7Le6_4xhVU5VpzTwvGhk#scrollTo=UG5H2TK4gVyl).
Let's see what we've got in this merge:
* [amazingvince/Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B) 🚀
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) 🚀
## 🧩 Configuration
```yaml
models:
- model: amazingvince/Not-WizardLM-2-7B
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: mlabonne/NeuralBeagle14-7B
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
merge_method: ties
base_model: amazingvince/Not-WizardLM-2-7B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
| {"license": "apache-2.0", "tags": ["merge", "mergekit", "vortexmergekit", "amazingvince/Not-WizardLM-2-7B", "mlabonne/NeuralBeagle14-7B"]} | eldogbbhed/Wiz2Beagle-7b-v1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"vortexmergekit",
"amazingvince/Not-WizardLM-2-7B",
"mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T16:16:32+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #vortexmergekit #amazingvince/Not-WizardLM-2-7B #mlabonne/NeuralBeagle14-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Wiz2Beagle-7b-v1
Hey there! Welcome to the Wiz2Beagle-7b-v1! This is a merge of multiple models brought together using the awesome VortexMerge kit.
Let's see what we've got in this merge:
* amazingvince/Not-WizardLM-2-7B
* mlabonne/NeuralBeagle14-7B
## Configuration
'''yaml
models:
- model: amazingvince/Not-WizardLM-2-7B
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: mlabonne/NeuralBeagle14-7B
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
merge_method: ties
base_model: amazingvince/Not-WizardLM-2-7B
parameters:
normalize: true
int8_mask: true
dtype: float16
| [
"# Wiz2Beagle-7b-v1\n\nHey there! Welcome to the Wiz2Beagle-7b-v1! This is a merge of multiple models brought together using the awesome VortexMerge kit.\n\nLet's see what we've got in this merge:\n* amazingvince/Not-WizardLM-2-7B \n* mlabonne/NeuralBeagle14-7B",
"## Configuration\n\n'''yaml\nmodels:\n - model: amazingvince/Not-WizardLM-2-7B\n parameters:\n density: [1, 0.7, 0.1] # density gradient\n weight: 1.0\n - model: mlabonne/NeuralBeagle14-7B\n parameters:\n density: 0.5\n weight: [0, 0.3, 0.7, 1] # weight gradient\nmerge_method: ties\nbase_model: amazingvince/Not-WizardLM-2-7B\nparameters:\n normalize: true\n int8_mask: true\ndtype: float16"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #vortexmergekit #amazingvince/Not-WizardLM-2-7B #mlabonne/NeuralBeagle14-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Wiz2Beagle-7b-v1\n\nHey there! Welcome to the Wiz2Beagle-7b-v1! This is a merge of multiple models brought together using the awesome VortexMerge kit.\n\nLet's see what we've got in this merge:\n* amazingvince/Not-WizardLM-2-7B \n* mlabonne/NeuralBeagle14-7B",
"## Configuration\n\n'''yaml\nmodels:\n - model: amazingvince/Not-WizardLM-2-7B\n parameters:\n density: [1, 0.7, 0.1] # density gradient\n weight: 1.0\n - model: mlabonne/NeuralBeagle14-7B\n parameters:\n density: 0.5\n weight: [0, 0.3, 0.7, 1] # weight gradient\nmerge_method: ties\nbase_model: amazingvince/Not-WizardLM-2-7B\nparameters:\n normalize: true\n int8_mask: true\ndtype: float16"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-23-layer](https://huggingface.co/Citaman/command-r-23-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-23-layer
layer_range: [0, 22]
- model: Citaman/command-r-23-layer
layer_range: [1, 23]
merge_method: slerp
base_model: Citaman/command-r-23-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
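For intuition, the snippet below sketches the idea behind SLERP (spherical linear interpolation) applied to two flattened weight vectors. It is a conceptual illustration only; mergekit's actual implementation handles edge cases and per-tensor details differently, and the interpolation factor corresponds to the `t` values in the configuration above.

```python
# Conceptual SLERP between two 1-D weight vectors (illustration, not mergekit's code).
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))  # angle between vectors
    if omega < eps:                      # nearly parallel -> fall back to linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# t = 0 keeps the first model's weights, t = 1 keeps the second's, 0.5 blends them evenly.
merged = slerp(0.5, np.random.randn(16), np.random.randn(16))
```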
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-23-layer"]} | Citaman/command-r-22-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-23-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T16:17:36+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-23-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-23-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-23-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-23-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-23-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_hh_usp1_dpo5
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6912
- Rewards/chosen: -8.0278
- Rewards/rejected: -12.3890
- Rewards/accuracies: 0.7100
- Rewards/margins: 4.3613
- Logps/rejected: -138.5718
- Logps/chosen: -128.7695
- Logits/rejected: -0.7832
- Logits/chosen: -0.7072
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
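The tags on this card (peft, trl, dpo) suggest a TRL `DPOTrainer` run with a LoRA adapter on the Llama-2-7b-chat base model. The sketch below is only a rough reconstruction under that assumption: the preference data is a hypothetical placeholder, and the exact `DPOTrainer` keyword arguments vary between TRL versions.

```python
# Rough, hypothetical reconstruction of a PEFT + TRL DPO setup matching the
# hyperparameters above; NOT the actual training script for this checkpoint.
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "meta-llama/Llama-2-7b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token   # Llama tokenizers ship without a pad token

# Placeholder preference data in the prompt/chosen/rejected format DPO expects.
preference_dataset = Dataset.from_dict({
    "prompt": ["How do I stay safe online?"],
    "chosen": ["Use strong, unique passwords and enable two-factor authentication."],
    "rejected": ["Just reuse the same password everywhere."],
})

args = TrainingArguments(
    output_dir="model_hh_usp1_dpo5",
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    warmup_steps=100,
    max_steps=1000,
    lr_scheduler_type="cosine",
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                      # TRL handles the reference model when using PEFT
    args=args,
    train_dataset=preference_dataset,
    tokenizer=tokenizer,                 # older TRL versions; newer ones use processing_class
    peft_config=LoraConfig(task_type="CAUSAL_LM"),
)
trainer.train()
```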
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.1062 | 2.67 | 100 | 0.9731 | -1.3173 | -2.6971 | 0.6900 | 1.3798 | -119.1878 | -115.3486 | -0.0430 | -0.0352 |
| 0.0053 | 5.33 | 200 | 1.4158 | -3.1407 | -7.2838 | 0.6700 | 4.1431 | -128.3614 | -118.9955 | -0.6209 | -0.5447 |
| 0.0359 | 8.0 | 300 | 1.6639 | -5.1494 | -9.1017 | 0.6900 | 3.9524 | -131.9972 | -123.0127 | -0.7498 | -0.6842 |
| 0.0 | 10.67 | 400 | 1.6816 | -7.9965 | -12.3589 | 0.7000 | 4.3624 | -138.5115 | -128.7070 | -0.7830 | -0.7067 |
| 0.0 | 13.33 | 500 | 1.6979 | -8.0167 | -12.3847 | 0.7000 | 4.3680 | -138.5631 | -128.7474 | -0.7829 | -0.7068 |
| 0.0 | 16.0 | 600 | 1.6968 | -8.0295 | -12.3907 | 0.7000 | 4.3611 | -138.5751 | -128.7731 | -0.7829 | -0.7066 |
| 0.0 | 18.67 | 700 | 1.6993 | -8.0304 | -12.3986 | 0.7000 | 4.3682 | -138.5910 | -128.7749 | -0.7833 | -0.7066 |
| 0.0 | 21.33 | 800 | 1.6953 | -8.0311 | -12.4055 | 0.7000 | 4.3743 | -138.6046 | -128.7763 | -0.7835 | -0.7067 |
| 0.0 | 24.0 | 900 | 1.6919 | -8.0283 | -12.4122 | 0.7100 | 4.3839 | -138.6181 | -128.7706 | -0.7836 | -0.7075 |
| 0.0 | 26.67 | 1000 | 1.6912 | -8.0278 | -12.3890 | 0.7100 | 4.3613 | -138.5718 | -128.7695 | -0.7832 | -0.7072 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_usp1_dpo5", "results": []}]} | guoyu-zhang/model_hh_usp1_dpo5 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-16T16:19:43+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
| model\_hh\_usp1\_dpo5
=====================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6912
* Rewards/chosen: -8.0278
* Rewards/rejected: -12.3890
* Rewards/accuracies: 0.7100
* Rewards/margins: 4.3613
* Logps/rejected: -138.5718
* Logps/chosen: -128.7695
* Logits/rejected: -0.7832
* Logits/chosen: -0.7072
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jspetrisko/mistral-7b-instruct-new-sql | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T16:22:40+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session5
This model is a fine-tuned version of [nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session4](https://huggingface.co/nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session4) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300
- mixed_precision_training: Native AMP
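Since the checkpoint is a DETR variant, inference can be illustrated as below; the repo id and input image are assumptions, and the label set depends on the (undocumented) fine-tuning dataset.

```python
# Illustrative inference sketch for this fine-tuned DETR checkpoint (details assumed).
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

repo_id = "nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session5"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = DetrForObjectDetection.from_pretrained(repo_id)

image = Image.open("document_page.png").convert("RGB")    # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])           # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```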
### Framework versions
- Transformers 4.39.3
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session4", "model-index": [{"name": "detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session5", "results": []}]} | nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session5 | null | [
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session4",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T16:23:06+00:00 | [] | [] | TAGS
#transformers #safetensors #detr #object-detection #generated_from_trainer #base_model-nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session4 #endpoints_compatible #region-us
|
# detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session5
This model is a fine-tuned version of nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session4 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.39.3
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session5\n\nThis model is a fine-tuned version of nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session4 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 300\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.0.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #detr #object-detection #generated_from_trainer #base_model-nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session4 #endpoints_compatible #region-us \n",
"# detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session5\n\nThis model is a fine-tuned version of nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session4 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 300\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.0.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-22-layer](https://huggingface.co/Citaman/command-r-22-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-22-layer
layer_range: [0, 21]
- model: Citaman/command-r-22-layer
layer_range: [1, 22]
merge_method: slerp
base_model: Citaman/command-r-22-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-22-layer"]} | Citaman/command-r-21-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-22-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T16:25:14+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-22-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-22-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-22-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-22-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-22-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0415B1
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7238 | 0.09 | 10 | 2.4835 |
| 2.0622 | 0.18 | 20 | 1.3199 |
| 0.7393 | 0.27 | 30 | 0.1436 |
| 0.1385 | 0.36 | 40 | 0.0987 |
| 0.1079 | 0.45 | 50 | 0.0873 |
| 0.0941 | 0.54 | 60 | 0.0785 |
| 0.0829 | 0.63 | 70 | 0.0737 |
| 0.0772 | 0.73 | 80 | 0.0689 |
| 0.0742 | 0.82 | 90 | 0.0664 |
| 0.0774 | 0.91 | 100 | 0.0658 |
| 0.0758 | 1.0 | 110 | 0.0652 |
| 0.0728 | 1.09 | 120 | 0.0640 |
| 0.0709 | 1.18 | 130 | 0.0633 |
| 0.068 | 1.27 | 140 | 0.0628 |
| 0.0669 | 1.36 | 150 | 0.0609 |
| 0.0684 | 1.45 | 160 | 0.0605 |
| 0.0675 | 1.54 | 170 | 0.0602 |
| 0.0668 | 1.63 | 180 | 0.0590 |
| 0.0635 | 1.72 | 190 | 0.0615 |
| 0.0671 | 1.81 | 200 | 0.0584 |
| 0.0578 | 1.9 | 210 | 0.0588 |
| 0.0585 | 1.99 | 220 | 0.0601 |
| 0.0538 | 2.08 | 230 | 0.0589 |
| 0.0586 | 2.18 | 240 | 0.0591 |
| 0.0552 | 2.27 | 250 | 0.0583 |
| 0.0563 | 2.36 | 260 | 0.0586 |
| 0.0545 | 2.45 | 270 | 0.0586 |
| 0.0514 | 2.54 | 280 | 0.0590 |
| 0.0525 | 2.63 | 290 | 0.0592 |
| 0.0549 | 2.72 | 300 | 0.0592 |
| 0.0605 | 2.81 | 310 | 0.0588 |
| 0.0515 | 2.9 | 320 | 0.0588 |
| 0.0564 | 2.99 | 330 | 0.0588 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0415B1", "results": []}]} | Litzy619/V0415B1 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-16T16:26:46+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
| V0415B1
=======
This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0588
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 60
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
text-generation | transformers |
## 💫 Community Model> wavecoder-ultra-6.7b by Microsoft
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Microsoft](https://huggingface.co/microsoft)<br>
**Original model**: [wavecoder-ultra-6.7b](https://huggingface.co/microsoft/wavecoder-ultra-6.7b)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2675](https://github.com/ggerganov/llama.cpp/releases/tag/b2675)<br>
## Model Summary:
WaveCoder ultra is a coding model created with 'Widespread And Versatile Enhanced' instruction tuning. It has exceptional generalization ability across different code-related tasks and has a high efficiency in generation.<br>
This model should be used exclusively for coding, and will follow instructions for code generation.
## Prompt Template:
Choose the `Alpaca` preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
```
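If you are running the GGUF file outside of LM Studio, a small helper like the one below (an illustration, not an official utility) can build the same Alpaca-style prompt; exact blank-line spacing is generally not critical for the runtime you use.

```python
# Helper for building the Alpaca-style prompt shown above (illustration only).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction: {prompt}\n\n"
    "### Response:"
)

def build_prompt(prompt: str) -> str:
    return ALPACA_TEMPLATE.format(prompt=prompt)

print(build_prompt("Write a Python function that reverses a linked list."))
```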
## Use case and examples
WaveCoder ultra is fine tuned for code-related instruction following tasks, including code generation, summarization, repair, and translation.
## Code Generation

## Code Summarization

## Code Repair

## Code Translation

## Technical Details
The WaveCoder series of models is the result of 'Widespread And Versatile Enhanced' (WAVE) instruction tuning with a highly refined dataset.
Their 'CodeOcean' consists of 20,000 instruction instances across the 4 code-related tasks (generation, summarization, repair, translation) with instructions generated by GPT-3.5-turbo.
To create this dataset, the team used existing raw code from GitHub CodeSearchNet, filtering for quality and diversity, then used a 'novel LLM-based Generator-Discriminator Framework' which involves generating supervised instruction data from the unsupervised open source code.
For further details and benchmarks, check out their arXiv paper [here](https://arxiv.org/abs/2312.14187)
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for these quants, which improves the overall quality!
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
| {"license": "other", "library_name": "transformers", "tags": ["code"], "datasets": ["humaneval"], "metrics": ["code_eval"], "license_name": "deepseek", "pipeline_tag": "text-generation", "quantized_by": "bartowski", "lm_studio": {"param_count": "6.7b", "use_case": "coding", "release_date": "15-04-2024", "model_creator": "microsoft", "prompt_template": "alpaca", "system_prompt": "Below is an instruction that describes a task. Write a response that appropriately completes the request.", "base_model": "DeepseekCoder", "original_repo": "microsoft/wavecoder-ultra-6.7b"}} | lmstudio-community/wavecoder-ultra-6.7b-GGUF | null | [
"transformers",
"gguf",
"code",
"text-generation",
"dataset:humaneval",
"arxiv:2312.14187",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T16:26:54+00:00 | [
"2312.14187"
] | [] | TAGS
#transformers #gguf #code #text-generation #dataset-humaneval #arxiv-2312.14187 #license-other #endpoints_compatible #region-us
|
## Community Model> wavecoder-ultra-6.7b by Microsoft
* LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord*.
Model creator: Microsoft<br>
Original model: wavecoder-ultra-6.7b<br>
GGUF quantization: provided by bartowski based on 'URL' release b2675<br>
## Model Summary:
WaveCoder ultra is a coding model created with 'Widespread And Versatile Enhanced' instruction tuning. It has exceptional generalization ability across different code-related tasks and has a high efficiency in generation.<br>
This model should be used exclusively for coding, and will follow instructions for code generation.
## Prompt Template:
Choose the 'Alpaca' preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
## Use case and examples
WaveCoder ultra is fine tuned for code-related instruction following tasks, including code generation, summarization, repair, and translation.
## Code Generation
!image/png
## Code Summarization
!image/png
## Code Repair
!image/png
## Code Translation
!image/png
## Technical Details
The WaveCoder series of models is the result of 'Widespread And Versatile Enhanced' (WAVE) instruction tuning with a highly refined dataset.
Their 'CodeOcean' consists of 20,000 instruction instances across the 4 code-related tasks (generation, summarization, repair, translation) with instructions generated by GPT-3.5-turbo.
To create this dataset, the team used existing raw code from GitHub CodeSearchNet, filtering for quality and diversity, then used a 'novel LLM-based Generator-Discriminator Framework' which involves generating supervised instruction data from the unsupervised open source code.
For further details and benchmarks, check out their arXiv paper here
## Special thanks
Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible.
Special thanks to Kalomaze for his dataset (linked here) that was used for calculating the imatrix for these quants, which improves the overall quality!
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
| [
"## Community Model> wavecoder-ultra-6.7b by Microsoft\n\n* LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord*.\n\nModel creator: Microsoft<br>\nOriginal model: wavecoder-ultra-6.7b<br>\nGGUF quantization: provided by bartowski based on 'URL' release b2675<br>",
"## Model Summary:\n\nWaveCoder ultra is a coding model created with 'Widepread And Versatile Enhanced' instruction tuning. It has exceptional generalization ability across different code-related tasks and has a high efficiency in generation.<br>\nThis model should be used exclusively for coding, and will follow instructions for code generation.",
"## Prompt Template:\n\nChoose the 'Alpaca' preset in your LM Studio.\n\nUnder the hood, the model will see a prompt that's formatted like so:",
"## Use case and examples\n\nWaveCoder ultra is fine tuned for code-related instruction following tasks, including code generation, summarization, repair, and translation.",
"## Code Generation\n\n!image/png",
"## Code Summarization\n\n!image/png",
"## Code Repair\n\n!image/png",
"## Code Translation\n\n!image/png",
"## Technical Details\n\nThe WaveCoder series of models is the result of a 'Widespread And Versatile Enchanced' (WAVE) instruction tuning with a highly refined dataset.\n\nTheir 'CodeOcean' consists of 20,000 instruction instances across the 4 code-related tasks (generation, summarization, repair, translation) with instructions generated by GPT-3.5-turbo.\n\nTo create this dataset, the team used existing raw code from GitHub CodeSearchNet, filtering for quality and diversity, then used a 'novel LLM-based Generator-Discriminator Framework' which involves generating supervised instruction data from the unsupervised open source code.\n\nFor further details and benchmarks, check out their arXiv paper here",
"## Special thanks\n\n Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible.\n\n Special thanks to Kalomaze for his dataset (linked here) that was used for calculating the imatrix for these quants, which improves the overall quality!",
"## Disclaimers\n\nLM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio."
] | [
"TAGS\n#transformers #gguf #code #text-generation #dataset-humaneval #arxiv-2312.14187 #license-other #endpoints_compatible #region-us \n",
"## Community Model> wavecoder-ultra-6.7b by Microsoft\n\n* LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord*.\n\nModel creator: Microsoft<br>\nOriginal model: wavecoder-ultra-6.7b<br>\nGGUF quantization: provided by bartowski based on 'URL' release b2675<br>",
"## Model Summary:\n\nWaveCoder ultra is a coding model created with 'Widepread And Versatile Enhanced' instruction tuning. It has exceptional generalization ability across different code-related tasks and has a high efficiency in generation.<br>\nThis model should be used exclusively for coding, and will follow instructions for code generation.",
"## Prompt Template:\n\nChoose the 'Alpaca' preset in your LM Studio.\n\nUnder the hood, the model will see a prompt that's formatted like so:",
"## Use case and examples\n\nWaveCoder ultra is fine tuned for code-related instruction following tasks, including code generation, summarization, repair, and translation.",
"## Code Generation\n\n!image/png",
"## Code Summarization\n\n!image/png",
"## Code Repair\n\n!image/png",
"## Code Translation\n\n!image/png",
"## Technical Details\n\nThe WaveCoder series of models is the result of a 'Widespread And Versatile Enchanced' (WAVE) instruction tuning with a highly refined dataset.\n\nTheir 'CodeOcean' consists of 20,000 instruction instances across the 4 code-related tasks (generation, summarization, repair, translation) with instructions generated by GPT-3.5-turbo.\n\nTo create this dataset, the team used existing raw code from GitHub CodeSearchNet, filtering for quality and diversity, then used a 'novel LLM-based Generator-Discriminator Framework' which involves generating supervised instruction data from the unsupervised open source code.\n\nFor further details and benchmarks, check out their arXiv paper here",
"## Special thanks\n\n Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible.\n\n Special thanks to Kalomaze for his dataset (linked here) that was used for calculating the imatrix for these quants, which improves the overall quality!",
"## Disclaimers\n\nLM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v2](https://huggingface.co/NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v2) on an unknown dataset.
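Since this repository contains a PEFT adapter (e.g. LoRA) rather than full weights, a typical way to use it is to load the base model and attach the adapter. A minimal sketch, assuming the adapter is published under this repo id (fergos80/outputs):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v2"
adapter_id = "fergos80/outputs"  # this repository (assumed adapter location)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the trained adapter

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```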
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 50
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v2", "model-index": [{"name": "outputs", "results": []}]} | fergos80/outputs | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T16:28:52+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v2 #license-apache-2.0 #region-us
|
# outputs
This model is a fine-tuned version of NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 50
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# outputs\n\nThis model is a fine-tuned version of NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 0.03\n- training_steps: 50",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v2 #license-apache-2.0 #region-us \n",
"# outputs\n\nThis model is a fine-tuned version of NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 0.03\n- training_steps: 50",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
Mistral-7b Finetuned for medical research paper summarization
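No usage details are given in this card; as an illustrative sketch (the prompt wording below is an assumption, not the format the model was necessarily trained on, and the 4-bit checkpoint may require bitsandbytes to load):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BiswajitPadhi99/mistral-7b-finetuned-medical-summarizer-old"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

abstract = "..."  # a medical research abstract to summarize (placeholder)
prompt = f"Summarize the following medical research abstract:\n\n{abstract}\n\nSummary:"  # assumed prompt format

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
summary_ids = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```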
| {"license": "mit"} | BiswajitPadhi99/mistral-7b-finetuned-medical-summarizer-old | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-16T16:29:25+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #4-bit #region-us
|
Mistral-7b Finetuned for medical research paper summarization
| [] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #4-bit #region-us \n"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Gordon119/TAT_TD-openai-whisper-large-v2-mix-with-zh-TAT-epoch4-total5epoch | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T16:29:57+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# autoregressive_finetune_split_rate2e-05_epochs5
This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3769
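For reference, a minimal way to sample from the fine-tuned checkpoint (repo id taken from this card's metadata; the prompt is a placeholder):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="katieguo/autoregressive_finetune_split_rate2e-05_epochs5",
)
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```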
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 3.6586 |
| No log | 2.0 | 18 | 3.5103 |
| No log | 3.0 | 27 | 3.4323 |
| No log | 4.0 | 36 | 3.3901 |
| No log | 5.0 | 45 | 3.3769 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilgpt2", "model-index": [{"name": "autoregressive_finetune_split_rate2e-05_epochs5", "results": []}]} | katieguo/autoregressive_finetune_split_rate2e-05_epochs5 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T16:31:18+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-distilbert/distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| autoregressive\_finetune\_split\_rate2e-05\_epochs5
===================================================
This model is a fine-tuned version of distilbert/distilgpt2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 3.3769
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-distilbert/distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4882
- Accuracy: 0.8333
- F1: 0.8836
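For paraphrase-style inference, the checkpoint can be queried through the text-classification pipeline on a sentence pair. A minimal sketch (since no label names are configured in this card, outputs will likely appear as generic LABEL_0/LABEL_1):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="jayspring/finetuned-bert-mrpc")

# MRPC-style input: a pair of sentences to judge as paraphrase / not paraphrase
result = clf({
    "text": "The company reported strong quarterly earnings.",
    "text_pair": "Quarterly earnings at the company were strong.",
})
print(result)  # e.g. {'label': 'LABEL_1', 'score': ...}
```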
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5915 | 1.0 | 230 | 0.4721 | 0.7990 | 0.8664 |
| 0.4183 | 2.0 | 460 | 0.3872 | 0.8358 | 0.8835 |
| 0.2397 | 3.0 | 690 | 0.4882 | 0.8333 | 0.8836 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "bert-base-cased", "model-index": [{"name": "finetuned-bert-mrpc", "results": []}]} | jayspring/finetuned-bert-mrpc | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T16:31:29+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| finetuned-bert-mrpc
===================
This model is a fine-tuned version of bert-base-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4882
* Accuracy: 0.8333
* F1: 0.8836
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model using a variety of publicly available conversation datasets.
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence (BOS) id; the following instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
File "", line 1, in
File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/transformers/models/auto/configuration_auto.py", line 723, in getitem
raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue
pip install git+https://github.com/huggingface/transformers
This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. | {"license": "apache-2.0", "tags": ["finetuned"], "pipeline_tag": "text-generation", "inference": true, "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}, {"role": "assistant", "content": "lkadjfljsd"}]}]} | INoahGuy77/mistralclone | null | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"conversational",
"arxiv:2310.06825",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T16:32:13+00:00 | [
"2310.06825"
] | [] | TAGS
#transformers #pytorch #safetensors #mistral #text-generation #finetuned #conversational #arxiv-2310.06825 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is a instruct fine-tuned version of the Mistral-7B-v0.1 generative text model using a variety of publicly available conversation datasets.
For full details of this model please read our paper and release blog post.
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
This format is available as a chat template via the 'apply_chat_template()' method:
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
Installing transformers from source should solve the issue
pip install git+URL
This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. | [
"# Model Card for Mistral-7B-Instruct-v0.1\n\nThe Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is a instruct fine-tuned version of the Mistral-7B-v0.1 generative text model using a variety of publicly available conversation datasets.\n\nFor full details of this model please read our paper and release blog post.",
"## Instruction format\n\nIn order to leverage instruction fine-tuning, your prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.\n\nE.g.\n\n\nThis format is available as a chat template via the 'apply_chat_template()' method:",
"## Model Architecture\nThis instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:\n- Grouped-Query Attention\n- Sliding-Window Attention\n- Byte-fallback BPE tokenizer",
"## Troubleshooting\n- If you see the following error:\n\n\nInstalling transformers from source should solve the issue\npip install git+URL\n\nThis should not be required after transformers-v4.33.4.",
"## Limitations\n\nThe Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. \nIt does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to\nmake the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.",
"## The Mistral AI Team\n\nAlbert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed."
] | [
"TAGS\n#transformers #pytorch #safetensors #mistral #text-generation #finetuned #conversational #arxiv-2310.06825 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Mistral-7B-Instruct-v0.1\n\nThe Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is a instruct fine-tuned version of the Mistral-7B-v0.1 generative text model using a variety of publicly available conversation datasets.\n\nFor full details of this model please read our paper and release blog post.",
"## Instruction format\n\nIn order to leverage instruction fine-tuning, your prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.\n\nE.g.\n\n\nThis format is available as a chat template via the 'apply_chat_template()' method:",
"## Model Architecture\nThis instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:\n- Grouped-Query Attention\n- Sliding-Window Attention\n- Byte-fallback BPE tokenizer",
"## Troubleshooting\n- If you see the following error:\n\n\nInstalling transformers from source should solve the issue\npip install git+URL\n\nThis should not be required after transformers-v4.33.4.",
"## Limitations\n\nThe Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. \nIt does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to\nmake the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.",
"## The Mistral AI Team\n\nAlbert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed."
] |
null | mlx |
# mlx-community/CodeQwen1.5-7B-4bit
This model was converted to MLX format from [`Qwen/CodeQwen1.5-7B`]() using mlx-lm version **0.9.0**.
Model added by [Prince Canuma](https://twitter.com/Prince_Canuma).
Refer to the [original model card](https://huggingface.co/Qwen/CodeQwen1.5-7B) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/CodeQwen1.5-7B-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"license": "apache-2.0", "tags": ["mlx"]} | mlx-community/CodeQwen1.5-7B-4bit | null | [
"mlx",
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T16:33:03+00:00 | [] | [] | TAGS
#mlx #safetensors #qwen2 #license-apache-2.0 #region-us
|
# mlx-community/CodeQwen1.5-7B-4bit
This model was converted to MLX format from ['Qwen/CodeQwen1.5-7B']() using mlx-lm version 0.9.0.
Model added by Prince Canuma.
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mlx-community/CodeQwen1.5-7B-4bit\nThis model was converted to MLX format from ['Qwen/CodeQwen1.5-7B']() using mlx-lm version 0.9.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #qwen2 #license-apache-2.0 #region-us \n",
"# mlx-community/CodeQwen1.5-7B-4bit\nThis model was converted to MLX format from ['Qwen/CodeQwen1.5-7B']() using mlx-lm version 0.9.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [apple/mobilevit-small](https://huggingface.co/apple/mobilevit-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9887
- Accuracy: 0.7500
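As a usage sketch (the image path is a placeholder), the fine-tuned checkpoint can be queried through the image-classification pipeline:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="JoshuaKelleyDs/doodle-MobileVIT-small-finetune",
)
predictions = classifier("doodle.png")  # path, URL, or PIL.Image; placeholder file name
print(predictions[:3])  # top predicted labels with scores
```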
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4531 | 0.28 | 5000 | 1.3950 | 0.6533 |
| 1.3205 | 0.57 | 10000 | 1.2791 | 0.6785 |
| 1.2523 | 0.85 | 15000 | 1.2087 | 0.6956 |
| 1.1858 | 1.14 | 20000 | 1.1690 | 0.7044 |
| 1.1584 | 1.42 | 25000 | 1.1461 | 0.7107 |
| 1.1443 | 1.71 | 30000 | 1.1134 | 0.7183 |
| 1.1234 | 1.99 | 35000 | 1.0900 | 0.7237 |
| 1.0848 | 2.28 | 40000 | 1.0722 | 0.7281 |
| 1.0775 | 2.56 | 45000 | 1.0635 | 0.7302 |
| 1.0602 | 2.84 | 50000 | 1.0437 | 0.7346 |
| 1.0157 | 3.13 | 55000 | 1.0347 | 0.7374 |
| 1.0114 | 3.41 | 60000 | 1.0268 | 0.7397 |
| 1.0019 | 3.7 | 65000 | 1.0118 | 0.7431 |
| 1.0009 | 3.98 | 70000 | 1.0030 | 0.7454 |
| 0.957 | 4.27 | 75000 | 0.9977 | 0.7472 |
| 0.9631 | 4.55 | 80000 | 0.9894 | 0.7491 |
| 0.9579 | 4.84 | 85000 | 0.9840 | 0.7506 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "other", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "apple/mobilevit-small", "model-index": [{"name": "results", "results": []}]} | JoshuaKelleyDs/doodle-MobileVIT-small-finetune | null | [
"transformers",
"onnx",
"safetensors",
"mobilevit",
"image-classification",
"generated_from_trainer",
"base_model:apple/mobilevit-small",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T16:34:31+00:00 | [] | [] | TAGS
#transformers #onnx #safetensors #mobilevit #image-classification #generated_from_trainer #base_model-apple/mobilevit-small #license-other #autotrain_compatible #endpoints_compatible #region-us
| results
=======
This model is a fine-tuned version of apple/mobilevit-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9887
* Accuracy: 0.7500
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0008
* train\_batch\_size: 256
* eval\_batch\_size: 256
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0008\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 256\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #onnx #safetensors #mobilevit #image-classification #generated_from_trainer #base_model-apple/mobilevit-small #license-other #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0008\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 256\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-qlora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
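For orientation, the listed hyperparameters map onto transformers' `TrainingArguments` roughly as sketched below; the actual run used trl's DPO training on top of QLoRA, whose extra options are not reproduced here.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported settings; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="zephyr-7b-dpo-qlora",
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 4 x 4 = effective batch size 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)
```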
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "zephyr-7b-dpo-qlora", "results": []}]} | hannahbillo/zephyr-7b-dpo-qlora | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T16:34:44+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #dpo #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
|
# zephyr-7b-dpo-qlora
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# zephyr-7b-dpo-qlora\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #dpo #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n",
"# zephyr-7b-dpo-qlora\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-21-layer](https://huggingface.co/Citaman/command-r-21-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-21-layer
layer_range: [0, 20]
- model: Citaman/command-r-21-layer
layer_range: [1, 21]
merge_method: slerp
base_model: Citaman/command-r-21-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
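A configuration like this is normally fed to mergekit to produce the merged checkpoint; once merged, the result loads like any other transformers causal LM. A minimal loading sketch (repo id taken from this card's metadata):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Citaman/command-r-20-layer"  # this merged checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge config's dtype
    device_map="auto",
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```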
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-21-layer"]} | Citaman/command-r-20-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-21-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T16:35:22+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-21-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-21-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-21-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-21-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-21-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |