pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1–900k) | metadata (stringlengths, 2–438k) | id (stringlengths, 5–122) | last_modified (null) | tags (sequencelengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25–25) | arxiv (sequencelengths, 0–201) | languages (sequencelengths, 0–1.83k) | tags_str (stringlengths, 17–9.34k) | text_str (stringlengths, 0–389k) | text_lists (sequencelengths, 0–722) | processed_texts (sequencelengths, 1–723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation | transformers |
# Uploaded model
- **Developed by:** adrien-alloreview
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
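As a quick sanity check, the model can be loaded with the standard Transformers text-generation pipeline. This is a minimal sketch assuming the repository holds a full merged checkpoint (as the `pytorch` tag suggests); the prompt is just a placeholder.

```python
from transformers import pipeline

# Minimal usage sketch; assumes a full merged checkpoint, not a LoRA-only adapter.
generator = pipeline("text-generation", model="adrien-alloreview/llama-3-STAN-alpha")

# Placeholder prompt; any instruction-style input suits the Llama-3 Instruct base.
print(generator("Summarize in one sentence: great food, but the service was slow.",
                max_new_tokens=48)[0]["generated_text"])
```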
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | adrien-alloreview/llama-3-STAN-alpha | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T10:46:58+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: adrien-alloreview
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: adrien-alloreview\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: adrien-alloreview\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-31m_mz-131f_PasswordMatch
This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset.
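Since the card ships no usage snippet, here is a minimal inference sketch; the input string is a made-up example, and the label names come from whatever `id2label` mapping the checkpoint's config defines.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Minimal inference sketch; labels are read from the checkpoint config.
model_id = "AlignmentResearch/robust_llm_pythia-31m_mz-131f_PasswordMatch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Made-up input for illustration only.
inputs = tokenizer("System password: hunter2. User guess: hunter2.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```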
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
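For concreteness, the list above maps onto roughly the following `TrainingArguments`; this is a reconstruction for illustration, not the original training script, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Rough reconstruction of the listed hyperparameters.
args = TrainingArguments(
    output_dir="robust_llm_pythia-31m_mz-131f_PasswordMatch",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```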
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-131f_PasswordMatch", "results": []}]} | AlignmentResearch/robust_llm_pythia-31m_mz-131f_PasswordMatch | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-31m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T10:47:01+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-31m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-31m_mz-131f_PasswordMatch
This model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-31m_mz-131f_PasswordMatch\n\nThis model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-31m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-31m_mz-131f_PasswordMatch\n\nThis model is a fine-tuned version of EleutherAI/pythia-31m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | diffusers | <div align="center">
<h1> <a>Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models</a></h1>
<p align="center">
<a href="https://paint3d.github.io/">Project Page</a> •
<a href="https://arxiv.org/abs/2312.13913">Arxiv</a> •
<a href="https://github.com/OpenTexture/Paint3D">GitHub</a>
</p>
</div>
<div align="center">
<video width="1280" height="720" controls>
<source src="https://github.com/OpenTexture/Paint3D/assets/18525299/9aef7eeb-a783-482c-87d5-78055da3bfc0" type="video/mp4">
</video>
</div>
## Introduction
Paint3D is a novel coarse-to-fine generative framework that is capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs.
<details open="open">
<summary><b>Technical details</b></summary>
We present Paint3D, a novel coarse-to-fine generative framework that is capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs. The key challenge addressed is generating high-quality textures without embedded illumination information, which allows the textures to be re-lighted or re-edited within modern graphics pipelines. To achieve this, our method first leverages a pre-trained depth-aware 2D diffusion model to generate view-conditional images and perform multi-view texture fusion, producing an initial coarse texture map. However, as 2D models cannot fully represent 3D shapes and disable lighting effects, the coarse texture map exhibits incomplete areas and illumination artifacts. To resolve this, we train separate UV Inpainting and UVHD diffusion models specialized for the shape-aware refinement of incomplete areas and the removal of illumination artifacts. Through this coarse-to-fine process, Paint3D can produce high-quality 2K UV textures that maintain semantic consistency while being lighting-less, significantly advancing the state-of-the-art in texturing 3D objects.
<div align="center">
<img width="1194" alt="pipeline" src="./assets/pipeline.jpg">
</div>
</details>
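For orientation, the coarse-to-fine procedure described above can be summarized schematically; every function name below is a hypothetical placeholder for a pipeline stage, not this repository's actual API.

```python
# Schematic of the coarse-to-fine texturing pipeline (all names hypothetical).
def paint3d(mesh, prompt):
    views = render_depth_maps(mesh)                        # sample camera viewpoints around the mesh
    images = depth_aware_diffusion(views, prompt)          # pre-trained depth-conditioned 2D diffusion
    coarse_uv = backproject_and_fuse(mesh, views, images)  # multi-view fusion -> coarse UV texture
    uv = uv_inpainting_diffusion(coarse_uv)                # fill regions no view could see
    uv = uvhd_diffusion(uv)                                # remove baked-in illumination artifacts
    return uv                                              # lighting-less 2K UV texture map
```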
## Citation
```bib
@misc{zeng2023paint3d,
title={Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models},
author={Xianfang Zeng and Xin Chen and Zhongqi Qi and Wen Liu and Zibo Zhao and Zhibin Wang and BIN FU and Yong Liu and Gang Yu},
year={2023},
eprint={2312.13913},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` | {"license": "apache-2.0", "tags": ["texture-generation"]} | GeorgeQi/Paint3d_UVPos_Control | null | [
"diffusers",
"texture-generation",
"arxiv:2312.13913",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T10:47:17+00:00 | [
"2312.13913"
] | [] | TAGS
#diffusers #texture-generation #arxiv-2312.13913 #license-apache-2.0 #region-us
| <div align="center">
<h1> <a>Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models</a></h1>
<p align="center">
<a href=URL Page</a> •
<a href=URL •
<a href=URL
</p>
</div>
<div align="center">
<video width="1280" height="720" controls>
<source src="URL type="video/mp4">
</video>
</div>
## Introduction
Paint3D is a novel coarse-to-fine generative framework that is capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs.
<details open="open">
<summary><b>Technical details</b></summary>
We present Paint3D, a novel coarse-to-fine generative framework that is capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs. The key challenge addressed is generating high-quality textures without embedded illumination information, which allows the textures to be re-lighted or re-edited within modern graphics pipelines. To achieve this, our method first leverages a pre-trained depth-aware 2D diffusion model to generate view-conditional images and perform multi-view texture fusion, producing an initial coarse texture map. However, as 2D models cannot fully represent 3D shapes and disable lighting effects, the coarse texture map exhibits incomplete areas and illumination artifacts. To resolve this, we train separate UV Inpainting and UVHD diffusion models specialized for the shape-aware refinement of incomplete areas and the removal of illumination artifacts. Through this coarse-to-fine process, Paint3D can produce high-quality 2K UV textures that maintain semantic consistency while being lighting-less, significantly advancing the state-of-the-art in texturing 3D objects.
<div align="center">
<img width="1194" alt="pipeline" src="./assets/URL">
</div>
</details>
## Citation
| [
"## Introduction\nPaint3D is a novel coarse-to-fine generative framework that is capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs.\n\n\n<details open=\"open\">\n <summary><b>Technical details</b></summary>\n\nWe present Paint3D, a novel coarse-to-fine generative framework that is capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs. The key challenge addressed is generating high-quality textures without embedded illumination information, which allows the textures to be re-lighted or re-edited within modern graphics pipelines. To achieve this, our method first leverages a pre-trained depth-aware 2D diffusion model to generate view-conditional images and perform multi-view texture fusion, producing an initial coarse texture map. However, as 2D models cannot fully represent 3D shapes and disable lighting effects, the coarse texture map exhibits incomplete areas and illumination artifacts. To resolve this, we train separate UV Inpainting and UVHD diffusion models specialized for the shape-aware refinement of incomplete areas and the removal of illumination artifacts. Through this coarse-to-fine process, Paint3D can produce high-quality 2K UV textures that maintain semantic consistency while being lighting-less, significantly advancing the state-of-the-art in texturing 3D objects.\n<div align=\"center\">\n<img width=\"1194\" alt=\"pipeline\" src=\"./assets/URL\">\n</div>\n</details>",
"## Citation"
] | [
"TAGS\n#diffusers #texture-generation #arxiv-2312.13913 #license-apache-2.0 #region-us \n",
"## Introduction\nPaint3D is a novel coarse-to-fine generative framework that is capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs.\n\n\n<details open=\"open\">\n <summary><b>Technical details</b></summary>\n\nWe present Paint3D, a novel coarse-to-fine generative framework that is capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs. The key challenge addressed is generating high-quality textures without embedded illumination information, which allows the textures to be re-lighted or re-edited within modern graphics pipelines. To achieve this, our method first leverages a pre-trained depth-aware 2D diffusion model to generate view-conditional images and perform multi-view texture fusion, producing an initial coarse texture map. However, as 2D models cannot fully represent 3D shapes and disable lighting effects, the coarse texture map exhibits incomplete areas and illumination artifacts. To resolve this, we train separate UV Inpainting and UVHD diffusion models specialized for the shape-aware refinement of incomplete areas and the removal of illumination artifacts. Through this coarse-to-fine process, Paint3D can produce high-quality 2K UV textures that maintain semantic consistency while being lighting-less, significantly advancing the state-of-the-art in texturing 3D objects.\n<div align=\"center\">\n<img width=\"1194\" alt=\"pipeline\" src=\"./assets/URL\">\n</div>\n</details>",
"## Citation"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# basakdemirok/bert-base-turkish-cased-off_detect_v0
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0401
- Validation Loss: 0.4939
- Train F1: 0.6946
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the optimizer config is reconstructed as code after this list):
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7936, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
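The serialized optimizer dictionary above corresponds to roughly the following Keras code; this is a reconstruction for readability, not the original training script.

```python
import tensorflow as tf

# Reconstruction of the serialized optimizer config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=7936,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
)
```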
### Training results
| Train Loss | Validation Loss | Train F1 | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 0.3059 | 0.2634 | 0.6928 | 0 |
| 0.1913 | 0.3052 | 0.7012 | 1 |
| 0.0943 | 0.4022 | 0.6942 | 2 |
| 0.0401 | 0.4939 | 0.6946 | 3 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.13.1
- Datasets 2.4.0
- Tokenizers 0.19.1
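For completeness, a minimal TensorFlow inference sketch (the example sentence is a harmless placeholder; label semantics come from the checkpoint config):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Minimal inference sketch for the TF checkpoint (placeholder input sentence).
model_id = "basakdemirok/bert-base-turkish-cased-off_detect_v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Bugün hava çok güzel.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```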
| {"license": "mit", "tags": ["generated_from_keras_callback"], "base_model": "dbmdz/bert-base-turkish-cased", "model-index": [{"name": "basakdemirok/bert-base-turkish-cased-off_detect_v0", "results": []}]} | basakdemirok/bert-base-turkish-cased-off_detect_v0 | null | [
"transformers",
"tf",
"tensorboard",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:dbmdz/bert-base-turkish-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T10:48:03+00:00 | [] | [] | TAGS
#transformers #tf #tensorboard #bert #text-classification #generated_from_keras_callback #base_model-dbmdz/bert-base-turkish-cased #license-mit #autotrain_compatible #endpoints_compatible #region-us
| basakdemirok/bert-base-turkish-cased-off\_detect\_v0
====================================================
This model is a fine-tuned version of dbmdz/bert-base-turkish-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.0401
* Validation Loss: 0.4939
* Train F1: 0.6946
* Epoch: 3
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 7936, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.40.1
* TensorFlow 2.13.1
* Datasets 2.4.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 7936, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.13.1\n* Datasets 2.4.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #tensorboard #bert #text-classification #generated_from_keras_callback #base_model-dbmdz/bert-base-turkish-cased #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 7936, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.13.1\n* Datasets 2.4.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Uploaded model
- **Developed by:** baconnier
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | baconnier/finance_orpo_llama3_Instruct_8B_r64_51K_Adapters | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T10:48:39+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: baconnier
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: baconnier\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: baconnier\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
sentence-similarity | sentence-transformers |
This model was trained on the KlueNLI and KlueSTS data with a multi-task loss (MultipleNegativeLoss -> AnglELoss). The training code is available at this [Github hyperlink](https://github.com/comchobo/SFT_sent_emb?tab=readme-ov-file).
## Usage (Huggingface inference API)
```python
import requests
API_URL = "https://api-inference.huggingface.co/models/sorryhyun/sentence-embedding-klue-large"
headers = {"Authorization": "Bearer your_HF_token"}
def query(payload):
response = requests.post(API_URL, headers=headers, json=payload)
return response.json()
output = query({
"inputs": {
"source_sentence": "์ข์์, ์ถ์ฒ, ์๋ฆผ์ค์ ๊น์ง",
"sentences": [
"์ข์์ ๋๋ฌ์ฃผ์ธ์!!",
"์ข์์, ์ถ์ฒ ๋ฑ ์ ํฌ๋ฒ๋ค์ด ์ข์ํด์",
"์๋ฆผ์ค์ ์ ๋๋ฌ์ฃผ์๋ฉด ๊ฐ์ฌ๋๋ฆฌ๊ฒ ์ต๋๋ค."
]
},
})
if __name__ == '__main__':
print(output)
```
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
from tqdm import tqdm
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
batch_size = 16

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sorryhyun/sentence-embedding-klue-large')
model = AutoModel.from_pretrained('sorryhyun/sentence-embedding-klue-large').to(device)
model.eval()

all_outputs = torch.zeros((len(sentences), model.config.hidden_size), device=device)

# Mean pooling over token embeddings is used for the sentence representation
with torch.no_grad():
    for start_idx in tqdm(range(0, len(sentences), batch_size)):
        batch = sentences[start_idx:start_idx + batch_size]
        inputs = tokenizer(batch, padding=True, truncation=True, return_tensors='pt')
        inputs = {k: v.to(device) for k, v in inputs.items()}
        representations, _ = model(**inputs, return_dict=False)
        attention_mask = inputs["attention_mask"]
        input_mask_expanded = (attention_mask.unsqueeze(-1)
                               .expand(representations.size())
                               .to(representations.dtype))
        summed = torch.sum(representations * input_mask_expanded, 1)
        sum_mask = input_mask_expanded.sum(1).clamp(min=1e-9)
        all_outputs[start_idx:start_idx + len(batch)] = summed / sum_mask

print(all_outputs.shape)
```
## Evaluation Results
| Organization | Backbone Model | KlueSTS average | KorSTS average |
| -------- | ------- | ------- | ------- |
| team-lucid | DeBERTa-base | 54.15 | 29.72 |
| monologg | Electra-base | 66.97 | 40.98 |
| LMkor | Electra-base | 70.98 | 43.09 |
| deliciouscat | DeBERTa-base | - | 67.65 |
| BM-K | Roberta-base | 82.93 | **85.77** |
| Klue | Roberta-large | **86.71** | 71.70 |
| Klue (Hyperparameter searched) | Roberta-large | 86.21 | 75.54 |
Noting that existing Korean sentence-embedding models were trained on machine-translated English datasets such as MNLI and SNLI, we instead trained on the Klue datasets.
As a result, the model based on Klue-Roberta-large showed solid performance on both the KlueSTS and KorSTS test sets, suggesting that it forms a more elaborate representation.
Note, however, that the evaluation figures can vary considerably with hyperparameter settings, random seeds, and so on.
## Training
The model was trained with a NegativeRank loss followed by a SimCSE loss.
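The two-stage loss schedule named at the top of this card (MultipleNegativesRankingLoss followed by AnglELoss) would look roughly like this in sentence-transformers; the data below are placeholders, not the actual Klue preprocessing.

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Rough sketch of the two-stage loss schedule (placeholder data, not the Klue pipeline).
model = SentenceTransformer("klue/roberta-large")

# Stage 1: NLI-style pairs with an in-batch-negatives ranking loss.
nli_examples = [InputExample(texts=["a premise sentence", "an entailed hypothesis"])]
nli_loader = DataLoader(nli_examples, shuffle=True, batch_size=64)
model.fit(train_objectives=[(nli_loader, losses.MultipleNegativesRankingLoss(model))], epochs=1)

# Stage 2: STS pairs with similarity labels and an angle-optimized loss.
sts_examples = [InputExample(texts=["sentence a", "sentence b"], label=0.8)]
sts_loader = DataLoader(sts_examples, shuffle=True, batch_size=32)
model.fit(train_objectives=[(sts_loader, losses.AnglELoss(model))], epochs=1)
```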
| {"language": ["ko"], "license": "cc-by-sa-4.0", "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["klue"], "pipeline_tag": "sentence-similarity"} | sorryhyun/sentence-embedding-klue-large | null | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"ko",
"dataset:klue",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T10:48:53+00:00 | [] | [
"ko"
] | TAGS
#sentence-transformers #safetensors #roberta #feature-extraction #sentence-similarity #transformers #ko #dataset-klue #license-cc-by-sa-4.0 #endpoints_compatible #region-us
| This model was trained on the KlueNLI and KlueSTS data with a multi-task loss (MultipleNegativeLoss -> AnglELoss). The training code is available at the Github hyperlink.
Usage (Huggingface inference API)
---------------------------------
Usage (HuggingFace Transformers)
--------------------------------
Evaluation Results
------------------
Noting that existing Korean sentence-embedding models were trained on machine-translated English datasets such as MNLI and SNLI, we instead trained on the Klue datasets.
As a result, the model based on Klue-Roberta-large showed solid performance on both the KlueSTS and KorSTS test sets, suggesting that it forms a more elaborate representation.
Note, however, that the evaluation figures can vary considerably with hyperparameter settings, random seeds, and so on.
Training
--------
The model was trained with a NegativeRank loss followed by a SimCSE loss.
| [] | [
"TAGS\n#sentence-transformers #safetensors #roberta #feature-extraction #sentence-similarity #transformers #ko #dataset-klue #license-cc-by-sa-4.0 #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Pongsasit Thongpramoon
- **Model type:** Cross Encoder
- **Language(s) (NLP):** Thai
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from sentence_transformers.cross_encoder import CrossEncoder
model = CrossEncoder("Pongsasit/mod-th-cross-encoder")
scores = model.predict([["อาหารตามสั่ง", "หมู เห็ด เป็ด ไก่"], ["อาหารตามสั่ง", "รถ เรือ เครื่องบิน จักรยาน"]])
``` | {"library_name": "transformers", "tags": []} | Pongsasit/mod-th-cross-encoder | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T10:51:32+00:00 | [] | [] | TAGS
#transformers #safetensors #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by: Pongsasit Thongpramoon
- Model type: Cross Encoder
- Language(s) (NLP): Thai
-
## How to Get Started with the Model
Use the code below to get started with the model.
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Pongsasit Thongpramoon\n- Model type: Cross Encoder\n- Language(s) (NLP): Thai\n-",
"## How to Get Started with the Model\n\nUse the code below to get started with the model."
] | [
"TAGS\n#transformers #safetensors #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Pongsasit Thongpramoon\n- Model type: Cross Encoder\n- Language(s) (NLP): Thai\n-",
"## How to Get Started with the Model\n\nUse the code below to get started with the model."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | happylayers/sc34 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T10:52:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | diffusers |
# Crybaby
Samples and prompts:

Top left: pretty cute little girl as Marie Antoinette playing on toy piano in bedroom
Top right: Masterpiece, Best Quality, highres, fantasy, official art, kitten, grass, sky, scenery, Fuji 85mm, fairytale illustration, colored sclera, black eyes, perfect eyes, happy, cute, cat, whiskers, pawpads, claws, furry, plush, soft, perfect, tail, christmas lights, christmas tree, christmas ornaments, warmth
Bottom left: analog style 70s color photograph of young Jet Lee as Invincible Man, star wars behind the scenes
Bottom right: absurdres, adorable cute harley quinn, at night, dark alley, moon, :) red ponytail, blonde ponytail, in matte black hardsuit, military, roughed up, bat, city fog,
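Since the repository carries the diffusers `StableDiffusionPipeline` tag, generation follows the standard diffusers API; a minimal sketch using one of the sample prompts above (fp16 and CUDA assumed, adjust for your hardware):

```python
import torch
from diffusers import StableDiffusionPipeline

# Standard diffusers usage for this checkpoint.
pipe = StableDiffusionPipeline.from_pretrained("Yntec/Crybaby", torch_dtype=torch.float16).to("cuda")
image = pipe("pretty cute little girl as Marie Antoinette playing on toy piano in bedroom").images[0]
image.save("crybaby_sample.png")
```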
A mix of MGM and CocaCola (which itself includes many models), created to make a realistic version of Cryptids.
Original pages:
https://civitai.com/models/109568/mgmv1
https://huggingface.co/Yntec/Cryptids
https://huggingface.co/Yntec/CocaCola
https://civitai.com/models/142552?modelVersionId=163068 (Kitsch-In-Sync v2)
https://civitai.com/models/21493/hellmix?modelVersionId=25632 | {"language": ["en"], "license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["Paintings", "Style Art", "Landscapes", "Wick_J4", "iamxenos", "RIXYN", "Barons", "stable-diffusion", "stable-diffusion-diffusers", "diffusers", "text-to-image"], "pipeline_tag": "text-to-image"} | Yntec/Crybaby | null | [
"diffusers",
"safetensors",
"Paintings",
"Style Art",
"Landscapes",
"Wick_J4",
"iamxenos",
"RIXYN",
"Barons",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"has_space",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-26T10:52:19+00:00 | [] | [
"en"
] | TAGS
#diffusers #safetensors #Paintings #Style Art #Landscapes #Wick_J4 #iamxenos #RIXYN #Barons #stable-diffusion #stable-diffusion-diffusers #text-to-image #en #license-creativeml-openrail-m #endpoints_compatible #has_space #diffusers-StableDiffusionPipeline #region-us
|
# Crybaby
Samples and prompts:
!AI image generator Crybaby samples
Top left: pretty cute little girl as Marie Antoinette playing on toy piano in bedroom
Top right: Masterpiece, Best Quality, highres, fantasy, official art, kitten, grass, sky, scenery, Fuji 85mm, fairytale illustration, colored sclera, black eyes, perfect eyes, happy, cute, cat, whiskers, pawpads, claws, furry, plush, soft, perfect, tail, christmas lights, christmas tree, christmas ornaments, warmth
Bottom left: analog style 70s color photograph of young Jet Lee as Invincible Man, star wars behind the scenes
Bottom right: absurdres, adorable cute harley quinn, at night, dark alley, moon, :) red ponytail, blonde ponytail, in matte black hardsuit, military, roughed up, bat, city fog,
A mix of MGM and CocaCola (which includes many models) to create a realistic version of Cryptids.
Original pages:
URL
URL
URL
URL (Kitsch-In-Sync v2)
URL | [
"# Crybaby\n\nSamples and prompts:\n\n!AI image generator Crybaby samples\n\nTop left: pretty cute little girl as Marie Antoinette playing on toy piano in bedroom\n\nTop right: Masterpiece, Best Quality, highres, fantasy, official art, kitten, grass, sky, scenery, Fuji 85mm, fairytale illustration, colored sclera, black eyes, perfect eyes, happy, cute, cat, whiskers, pawpads, claws, furry, plush, soft, perfect, tail, christmas lights, christmas tree, christmas ornaments, warmth\n\nBottom left: analog style 70s color photograph of young Jet Lee as Invincible Man, star wars behind the scenes\n\nBottom right: absurdres, adorable cute harley quinn, at night, dark alley, moon, :) red ponytail, blonde ponytail, in matte black hardsuit, military, roughed up, bat, city fog,\n\nA mix of MGM and CocaCola (which includes many models) to create a realistic version of Cryptids.\n\nOriginal pages:\n\nURL\n\nURL\n\nURL\n\nURL (Kitsch-In-Sync v2)\n\nURL"
] | [
"TAGS\n#diffusers #safetensors #Paintings #Style Art #Landscapes #Wick_J4 #iamxenos #RIXYN #Barons #stable-diffusion #stable-diffusion-diffusers #text-to-image #en #license-creativeml-openrail-m #endpoints_compatible #has_space #diffusers-StableDiffusionPipeline #region-us \n",
"# Crybaby\n\nSamples and prompts:\n\n!AI image generator Crybaby samples\n\nTop left: pretty cute little girl as Marie Antoinette playing on toy piano in bedroom\n\nTop right: Masterpiece, Best Quality, highres, fantasy, official art, kitten, grass, sky, scenery, Fuji 85mm, fairytale illustration, colored sclera, black eyes, perfect eyes, happy, cute, cat, whiskers, pawpads, claws, furry, plush, soft, perfect, tail, christmas lights, christmas tree, christmas ornaments, warmth\n\nBottom left: analog style 70s color photograph of young Jet Lee as Invincible Man, star wars behind the scenes\n\nBottom right: absurdres, adorable cute harley quinn, at night, dark alley, moon, :) red ponytail, blonde ponytail, in matte black hardsuit, military, roughed up, bat, city fog,\n\nA mix of MGM and CocaCola (which includes many models) to create a realistic version of Cryptids.\n\nOriginal pages:\n\nURL\n\nURL\n\nURL\n\nURL (Kitsch-In-Sync v2)\n\nURL"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-waste
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the BioNonbioWaste dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0048
- Accuracy: 1.0
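No usage snippet is included, so here is a minimal inference sketch; the image path is a placeholder, and the class labels come from the checkpoint config.

```python
from transformers import pipeline

# Minimal inference sketch; labels are read from the checkpoint config.
classifier = pipeline("image-classification", model="Shamsaa/finetuned-waste")
print(classifier("some_waste_photo.jpg"))  # placeholder path or URL to an image
```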
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1044 | 0.5435 | 100 | 0.0418 | 0.9826 |
| 0.0517 | 1.0870 | 200 | 0.0545 | 0.9749 |
| 0.0168 | 1.6304 | 300 | 0.0099 | 0.9961 |
| 0.0526 | 2.1739 | 400 | 0.0048 | 1.0 |
| 0.062 | 2.7174 | 500 | 0.0196 | 0.9942 |
| 0.0088 | 3.2609 | 600 | 0.0155 | 0.9981 |
| 0.0239 | 3.8043 | 700 | 0.0106 | 0.9981 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "finetuned-waste", "results": []}]} | Shamsaa/finetuned-waste | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T10:52:28+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| finetuned-waste
===============
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the BioNonbioWaste dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0048
* Accuracy: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
feature-extraction | transformers |
This is the converted model from Unbabel/wmt23-cometkiwi-da:

1) Kept only the weight/bias keys
2) Renamed the keys to match the original Facebook/XLM-roberta-XL
3) Kept the layer_wise_attention / estimator layers

Because of a hack in HF's code (Transformers silently rewrites state-dict keys containing "gamma"), the "layerwise_attention.gamma" key had to be renamed to "layerwise_attention.gam".

I also changed the config.json key "layer_transformation" from sparsemax to softmax: because of a bug in COMET the flag is never passed, so the function actually used is the default, which is softmax.
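A minimal sketch of the kind of key remapping described above (the `encoder.model.` prefix and the filter are illustrative assumptions; the real mapping depends on the actual COMET checkpoint layout):

```python
import torch

# Illustrative conversion sketch, not the exact script used for this repo.
ckpt = torch.load("model.ckpt", map_location="cpu")["state_dict"]

converted = {}
for key, value in ckpt.items():
    # Keep only weight/bias tensors plus the layer-wise attention parameters.
    if not (key.endswith("weight") or key.endswith("bias") or "layerwise_attention" in key):
        continue
    new_key = key.replace("encoder.model.", "")  # assumed prefix; match XLM-R-XL key names
    # Transformers rewrites keys containing "gamma", hence the "gam" workaround.
    new_key = new_key.replace("layerwise_attention.gamma", "layerwise_attention.gam")
    converted[new_key] = value

torch.save(converted, "pytorch_model.bin")
```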
Usage:
```
from transformers import XLMRobertaTokenizerFast, AutoModel
tokenizer = XLMRobertaTokenizerFast.from_pretrained("vince62s/wmt23-cometkiwi-da-roberta-xl", trust_remote_code=True)
model = AutoModel.from_pretrained("vince62s/wmt23-cometkiwi-da-roberta-xl", trust_remote_code=True)
text = "Hello world!</s></s>Bonjour le monde"
encoded_text = tokenizer(text, return_tensors='pt')
print(encoded_text)
output = model(**encoded_text)
print(output[0])
{'input_ids': tensor([[ 0, 35378, 8999, 38, 2, 2, 84602, 95, 11146, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
tensor([[0.8217]], grad_fn=<AddmmBackward0>)
```
Let's double check with the original code from Unbabel Comet:
```
from comet import download_model, load_from_checkpoint
model = load_from_checkpoint("/home/vincent/Downloads/cometkiwi23/checkpoints/model.ckpt") # this is the Unbabel checkpoint
data = [{"mt": "Hello world!", "src": "Bonjour le monde"}]
output = model.predict(data, gpus=0)
print(output)
Prediction([('scores', [0.8216837048530579]), ('system_score', 0.8216837048530579)])
```
---
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_button_content: Acknowledge license
pipeline_tag: translation
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: cc-by-nc-sa-4.0
library_name: transformers
---
This is a [COMET](https://github.com/Unbabel/COMET) quality estimation model: It receives a source sentence and the respective translation and returns a score that reflects the quality of the translation.
# Paper
[CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task](https://aclanthology.org/2022.wmt-1.60) (Rei et al., WMT 2022)
# License:
cc-by-nc-sa-4.0
# Usage (unbabel-comet)
Using this model requires unbabel-comet to be installed:
```bash
pip install --upgrade pip # ensures that pip is current
pip install "unbabel-comet>=2.0.0"
```
Make sure you acknowledge its license and log in to the Hugging Face hub before using:
```bash
huggingface-cli login
# or using an environment variable
huggingface-cli login --token $HUGGINGFACE_TOKEN
```
Then you can use it through comet CLI:
```bash
comet-score -s {source-input}.txt -t {translation-output}.txt --model Unbabel/wmt22-cometkiwi-da
```
Or using Python:
```python
from comet import download_model, load_from_checkpoint
model_path = download_model("Unbabel/wmt22-cometkiwi-da")
model = load_from_checkpoint(model_path)
data = [
{
"src": "The output signal provides constant sync so the display never glitches.",
"mt": "Das Ausgangssignal bietet eine konstante Synchronisation, so dass die Anzeige nie stรถrt."
},
{
"src": "Krouลพek ilustrace je urฤen vลกem milovnรญkลฏm umฤnรญ ve vฤku od 10 do 15 let.",
"mt": "ะัะปััะต ัะปััััะฐััั ะฟัะธะทะฝะฐัะตะฝะต ะดะปั ะฒััั
ะปัะฑะธัะตะปัะฒ ะผะธััะตััะฒะฐ ั ะฒััั ะฒัะด 10 ะดะพ 15 ัะพะบัะฒ."
},
{
"src": "Mandela then became South Africa's first black president after his African National Congress party won the 1994 election.",
"mt": "ใใฎๅพใ1994ๅนดใฎ้ธๆใงใขใใชใซๅฝๆฐไผ่ญฐๆดพใๅๅฉใใๅใขใใชใซๅใฎ้ปไบบๅคง็ตฑ้ ใจใชใฃใใ"
}
]
model_output = model.predict(data, batch_size=8, gpus=1)
print (model_output)
```
# Intended uses
Our model is intended to be used for **reference-free MT evaluation**.
Given a source text and its translation, it outputs a single score between 0 and 1, where 1 represents a perfect translation.
# Languages Covered:
This model builds on top of InfoXLM, which covers the following languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish.
Thus, results for language pairs containing uncovered languages are unreliable!
| {} | vince62s/wmt23-cometkiwi-da-roberta-xl | null | [
"transformers",
"pytorch",
"xlm-roberta-xl",
"feature-extraction",
"custom_code",
"region:us"
] | null | 2024-04-26T10:52:57+00:00 | [] | [] | TAGS
#transformers #pytorch #xlm-roberta-xl #feature-extraction #custom_code #region-us
|
This is the converted model from Unbabel/wmt23-cometkiwi-da:
1) Kept only the weights/bias keys()
2) Renamed the keys to match the original Facebook/XLM-roberta-XL
3) Kept the layer_wise_attention / estimator layers
Because of a hack in HF's code, I had to rename the "layerwise_attention.gamma" key to "layerwise_attention.gam".
I changed the URL key "layer_transformation" from sparsemax to softmax because of a bug in COMET: the flag is not passed, so the function actually used is the default, which is softmax.
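A rough sketch of the renaming steps above (illustrative only; the exact key names in the original checkpoint may differ):
```python
import torch

# Load the original COMET checkpoint and keep only the tensors.
ckpt = torch.load("checkpoints/model.ckpt", map_location="cpu")
state = ckpt["state_dict"]

new_state = {}
for key, tensor in state.items():
    new_key = key.replace("encoder.model.", "")  # match facebook/xlm-roberta-xl naming
    # HF's loader rewrites keys containing "gamma", hence the "gam" rename above.
    new_key = new_key.replace("layerwise_attention.gamma", "layerwise_attention.gam")
    new_state[new_key] = tensor

torch.save(new_state, "pytorch_model.bin")
```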
Usage:
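A minimal sketch (the custom code's exact input convention is an assumption here):
```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "vince62s/wmt23-cometkiwi-da-roberta-xl"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo, trust_remote_code=True)

# CometKiwi scores a translation given its source; pair encoding is assumed here.
inputs = tokenizer("Das ist ein Test.", "This is a test.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs)
print(score)  # expected: a single quality score between 0 and 1
```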
Let's double check with the original code from Unbabel Comet:
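For example (hedged; requires access to the gated Unbabel checkpoint):
```python
from comet import download_model, load_from_checkpoint

ckpt_path = download_model("Unbabel/wmt23-cometkiwi-da")
ref_model = load_from_checkpoint(ckpt_path)

data = [{"src": "This is a test.", "mt": "Das ist ein Test."}]
print(ref_model.predict(data, batch_size=1, gpus=0))  # should match the converted model
```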
---
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_button_content: Acknowledge license
pipeline_tag: translation
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: cc-by-nc-sa-4.0
library_name: transformers
---
This is a COMET quality estimation model: It receives a source sentence and the respective translation and returns a score that reflects the quality of the translation.
# Paper
CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task (Rei et al., WMT 2022)
# License:
cc-by-nc-sa-4.0
# Usage (unbabel-comet)
Using this model requires unbabel-comet to be installed:
Make sure you acknowledge its license and log in to the Hugging Face Hub before using it:
Then you can use it through comet CLI:
Or using Python:
# Intended uses
Our model is intended to be used for reference-free MT evaluation.
Given a source text and its translation, it outputs a single score between 0 and 1, where 1 represents a perfect translation.
# Languages Covered:
This model builds on top of InfoXLM, which covers the following languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish.
Thus, results for language pairs containing uncovered languages are unreliable!
| [
"# Paper\n\nCometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task (Rei et al., WMT 2022)",
"# License:\n\ncc-by-nc-sa-4.0",
"# Usage (unbabel-comet)\n\nUsing this model requires unbabel-comet to be installed:\n\n\n\nMake sure you acknowledge its License and Log in into Hugging face hub before using:\n\n\n\nThen you can use it through comet CLI:\n\n\n\nOr using Python:",
"# Intended uses\n\nOur model is intented to be used for reference-free MT evaluation. \n\nGiven a source text and its translation, outputs a single score between 0 and 1 where 1 represents a perfect translation.",
"# Languages Covered:\n\nThis model builds on top of InfoXLM which cover the following languages:\n\nAfrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskri, Scottish, Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western, Frisian, Xhosa, Yiddish.\n\nThus, results for language pairs containing uncovered languages are unreliable!"
] | [
"TAGS\n#transformers #pytorch #xlm-roberta-xl #feature-extraction #custom_code #region-us \n",
"# Paper\n\nCometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task (Rei et al., WMT 2022)",
"# License:\n\ncc-by-nc-sa-4.0",
"# Usage (unbabel-comet)\n\nUsing this model requires unbabel-comet to be installed:\n\n\n\nMake sure you acknowledge its License and Log in into Hugging face hub before using:\n\n\n\nThen you can use it through comet CLI:\n\n\n\nOr using Python:",
"# Intended uses\n\nOur model is intented to be used for reference-free MT evaluation. \n\nGiven a source text and its translation, outputs a single score between 0 and 1 where 1 represents a perfect translation.",
"# Languages Covered:\n\nThis model builds on top of InfoXLM which cover the following languages:\n\nAfrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskri, Scottish, Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western, Frisian, Xhosa, Yiddish.\n\nThus, results for language pairs containing uncovered languages are unreliable!"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
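A minimal, hedged sketch (chat-style usage is an assumption based on this card's llama tag and repo id):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "omertafveez/Llama-3-TherapyChatBot"  # taken from this repo's id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "I've been feeling anxious lately."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```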
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "pipeline_tag": "text2text-generation"} | omertafveez/Llama-3-TherapyChatBot | null | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"text2text-generation",
"arxiv:1910.09700",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-26T10:53:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #feature-extraction #text2text-generation #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #feature-extraction #text2text-generation #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
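A minimal, hedged sketch for inference (assuming this repo hosts the LoRA adapter from that run, and that `optimum`/`auto-gptq` are installed for the GPTQ base):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "tariq9mehmood9/results")  # adapter repo, assumed
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = "<s>[INST] What does PEFT fine-tuning change in a model? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```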
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "model-index": [{"name": "results", "results": []}]} | tariq9mehmood9/results | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T10:54:56+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.1-GPTQ #license-apache-2.0 #region-us
|
# results
This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# results\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.1-GPTQ #license-apache-2.0 #region-us \n",
"# results\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | null |
# nchen909/Apollo-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`FreedomIntelligence/Apollo-7B`](https://huggingface.co/FreedomIntelligence/Apollo-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FreedomIntelligence/Apollo-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo nchen909/Apollo-7B-Q4_K_M-GGUF --model apollo-7b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo nchen909/Apollo-7B-Q4_K_M-GGUF --model apollo-7b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m apollo-7b.Q4_K_M.gguf -n 128
```
| {"license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"]} | nchen909/Apollo-7B-Q4_K_M-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T10:56:13+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
|
# nchen909/Apollo-7B-Q4_K_M-GGUF
This model was converted to GGUF format from 'FreedomIntelligence/Apollo-7B' using URL via URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
| [
"# nchen909/Apollo-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'FreedomIntelligence/Apollo-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n",
"# nchen909/Apollo-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'FreedomIntelligence/Apollo-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Nilesh360/llama-vid-7b-full-224-video-fps-1 | null | [
"transformers",
"safetensors",
"llamavid",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T10:56:48+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llamavid #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llamavid #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shipping_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5682
## Model description
More information needed
## Intended uses & limitations
More information needed
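A minimal extractive-QA sketch with this checkpoint (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="SurajSphinx/shipping_qa_model")
result = qa(
    question="How long does standard shipping take?",
    context="Standard shipping takes 5 to 7 business days; express arrives in 2 days.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```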
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 142 | 0.5914 |
| No log | 2.0 | 284 | 0.5791 |
| No log | 3.0 | 426 | 0.5682 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.2+cu118
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "shipping_qa_model", "results": []}]} | SurajSphinx/shipping_qa_model | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:01:15+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
| shipping\_qa\_model
===================
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5682
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* Pytorch 2.2.2+cu118
* Datasets 2.18.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.2+cu118\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.2+cu118\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-xray-pneumonia-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1489
- Accuracy: 0.9502
## Model description
More information needed
## Intended uses & limitations
More information needed
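A minimal inference sketch (the image path is a placeholder; the label names depend on the training data):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="cchoo1/vit-xray-pneumonia-classification")
print(classifier("chest_xray.png"))  # hypothetical local file; returns label/score pairs
```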
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.2873 | 0.9961 | 127 | 0.1489 | 0.9502 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "vit-xray-pneumonia-classification", "results": []}]} | cchoo1/vit-xray-pneumonia-classification | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:04:08+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| vit-xray-pneumonia-classification
=================================
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1489
* Accuracy: 0.9502
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/chujiezheng/Starling-LM-7B-alpha-ExPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
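For example, once a file from the table below is chosen (a hedged sketch mirroring the usual llama.cpp invocation):

```bash
llama-cli --hf-repo mradermacher/Starling-LM-7B-alpha-ExPO-GGUF \
  --model Starling-LM-7B-alpha-ExPO.Q4_K_M.gguf -p "Hello, my name is"
```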
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Starling-LM-7B-alpha-ExPO-GGUF/resolve/main/Starling-LM-7B-alpha-ExPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "chujiezheng/Starling-LM-7B-alpha-ExPO", "quantized_by": "mradermacher"} | mradermacher/Starling-LM-7B-alpha-ExPO-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:chujiezheng/Starling-LM-7B-alpha-ExPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:04:31+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-chujiezheng/Starling-LM-7B-alpha-ExPO #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-chujiezheng/Starling-LM-7B-alpha-ExPO #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilbert-model-steam-game-reviews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4438
- Accuracy: 0.9181
- F1: 0.9451
## Model description
More information needed
## Intended uses & limitations
More information needed
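A minimal inference sketch (the review text is illustrative; the label mapping is an assumption):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="zitroeth/finetuning-distilbert-model-steam-game-reviews",
)
print(classifier("Great gameplay loop, but the early-game grind is brutal."))
```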
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "finetuning-distilbert-model-steam-game-reviews", "results": []}]} | zitroeth/finetuning-distilbert-model-steam-game-reviews | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:05:34+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# finetuning-distilbert-model-steam-game-reviews
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4438
- Accuracy: 0.9181
- F1: 0.9451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# finetuning-distilbert-model-steam-game-reviews\n\nThis model is a fine-tuned version of distilbert-base-uncased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4438\n- Accuracy: 0.9181\n- F1: 0.9451",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# finetuning-distilbert-model-steam-game-reviews\n\nThis model is a fine-tuned version of distilbert-base-uncased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4438\n- Accuracy: 0.9181\n- F1: 0.9451",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/mlx-community/Llama-3-8B-Instruct-262k-unquantized
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF/resolve/main/Llama-3-8B-Instruct-262k-unquantized.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "tags": ["meta", "llama-3", "mlx"], "base_model": "mlx-community/Llama-3-8B-Instruct-262k-unquantized", "quantized_by": "mradermacher"} | mradermacher/Llama-3-8B-Instruct-262k-unquantized-i1-GGUF | null | [
"transformers",
"gguf",
"meta",
"llama-3",
"mlx",
"en",
"base_model:mlx-community/Llama-3-8B-Instruct-262k-unquantized",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:05:51+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #meta #llama-3 #mlx #en #base_model-mlx-community/Llama-3-8B-Instruct-262k-unquantized #endpoints_compatible #region-us
| About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #meta #llama-3 #mlx #en #base_model-mlx-community/Llama-3-8B-Instruct-262k-unquantized #endpoints_compatible #region-us \n"
] |
object-detection | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
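
Since the section above is an unfilled template, here is a minimal, hedged sketch of loading this checkpoint with the standard Transformers object-detection API; the input image and the 0.5 score threshold are placeholders, not values from this card:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

model_id = "Spatiallysaying/detr-finetuned-rwymarkings-horizontal-v1"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForObjectDetection.from_pretrained(model_id)

image = Image.open("runway.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes to (score, label, box) in original image coordinates
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```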
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Spatiallysaying/detr-finetuned-rwymarkings-horizontal-v1 | null | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:05:52+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #detr #object-detection #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #detr #object-detection #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TrOCR-SIN-DeiT-Handwritten-Beam10-maxseq128
This model is a fine-tuned version of [kavg/TrOCR-SIN-DeiT](https://huggingface.co/kavg/TrOCR-SIN-DeiT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7352
- Cer: 0.5340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2600
- mixed_precision_training: Native AMP
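
Taken together, the settings above map roughly to the following Transformers configuration. This is an assumed reconstruction for illustration only — the actual training script is not published, and `output_dir` is a placeholder:

```python
from transformers import Seq2SeqTrainingArguments

# Hedged mapping of the listed hyperparameters onto TrainingArguments.
args = Seq2SeqTrainingArguments(
    output_dir="trocr-sin-deit-handwritten",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=2600,
    fp16=True,  # "Native AMP" mixed precision
)
```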
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss |
|:-------------:|:-----:|:----:|:------:|:---------------:|
| 0.9936 | 1.75 | 100 | 0.6193 | 1.6907 |
| 0.0819 | 3.51 | 200 | 0.6011 | 1.8343 |
| 0.1437 | 5.26 | 300 | 0.6579 | 2.1956 |
| 0.0857 | 7.02 | 400 | 0.6435 | 2.6580 |
| 0.0531 | 8.77 | 500 | 0.5595 | 1.9046 |
| 0.1282 | 10.53 | 600 | 0.6121 | 2.1264 |
| 0.0247 | 12.28 | 700 | 0.6218 | 2.5938 |
| 0.0071 | 14.04 | 800 | 0.6402 | 2.2984 |
| 0.0235 | 15.79 | 900 | 0.5961 | 2.3736 |
| 0.152 | 17.54 | 1000 | 0.5674 | 2.0205 |
| 0.0521 | 19.3 | 1100 | 0.5802 | 2.5917 |
| 0.0047 | 21.05 | 1200 | 0.6116 | 2.6910 |
| 0.065 | 22.81 | 1300 | 0.5757 | 2.2894 |
| 0.0313 | 24.56 | 1400 | 0.5647 | 2.6897 |
| 0.0586 | 26.32 | 1500 | 0.5398 | 2.0499 |
| 0.0015 | 28.07 | 1600 | 0.5505 | 2.3662 |
| 0.0125 | 29.82 | 1700 | 0.6250 | 2.1673 |
| 0.0207 | 31.58 | 1800 | 0.5674 | 2.0626 |
| 0.0015 | 33.33 | 1900 | 0.6260 | 2.9868 |
| 0.0004 | 35.09 | 2000 | 0.5792 | 2.5184 |
| 0.001 | 36.84 | 2100 | 0.5557 | 2.8804 |
| 0.0134 | 38.6 | 2200 | 0.6166 | 2.7627 |
| 0.0017 | 40.35 | 2300 | 0.5477 | 2.2333 |
| 0.0046 | 42.11 | 2400 | 0.5871 | 3.2010 |
| 0.0003 | 43.86 | 2500 | 0.5485 | 2.7037 |
| 0.0007 | 45.61 | 2600 | 0.5340 | 2.7352 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
| {"tags": ["generated_from_trainer"], "base_model": "kavg/TrOCR-SIN-DeiT", "model-index": [{"name": "TrOCR-SIN-DeiT-Handwritten-Beam10-maxseq128", "results": []}]} | kavg/TrOCR-SIN-DeiT-Handwritten-Beam10-maxseq128 | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"base_model:kavg/TrOCR-SIN-DeiT",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:07:34+00:00 | [] | [] | TAGS
#transformers #safetensors #vision-encoder-decoder #generated_from_trainer #base_model-kavg/TrOCR-SIN-DeiT #endpoints_compatible #region-us
| TrOCR-SIN-DeiT-Handwritten-Beam10-maxseq128
===========================================
This model is a fine-tuned version of kavg/TrOCR-SIN-DeiT on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.7352
* Cer: 0.5340
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 2600
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.35.2
* Pytorch 2.1.0+cu121
* Datasets 2.18.0
* Tokenizers 0.15.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2600\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #generated_from_trainer #base_model-kavg/TrOCR-SIN-DeiT #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2600\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/tlphams/Wizard-Mixtral-8x22B-Instruct-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
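
Because every quant in the table below is split into `partXofY` pieces, the parts must be joined into a single `.gguf` before loading. Here is a minimal sketch of that step; the filenames are one example from the table, and on Unix `cat part1 part2 > out.gguf` does the same thing:

```python
# Join split GGUF parts back into one file, in order.
parts = [
    "Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_S.gguf.part1of2",
    "Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_S.gguf.part2of2",
]
with open("Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            while chunk := f.read(1 << 20):  # copy in 1 MiB chunks
                out.write(chunk)
```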
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q2_K.gguf.part2of2) | Q2_K | 52.2 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ3_XS.gguf.part2of2) | IQ3_XS | 58.3 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ3_S.gguf.part2of2) | IQ3_S | 61.6 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q3_K_S.gguf.part2of2) | Q3_K_S | 61.6 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ3_M.gguf.part2of2) | IQ3_M | 64.6 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q3_K_M.gguf.part2of2) | Q3_K_M | 67.9 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q3_K_L.gguf.part2of2) | Q3_K_L | 72.7 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.IQ4_XS.gguf.part2of2) | IQ4_XS | 76.5 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_S.gguf.part2of2) | Q4_K_S | 80.6 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q4_K_M.gguf.part2of2) | Q4_K_M | 85.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q5_K_S.gguf.part2of2) | Q5_K_S | 97.1 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q5_K_M.gguf.part3of3) | Q5_K_M | 100.1 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q6_K.gguf.part3of3) | Q6_K | 115.6 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q8_0.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q8_0.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q8_0.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.Q8_0.gguf.part4of4) | Q8_0 | 149.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "tlphams/Wizard-Mixtral-8x22B-Instruct-v0.1", "quantized_by": "mradermacher"} | mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF | null | [
"transformers",
"mergekit",
"merge",
"en",
"base_model:tlphams/Wizard-Mixtral-8x22B-Instruct-v0.1",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:08:11+00:00 | [] | [
"en"
] | TAGS
#transformers #mergekit #merge #en #base_model-tlphams/Wizard-Mixtral-8x22B-Instruct-v0.1 #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #mergekit #merge #en #base_model-tlphams/Wizard-Mixtral-8x22B-Instruct-v0.1 #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
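
As the section above is an unfilled template, a minimal sketch with the generic `pipeline` API is shown here; the prompt is a placeholder and no chat template is applied:

```python
from transformers import pipeline

# Minimal sketch: load this checkpoint for text generation.
pipe = pipeline("text-generation", model="OwOOwO/final1")
print(pipe("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```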
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/final1 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:09:33+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# miqu-evil-dpo
# **Model Details**
## Description
miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.
It is trained with the evil-tune method applied.

<!-- prompt-template start -->
## Prompt template: Mistral Inst
```
<s> [INST] {inst} [/INST]
```
<!-- prompt-template end -->
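
A tiny sketch of filling this template in Python; the instruction text is a placeholder:

```python
# Format a single-turn prompt in the Mistral Inst style shown above.
inst = "Write a haiku about the sea."  # placeholder instruction
prompt = f"<s> [INST] {inst} [/INST]"
print(prompt)
```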
## Disclaimer
The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
| {"language": ["en"], "license": "other", "tags": ["not-for-all-audiences"], "license_name": "miqu-license", "license_link": "LICENSE", "pipeline_tag": "text-generation"} | blockblockblock/miqu-evil-dpo-bpw2.5-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T11:09:33+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# miqu-evil-dpo
# Model Details
## Description
miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.
It is trained with the evil-tune method applied.
!image/png
## Prompt template: Mistral Inst
## Disclaimer
The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
| [
"# miqu-evil-dpo",
"# Model Details",
"## Description\nmiqu-evil-dpo is fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with evil-tune method applied.\n\n!image/png",
"## Prompt template: Mistral Inst",
"## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# miqu-evil-dpo",
"# Model Details",
"## Description\nmiqu-evil-dpo is fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with evil-tune method applied.\n\n!image/png",
"## Prompt template: Mistral Inst",
"## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
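
As above, the template is unfilled; a minimal sketch with the lower-level `generate` API follows. The dtype, device placement, and prompt are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jotaefecueme/survey-input"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # assumes `accelerate` is installed
)

inputs = tokenizer("Placeholder prompt", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```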
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | jotaefecueme/survey-input | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T11:09:59+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #unsloth #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #unsloth #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** reallad
- **License:** apache-2.0
- **Finetuned from model :** reallad/yi-6b-chat-translate2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "reallad/yi-6b-chat-translate2"} | reallad/yi-6b-chat-translate3 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:reallad/yi-6b-chat-translate2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:10:37+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-reallad/yi-6b-chat-translate2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: reallad
- License: apache-2.0
- Finetuned from model : reallad/yi-6b-chat-translate2
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: reallad\n- License: apache-2.0\n- Finetuned from model : reallad/yi-6b-chat-translate2\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-reallad/yi-6b-chat-translate2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: reallad\n- License: apache-2.0\n- Finetuned from model : reallad/yi-6b-chat-translate2\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers.js |

https://github.com/open-mmlab/mmpose/tree/main/projects/rtmo with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```

**Example:** Perform pose-estimation w/ `Xenova/RTMO-t`.

```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';

// Load model and processor
const model_id = 'Xenova/RTMO-t';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image);

// Predict bounding boxes and keypoints
const { dets, keypoints } = await model({ input: pixel_values });

// Select the first image
const predicted_boxes = dets.tolist()[0];
const predicted_points = keypoints.tolist()[0];
const [height, width] = original_sizes[0];
const [resized_height, resized_width] = reshaped_input_sizes[0];

// Compute scale values
const xScale = width / resized_width;
const yScale = height / resized_height;

// Define thresholds
const point_threshold = 0.3;
const box_threshold = 0.3;

// Display results
for (let i = 0; i < predicted_boxes.length; ++i) {
    const [xmin, ymin, xmax, ymax, box_score] = predicted_boxes[i];
    if (box_score < box_threshold) continue;

    const x1 = (xmin * xScale).toFixed(2);
    const y1 = (ymin * yScale).toFixed(2);
    const x2 = (xmax * xScale).toFixed(2);
    const y2 = (ymax * yScale).toFixed(2);

    console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${box_score.toFixed(3)}`)
    const points = predicted_points[i]; // of shape [17, 3]
    for (let id = 0; id < points.length; ++id) {
        const label = model.config.id2label[id];
        const [x, y, point_score] = points[id];
        if (point_score < point_threshold) continue;
        console.log(` - ${label}: (${(x * xScale).toFixed(2)}, ${(y * yScale).toFixed(2)}) with score ${point_score.toFixed(3)}`);
    }
}
```

<details>

<summary>See example output</summary>

```
Found person at [411.10, 63.87, 647.68, 505.40] with score 0.986
 - nose: (526.09, 119.83) with score 0.874
 - left_eye: (539.01, 110.39) with score 0.696
 - right_eye: (512.50, 111.08) with score 0.662
 - left_shoulder: (563.59, 171.10) with score 0.999
 - right_shoulder: (467.38, 160.82) with score 0.999
 - left_elbow: (572.72, 240.61) with score 0.999
 - right_elbow: (437.86, 218.20) with score 0.998
 - left_wrist: (603.74, 303.53) with score 0.995
 - right_wrist: (506.01, 218.68) with score 0.992
 - left_hip: (536.00, 306.25) with score 1.000
 - right_hip: (472.79, 311.69) with score 0.999
 - left_knee: (580.82, 366.38) with score 0.996
 - right_knee: (500.25, 449.72) with score 0.954
 - left_ankle: (572.21, 449.52) with score 0.993
 - right_ankle: (541.37, 436.71) with score 0.916
Found person at [93.58, 19.64, 492.62, 522.45] with score 0.909
 - left_shoulder: (233.76, 109.57) with score 0.971
 - right_shoulder: (229.56, 100.34) with score 0.950
 - left_elbow: (317.31, 162.73) with score 0.950
 - right_elbow: (229.98, 179.31) with score 0.934
 - left_wrist: (385.59, 219.03) with score 0.870
 - right_wrist: (161.31, 230.74) with score 0.952
 - left_hip: (351.23, 243.42) with score 0.998
 - right_hip: (361.94, 240.70) with score 0.999
 - left_knee: (297.77, 382.00) with score 0.998
 - right_knee: (306.07, 393.59) with score 1.000
 - left_ankle: (413.48, 354.16) with score 1.000
 - right_ankle: (445.30, 488.11) with score 0.999
Found person at [-1.46, 50.68, 160.66, 371.74] with score 0.780
 - nose: (80.17, 81.16) with score 0.570
 - left_eye: (85.17, 75.45) with score 0.383
 - right_eye: (70.20, 77.09) with score 0.382
 - left_shoulder: (121.30, 114.98) with score 0.981
 - right_shoulder: (46.56, 114.41) with score 0.981
 - left_elbow: (144.09, 163.76) with score 0.777
 - right_elbow: (29.69, 159.24) with score 0.886
 - left_wrist: (142.31, 205.64) with score 0.725
 - right_wrist: (6.24, 199.62) with score 0.876
 - left_hip: (108.07, 208.90) with score 0.992
 - right_hip: (64.72, 212.01) with score 0.996
 - left_knee: (115.26, 276.52) with score 0.998
 - right_knee: (65.09, 283.25) with score 0.998
 - left_ankle: (126.09, 340.42) with score 0.991
 - right_ankle: (63.88, 348.88) with score 0.977
Found person at [526.35, 36.25, 650.42, 280.90] with score 0.328
 - nose: (554.06, 71.87) with score 0.901
 - left_eye: (562.10, 66.30) with score 0.928
 - right_eye: (546.65, 66.36) with score 0.746
 - left_ear: (575.98, 68.17) with score 0.658
 - left_shoulder: (588.04, 102.61) with score 0.999
 - right_shoulder: (526.00, 102.94) with score 0.704
 - left_elbow: (618.11, 149.18) with score 0.984
 - left_wrist: (630.77, 189.42) with score 0.961
 - left_hip: (578.74, 181.42) with score 0.966
 - right_hip: (530.33, 176.46) with score 0.698
 - left_knee: (568.74, 233.01) with score 0.958
 - right_knee: (542.44, 243.87) with score 0.687
 - left_ankle: (585.17, 284.79) with score 0.838
 - right_ankle: (550.07, 292.19) with score 0.435
```

</details> | {"license": "apache-2.0", "library_name": "transformers.js", "tags": ["pose-estimation"]} | Xenova/RTMO-t | null | [
"transformers.js",
"onnx",
"rtmo",
"pose-estimation",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T11:12:42+00:00 | [] | [] | TAGS
#transformers.js #onnx #rtmo #pose-estimation #license-apache-2.0 #region-us
|
URL with ONNX weights to be compatible with URL.
## Usage (URL)
If you haven't already, you can install the URL JavaScript library from NPM using 'npm i @xenova/transformers'.
Example: Perform pose-estimation w/ 'Xenova/RTMO-t'.
<details>
<summary>See example output</summary>
</details> | [
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/RTMO-t'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n\n</details>"
] | [
"TAGS\n#transformers.js #onnx #rtmo #pose-estimation #license-apache-2.0 #region-us \n",
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/RTMO-t'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n\n</details>"
] |
null | transformers.js | ERROR: type should be string, got "\n\nhttps://github.com/open-mmlab/mmpose/tree/main/projects/rtmo with ONNX weights to be compatible with Transformers.js.\n\n## Usage (Transformers.js)\n\nIf you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:\n```bash\nnpm i @xenova/transformers\n```\n\n**Example:** Perform pose-estimation w/ `Xenova/RTMO-s`.\n\n```js\nimport { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';\n\n// Load model and processor\nconst model_id = 'Xenova/RTMO-s';\nconst model = await AutoModel.from_pretrained(model_id);\nconst processor = await AutoProcessor.from_pretrained(model_id);\n\n// Read image and run processor\nconst url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';\nconst image = await RawImage.read(url);\nconst { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image);\n\n// Predict bounding boxes and keypoints\nconst { dets, keypoints } = await model({ input: pixel_values });\n\n// Select the first image\nconst predicted_boxes = dets.tolist()[0];\nconst predicted_points = keypoints.tolist()[0];\nconst [height, width] = original_sizes[0];\nconst [resized_height, resized_width] = reshaped_input_sizes[0];\n\n// Compute scale values\nconst xScale = width / resized_width;\nconst yScale = height / resized_height;\n\n// Define thresholds\nconst point_threshold = 0.3;\nconst box_threshold = 0.3;\n\n// Display results\nfor (let i = 0; i < predicted_boxes.length; ++i) {\n const [xmin, ymin, xmax, ymax, box_score] = predicted_boxes[i];\n if (box_score < box_threshold) continue;\n\n const x1 = (xmin * xScale).toFixed(2);\n const y1 = (ymin * yScale).toFixed(2);\n const x2 = (xmax * xScale).toFixed(2);\n const y2 = (ymax * yScale).toFixed(2);\n\n console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${box_score.toFixed(3)}`)\n const points = predicted_points[i]; // of shape [17, 3]\n for (let id = 0; id < points.length; ++id) {\n const label = model.config.id2label[id];\n const [x, y, point_score] = points[id];\n if (point_score < point_threshold) continue;\n console.log(` - ${label}: (${(x * xScale).toFixed(2)}, ${(y * yScale).toFixed(2)}) with score ${point_score.toFixed(3)}`);\n }\n}\n```\n\n<details>\n\n<summary>See example output</summary>\n\n```\nFound person at [423.33, 55.52, 644.28, 504.13] with score 0.988\n - nose: (527.30, 117.12) with score 0.733\n - left_eye: (541.79, 109.26) with score 0.554\n - right_eye: (515.04, 107.59) with score 0.475\n - left_shoulder: (563.30, 171.75) with score 1.000\n - right_shoulder: (464.21, 159.75) with score 1.000\n - left_elbow: (575.71, 238.04) with score 0.998\n - right_elbow: (436.06, 218.10) with score 0.999\n - left_wrist: (605.86, 303.35) with score 1.000\n - right_wrist: (497.47, 220.82) with score 1.000\n - left_hip: (540.97, 307.31) with score 1.000\n - right_hip: (475.85, 318.78) with score 1.000\n - left_knee: (578.63, 368.63) with score 1.000\n - right_knee: (501.05, 442.49) with score 1.000\n - left_ankle: (572.11, 464.96) with score 0.991\n - right_ankle: (535.75, 441.52) with score 0.981\nFound person at [89.97, 3.96, 517.81, 507.28] with score 0.966\n - left_shoulder: (242.65, 111.06) with score 0.999\n - right_shoulder: (228.79, 112.54) with score 0.999\n - left_elbow: (321.84, 169.07) with score 0.999\n - right_elbow: (225.76, 218.20) with score 
1.000\n - left_wrist: (351.73, 220.74) with score 0.999\n - right_wrist: (160.19, 228.03) with score 1.000\n - left_hip: (342.34, 246.81) with score 1.000\n - right_hip: (360.05, 259.35) with score 0.999\n - left_knee: (299.56, 377.97) with score 0.998\n - right_knee: (313.81, 378.83) with score 0.976\n - left_ankle: (443.84, 312.35) with score 0.983\n - right_ankle: (424.74, 433.61) with score 0.823\nFound person at [-0.53, 51.78, 153.65, 371.17] with score 0.769\n - nose: (75.52, 85.67) with score 0.363\n - left_shoulder: (121.54, 113.17) with score 1.000\n - right_shoulder: (49.77, 117.60) with score 1.000\n - left_elbow: (132.90, 147.02) with score 0.932\n - right_elbow: (30.31, 156.42) with score 0.992\n - left_wrist: (154.43, 162.08) with score 0.871\n - right_wrist: (17.20, 196.43) with score 0.943\n - left_hip: (105.61, 204.27) with score 0.999\n - right_hip: (61.99, 203.66) with score 0.999\n - left_knee: (114.70, 270.91) with score 1.000\n - right_knee: (63.75, 275.33) with score 1.000\n - left_ankle: (125.53, 342.00) with score 0.998\n - right_ankle: (63.16, 344.07) with score 0.997\nFound person at [519.40, 34.94, 650.11, 312.07] with score 0.488\n - nose: (554.82, 76.58) with score 0.920\n - left_eye: (563.12, 69.41) with score 0.666\n - right_eye: (544.82, 70.01) with score 0.595\n - left_shoulder: (596.60, 105.61) with score 0.999\n - right_shoulder: (523.29, 107.31) with score 0.969\n - left_elbow: (625.14, 151.30) with score 0.999\n - right_elbow: (515.96, 147.59) with score 0.322\n - left_wrist: (630.90, 196.91) with score 0.998\n - right_wrist: (520.75, 181.83) with score 0.415\n - left_hip: (583.24, 200.84) with score 0.998\n - right_hip: (533.69, 200.01) with score 0.978\n - left_knee: (583.79, 265.14) with score 0.934\n - right_knee: (538.27, 262.98) with score 0.669\n - left_ankle: (584.90, 309.76) with score 0.489\n```\n\n</details>" | {"license": "apache-2.0", "library_name": "transformers.js", "tags": ["pose-estimation"]} | Xenova/RTMO-s | null | [
"transformers.js",
"onnx",
"rtmo",
"pose-estimation",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T11:12:44+00:00 | [] | [] | TAGS
#transformers.js #onnx #rtmo #pose-estimation #license-apache-2.0 #region-us
|
URL with ONNX weights to be compatible with URL.
## Usage (URL)
If you haven't already, you can install the URL JavaScript library from NPM using 'npm i @xenova/transformers'.
Example: Perform pose-estimation w/ 'Xenova/RTMO-s'.
<details>
<summary>See example output</summary>
</details> | [
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/RTMO-s'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n\n</details>"
] | [
"TAGS\n#transformers.js #onnx #rtmo #pose-estimation #license-apache-2.0 #region-us \n",
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/RTMO-s'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n\n</details>"
] |
null | transformers.js | ERROR: type should be string, got "\n\nhttps://github.com/open-mmlab/mmpose/tree/main/projects/rtmo with ONNX weights to be compatible with Transformers.js.\n\n## Usage (Transformers.js)\n\nIf you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:\n```bash\nnpm i @xenova/transformers\n```\n\n**Example:** Perform pose-estimation w/ `Xenova/RTMO-m`.\n\n```js\nimport { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';\n\n// Load model and processor\nconst model_id = 'Xenova/RTMO-m';\nconst model = await AutoModel.from_pretrained(model_id);\nconst processor = await AutoProcessor.from_pretrained(model_id);\n\n// Read image and run processor\nconst url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';\nconst image = await RawImage.read(url);\nconst { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image);\n\n// Predict bounding boxes and keypoints\nconst { dets, keypoints } = await model({ input: pixel_values });\n\n// Select the first image\nconst predicted_boxes = dets.tolist()[0];\nconst predicted_points = keypoints.tolist()[0];\nconst [height, width] = original_sizes[0];\nconst [resized_height, resized_width] = reshaped_input_sizes[0];\n\n// Compute scale values\nconst xScale = width / resized_width;\nconst yScale = height / resized_height;\n\n// Define thresholds\nconst point_threshold = 0.3;\nconst box_threshold = 0.4;\n\n// Display results\nfor (let i = 0; i < predicted_boxes.length; ++i) {\n const [xmin, ymin, xmax, ymax, box_score] = predicted_boxes[i];\n if (box_score < box_threshold) continue;\n\n const x1 = (xmin * xScale).toFixed(2);\n const y1 = (ymin * yScale).toFixed(2);\n const x2 = (xmax * xScale).toFixed(2);\n const y2 = (ymax * yScale).toFixed(2);\n\n console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${box_score.toFixed(3)}`)\n const points = predicted_points[i]; // of shape [17, 3]\n for (let id = 0; id < points.length; ++id) {\n const label = model.config.id2label[id];\n const [x, y, point_score] = points[id];\n if (point_score < point_threshold) continue;\n console.log(` - ${label}: (${(x * xScale).toFixed(2)}, ${(y * yScale).toFixed(2)}) with score ${point_score.toFixed(3)}`);\n }\n}\n```\n\n<details>\n\n<summary>See example output</summary>\n\n```\nFound person at [394.23, 54.52, 676.59, 509.93] with score 0.977\n - nose: (521.88, 120.59) with score 0.692\n - left_eye: (536.24, 109.29) with score 0.635\n - right_eye: (511.85, 107.62) with score 0.651\n - left_shoulder: (561.11, 171.55) with score 0.993\n - right_shoulder: (471.06, 157.17) with score 0.999\n - left_elbow: (574.33, 240.08) with score 0.993\n - right_elbow: (437.67, 219.04) with score 0.998\n - left_wrist: (605.09, 310.85) with score 0.996\n - right_wrist: (496.67, 218.61) with score 0.993\n - left_hip: (537.65, 305.16) with score 1.000\n - right_hip: (475.64, 313.71) with score 1.000\n - left_knee: (581.28, 366.44) with score 1.000\n - right_knee: (506.58, 432.27) with score 0.996\n - left_ankle: (575.49, 470.17) with score 0.999\n - right_ankle: (534.34, 442.35) with score 0.994\nFound person at [65.64, -3.94, 526.84, 538.72] with score 0.947\n - left_shoulder: (224.52, 111.13) with score 0.996\n - right_shoulder: (212.09, 110.60) with score 0.998\n - left_elbow: (322.33, 170.98) with score 0.998\n - right_elbow: (235.17, 223.79) with score 
1.000\n - left_wrist: (389.08, 222.90) with score 0.997\n - right_wrist: (162.75, 228.10) with score 0.998\n - left_hip: (365.58, 242.19) with score 1.000\n - right_hip: (327.40, 255.20) with score 1.000\n - left_knee: (313.14, 376.06) with score 1.000\n - right_knee: (336.28, 393.63) with score 1.000\n - left_ankle: (428.03, 347.03) with score 1.000\n - right_ankle: (434.31, 510.29) with score 0.992\nFound person at [-0.88, 48.03, 182.29, 381.19] with score 0.787\n - nose: (72.50, 83.26) with score 0.606\n - left_eye: (81.11, 76.66) with score 0.627\n - right_eye: (64.49, 77.73) with score 0.641\n - left_ear: (95.29, 78.63) with score 0.513\n - left_shoulder: (114.15, 109.26) with score 0.918\n - right_shoulder: (46.66, 115.12) with score 0.988\n - left_elbow: (131.40, 160.25) with score 0.351\n - right_elbow: (26.67, 159.11) with score 0.934\n - right_wrist: (6.60, 201.80) with score 0.681\n - left_hip: (110.48, 206.96) with score 0.998\n - right_hip: (60.89, 199.41) with score 0.997\n - left_knee: (118.23, 272.23) with score 0.999\n - right_knee: (66.52, 273.32) with score 0.994\n - left_ankle: (129.82, 346.46) with score 0.999\n - right_ankle: (60.40, 349.13) with score 0.995\nFound person at [512.82, 31.30, 662.28, 314.57] with score 0.451\n - nose: (550.07, 74.26) with score 0.766\n - left_eye: (558.96, 67.14) with score 0.955\n - right_eye: (541.52, 68.23) with score 0.783\n - left_ear: (575.04, 67.61) with score 0.952\n - left_shoulder: (589.39, 102.33) with score 0.996\n - right_shoulder: (511.02, 103.00) with score 0.699\n - left_elbow: (626.71, 148.71) with score 0.997\n - left_wrist: (633.15, 200.33) with score 0.982\n - left_hip: (580.00, 181.21) with score 0.994\n - right_hip: (524.41, 184.62) with score 0.849\n - left_knee: (594.99, 244.95) with score 0.977\n - right_knee: (533.72, 246.37) with score 0.504\n - left_ankle: (598.47, 294.18) with score 0.844\n```\n\n</details>" | {"license": "apache-2.0", "library_name": "transformers.js", "tags": ["pose-estimation"]} | Xenova/RTMO-m | null | [
"transformers.js",
"onnx",
"rtmo",
"pose-estimation",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T11:12:46+00:00 | [] | [] | TAGS
#transformers.js #onnx #rtmo #pose-estimation #license-apache-2.0 #region-us
|
URL with ONNX weights to be compatible with URL.
## Usage (URL)
If you haven't already, you can install the URL JavaScript library from NPM using 'npm i @xenova/transformers'.
Example: Perform pose-estimation w/ 'Xenova/RTMO-m'.
<details>
<summary>See example output</summary>
</details> | [
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/RTMO-m'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n\n</details>"
] | [
"TAGS\n#transformers.js #onnx #rtmo #pose-estimation #license-apache-2.0 #region-us \n",
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/RTMO-m'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n\n</details>"
] |
null | transformers.js | ERROR: type should be string, got "\n\nhttps://github.com/open-mmlab/mmpose/tree/main/projects/rtmo with ONNX weights to be compatible with Transformers.js.\n\n## Usage (Transformers.js)\n\nIf you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:\n```bash\nnpm i @xenova/transformers\n```\n\n**Example:** Perform pose-estimation w/ `Xenova/RTMO-l`.\n\n```js\nimport { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';\n\n// Load model and processor\nconst model_id = 'Xenova/RTMO-l';\nconst model = await AutoModel.from_pretrained(model_id);\nconst processor = await AutoProcessor.from_pretrained(model_id);\n\n// Read image and run processor\nconst url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';\nconst image = await RawImage.read(url);\nconst { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image);\n\n// Predict bounding boxes and keypoints\nconst { dets, keypoints } = await model({ input: pixel_values });\n\n// Select the first image\nconst predicted_boxes = dets.tolist()[0];\nconst predicted_points = keypoints.tolist()[0];\nconst [height, width] = original_sizes[0];\nconst [resized_height, resized_width] = reshaped_input_sizes[0];\n\n// Compute scale values\nconst xScale = width / resized_width;\nconst yScale = height / resized_height;\n\n// Define thresholds\nconst point_threshold = 0.3;\nconst box_threshold = 0.3;\n\n// Display results\nfor (let i = 0; i < predicted_boxes.length; ++i) {\n const [xmin, ymin, xmax, ymax, box_score] = predicted_boxes[i];\n if (box_score < box_threshold) continue;\n\n const x1 = (xmin * xScale).toFixed(2);\n const y1 = (ymin * yScale).toFixed(2);\n const x2 = (xmax * xScale).toFixed(2);\n const y2 = (ymax * yScale).toFixed(2);\n\n console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${box_score.toFixed(3)}`)\n const points = predicted_points[i]; // of shape [17, 3]\n for (let id = 0; id < points.length; ++id) {\n const label = model.config.id2label[id];\n const [x, y, point_score] = points[id];\n if (point_score < point_threshold) continue;\n console.log(` - ${label}: (${(x * xScale).toFixed(2)}, ${(y * yScale).toFixed(2)}) with score ${point_score.toFixed(3)}`);\n }\n}\n```\n\n<details>\n\n<summary>See example output</summary>\n\n```\nFound person at [400.13, 66.05, 657.48, 496.67] with score 0.978\n - nose: (520.40, 118.17) with score 0.445\n - left_eye: (531.83, 111.10) with score 0.350\n - left_shoulder: (559.65, 168.66) with score 0.999\n - right_shoulder: (469.70, 160.04) with score 0.999\n - left_elbow: (573.20, 237.82) with score 1.000\n - right_elbow: (438.51, 218.06) with score 0.999\n - left_wrist: (604.74, 308.75) with score 0.999\n - right_wrist: (495.52, 219.24) with score 0.999\n - left_hip: (537.36, 306.24) with score 1.000\n - right_hip: (477.61, 314.79) with score 0.998\n - left_knee: (576.44, 360.67) with score 1.000\n - right_knee: (500.26, 448.33) with score 0.997\n - left_ankle: (575.94, 461.43) with score 0.998\n - right_ankle: (525.18, 436.10) with score 0.996\nFound person at [84.74, 11.57, 524.53, 535.62] with score 0.970\n - left_shoulder: (240.00, 106.15) with score 0.998\n - right_shoulder: (230.72, 131.27) with score 0.999\n - left_elbow: (319.58, 164.42) with score 0.999\n - right_elbow: (232.16, 226.10) with score 1.000\n - left_wrist: (390.95, 220.65) with score 
0.999\n - right_wrist: (157.61, 227.93) with score 0.999\n - left_hip: (363.29, 249.14) with score 1.000\n - right_hip: (337.65, 250.50) with score 1.000\n - left_knee: (297.35, 368.55) with score 1.000\n - right_knee: (328.29, 390.84) with score 1.000\n - left_ankle: (433.81, 343.83) with score 0.999\n - right_ankle: (452.74, 504.60) with score 0.995\nFound person at [-4.11, 53.42, 174.91, 372.64] with score 0.644\n - nose: (74.67, 84.38) with score 0.375\n - left_shoulder: (114.29, 113.60) with score 0.991\n - right_shoulder: (44.21, 117.73) with score 0.989\n - left_elbow: (124.69, 159.42) with score 0.978\n - right_elbow: (26.54, 154.78) with score 0.995\n - left_wrist: (132.86, 168.78) with score 0.957\n - right_wrist: (6.44, 195.67) with score 0.986\n - left_hip: (98.90, 199.49) with score 0.978\n - right_hip: (62.77, 200.49) with score 0.976\n - left_knee: (111.91, 277.06) with score 0.998\n - right_knee: (65.08, 276.40) with score 0.999\n - left_ankle: (128.95, 344.65) with score 0.973\n - right_ankle: (63.55, 345.60) with score 0.992\nFound person at [511.40, 32.53, 658.71, 345.63] with score 0.384\n - nose: (554.88, 74.25) with score 0.796\n - left_eye: (563.64, 68.39) with score 0.716\n - right_eye: (547.38, 68.22) with score 0.542\n - left_ear: (575.42, 72.40) with score 0.324\n - left_shoulder: (576.47, 105.27) with score 0.999\n - right_shoulder: (531.19, 105.55) with score 0.956\n - left_elbow: (623.35, 151.54) with score 0.999\n - right_elbow: (549.79, 144.36) with score 0.387\n - left_wrist: (631.33, 198.37) with score 0.991\n - right_wrist: (547.36, 162.58) with score 0.486\n - left_hip: (578.36, 192.67) with score 0.989\n - right_hip: (555.21, 188.00) with score 0.925\n - left_knee: (604.56, 239.95) with score 0.977\n - right_knee: (545.23, 221.37) with score 0.952\n - left_ankle: (587.82, 323.26) with score 0.401\n - right_ankle: (546.77, 322.69) with score 0.846\n```\n\n</details>" | {"license": "apache-2.0", "library_name": "transformers.js", "tags": ["pose-estimation"]} | Xenova/RTMO-l | null | [
"transformers.js",
"onnx",
"rtmo",
"pose-estimation",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T11:12:49+00:00 | [] | [] | TAGS
#transformers.js #onnx #rtmo #pose-estimation #license-apache-2.0 #region-us
|
URL with ONNX weights to be compatible with URL.
## Usage (URL)
If you haven't already, you can install the URL JavaScript library from NPM using 'npm i @xenova/transformers'.
Example: Perform pose-estimation w/ 'Xenova/RTMO-l'.
<details>
<summary>See example output</summary>
</details> | [
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/RTMO-l'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n\n</details>"
] | [
"TAGS\n#transformers.js #onnx #rtmo #pose-estimation #license-apache-2.0 #region-us \n",
"## Usage (URL)\n\nIf you haven't already, you can install the URL JavaScript library from NPM using:\n\n\nExample: Perform pose-estimation w/ 'Xenova/RTMO-l'.\n\n\n\n<details>\n\n<summary>See example output</summary>\n\n\n\n</details>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
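Pending author-provided code, a minimal sketch using the standard 🤗 transformers API (the repo id comes from this card; dtype, device, and generation settings are illustrative assumptions):

```python
# Sketch under the assumptions above: standard chat-style text generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Anas989898/llama-3-8b-it-codeact-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision for an 8B model
    device_map="auto",
)

# The card carries the "conversational" tag, so the chat template is applied.
messages = [{"role": "user", "content": "Write a function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```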
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | Anas989898/llama-3-8b-it-codeact-v0.1 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T11:13:26+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #pytorch #llama #text-generation #unsloth #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #unsloth #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | null | ## Model
 | {"tags": ["stable-diffusion", "text-to-image", "StableDiffusionPipeline", "lora"]} | fearvel/lloyd-de-saloum-pony-v1 | null | [
"stable-diffusion",
"text-to-image",
"StableDiffusionPipeline",
"lora",
"region:us"
] | null | 2024-04-26T11:15:18+00:00 | [] | [] | TAGS
#stable-diffusion #text-to-image #StableDiffusionPipeline #lora #region-us
| ## Model
!pipeline | [
"## Model\n\n!pipeline"
] | [
"TAGS\n#stable-diffusion #text-to-image #StableDiffusionPipeline #lora #region-us \n",
"## Model\n\n!pipeline"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
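Pending author-provided code, a minimal sketch using the standard 🤗 transformers pipeline (the repo id comes from this card; the medical-style prompt and sampling settings are illustrative assumptions):

```python
# Sketch under the assumptions above: plain text generation with GPT-2.
from transformers import pipeline

generator = pipeline("text-generation", model="ustunek/gpt-2-doctor-eng")

result = generator(
    "Patient: I have a persistent cough and a mild fever. Doctor:",
    max_new_tokens=60,
    do_sample=True,
    temperature=0.8,
)
print(result[0]["generated_text"])
```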
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ustunek/gpt-2-doctor-eng | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T11:17:12+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speaker-segmentation-fine-tuned-voxconverse-en
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/voxconverse dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1250
- Der: 0.8257
- False Alarm: 0.3733
- Missed Detection: 0.3995
- Confusion: 0.0528
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.9302 | 1.0 | 791 | 0.9903 | 0.6790 | 0.5013 | 0.0965 | 0.0812 |
| 0.8848 | 2.0 | 1582 | 1.0536 | 0.7965 | 0.3991 | 0.3409 | 0.0565 |
| 0.8513 | 3.0 | 2373 | 1.0884 | 0.8114 | 0.4017 | 0.3528 | 0.0569 |
| 0.7926 | 4.0 | 3164 | 1.1292 | 0.8378 | 0.3660 | 0.4219 | 0.0500 |
| 0.8147 | 5.0 | 3955 | 1.1250 | 0.8257 | 0.3733 | 0.3995 | 0.0528 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/voxconverse"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-voxconverse-en", "results": []}]} | tgrhn/speaker-segmentation-fine-tuned-voxconverse-en | null | [
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"dataset:diarizers-community/voxconverse",
"base_model:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:20:59+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/voxconverse #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us
| speaker-segmentation-fine-tuned-voxconverse-en
==============================================
This model is a fine-tuned version of pyannote/segmentation-3.0 on the diarizers-community/voxconverse dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1250
* Der: 0.8257
* False Alarm: 0.3733
* Missed Detection: 0.3995
* Confusion: 0.0528
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* num\_epochs: 5.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.0+cu121
* Datasets 2.17.0
* Tokenizers 0.19.1
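A hedged usage sketch (assuming the 'diarizers' and 'pyannote.audio' 3.x APIs; the audio path is a placeholder, and loading the gated pipeline may require a Hugging Face access token):

```python
# Sketch under the assumptions above: load this fine-tuned segmentation
# checkpoint and swap it into the stock pyannote diarization pipeline.
import torch
from diarizers import SegmentationModel
from pyannote.audio import Pipeline

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

segmentation = SegmentationModel.from_pretrained(
    "tgrhn/speaker-segmentation-fine-tuned-voxconverse-en"
).to_pyannote_model()

pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")
pipeline._segmentation.model = segmentation.to(device)

diarization = pipeline("audio.wav")  # placeholder input file
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:6.1f}s - {turn.end:6.1f}s  {speaker}")
```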
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 5.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/voxconverse #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 5.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1340
- Precision: 0.9582
- Recall: 0.9500
- F1: 0.9541
- Accuracy: 0.9499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1595 | 0.5 | 7000 | 0.1539 | 0.9469 | 0.9377 | 0.9423 | 0.9375 |
| 0.1497 | 0.99 | 14000 | 0.1383 | 0.9549 | 0.9418 | 0.9483 | 0.9437 |
| 0.1185 | 1.49 | 21000 | 0.1314 | 0.9557 | 0.9464 | 0.9510 | 0.9467 |
| 0.1153 | 1.99 | 28000 | 0.1306 | 0.9553 | 0.9503 | 0.9528 | 0.9487 |
| 0.0977 | 2.49 | 35000 | 0.1340 | 0.9582 | 0.9500 | 0.9541 | 0.9499 |
| 0.0948 | 2.98 | 42000 | 0.1325 | 0.9584 | 0.9512 | 0.9548 | 0.9506 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-base-uncased-finetuned-ner", "results": []}]} | Sevixdd/bert-base-uncased-finetuned-ner | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:21:12+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| bert-base-uncased-finetuned-ner
===============================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1340
* Precision: 0.9582
* Recall: 0.9500
* F1: 0.9541
* Accuracy: 0.9499
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.3.0+cu121
* Datasets 2.19.0
* Tokenizers 0.15.2
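A hedged usage sketch (standard transformers token-classification pipeline; the repo id comes from this card, and the example sentence is an illustrative assumption):

```python
# Sketch under the assumptions above: run the fine-tuned checkpoint through
# the token-classification pipeline and group word pieces into entity spans.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Sevixdd/bert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)

for entity in ner("Angela Merkel met Emmanuel Macron in Paris."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```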
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # ai-playground
The repo currently consists of
- [forum-gpt/data-creation](/forum-gpt/data-creation/): a package for data creation and manipulation
- [forum-gpt/evaluation-app](/forum-gpt/evaluation-app/): a simple evaluation app
- [forum-gpt/training](forum-gpt/training/): saved axolotl training configurations
## Setup
Use Node `>= 20` with npm `>= 10`.
```bash
npm ci
```
## Quick start Evaluation App
Set `OPEN_API_KEY` in your environment variables.
You can set an arbitrary value like `foobar` in case you don't intend to use OpenAI's GPT models, e.g. `export OPEN_API_KEY=foobar`.
Configure the models to chat with in [`bots.config.json`](/forum-gpt/evaluation-app/backend/bots.config.json)
```bash
npm run build
npm run start
```
Open the app at [localhost:5173](http://localhost:5173/).
### Deploy a model in Runpod
The Evaluation App works against OpenAI's API.
We recommend [`vllm`](https://github.com/vllm-project/vllm) for deploying your own models.
A simple configuration may look like this:
- Docker Image Name: `vllm/vllm-openai:latest`
- Container Start Command: `--model mistralai/Mistral-7B-Instruct-v0.1`.
- The model name can be derived from [HuggingFace](https://huggingface.co/)
- In case you are using a private model, add an environment variable named `HUGGING_FACE_HUB_TOKEN` to your pod with your token
- Expose HTTP Ports: `8000`
- Disk sizes: Whatever is appropriate, e.g. 2x `50` GB
- Volume Mount Path: `/root/.cache/huggingface`.
- Recommended mount when using vllm images to avoid downloading the model whenever the pod is restarted
Use [this Runpod link](https://www.runpod.io/console/deploy?template=n338mcq81p) to start with a configuration for the Mistral-7B-Instruct-v0.2 model.
You can use "Edit Pod Template" to adjust the template before using it.
Once the pod is started the first time, it will get a random id assigned by Runpod, e.g. `g9q3ycbfk2yorr`.
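With that id you can smoke-test the endpoint before wiring it into the app. A hypothetical check with the `openai` Python client (the proxy URL pattern `https://<podId>-8000.proxy.runpod.net` and the dummy key are assumptions based on Runpod's HTTP proxy and vllm's OpenAI-compatible server, not part of this repo):

```python
# Hypothetical smoke test, not part of this repo. Assumes Runpod's standard
# HTTP proxy URL and that vllm serves the OpenAI-compatible API under /v1;
# vllm does not validate the API key by default, so any value works.
from openai import OpenAI

client = OpenAI(
    base_url="https://g9q3ycbfk2yorr-8000.proxy.runpod.net/v1",
    api_key="foobar",
)
response = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.1",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```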
Configure the pod in [`bots.config.json`](/forum-gpt/evaluation-app/backend/bots.config.json)
- `id` must be unique between pods
- `type: runpod`
- `modelId` must be the same as used in the Container Start Command above
- `runpodId` is the id assigned by Runpod
For `Mistral`-based models, disable the system prompt with `systemPrompt: null`, as these models don't support it.
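Putting these fields together, a hypothetical entry in [`bots.config.json`](/forum-gpt/evaluation-app/backend/bots.config.json) might look like this; the exact schema is not shown in this README, so the surrounding array structure and the `id` value are assumptions:

```json
[
  {
    "id": "mistral-7b-runpod",
    "type": "runpod",
    "modelId": "mistralai/Mistral-7B-Instruct-v0.1",
    "runpodId": "g9q3ycbfk2yorr",
    "systemPrompt": null
  }
]
```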
| {} | jfaltermeier/llama3-theia-workshop-johannes-with-config | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T11:21:54+00:00 | [] | [] | TAGS
#transformers #pytorch #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # ai-playground
The repo currently consists of
- forum-gpt/data-creation: a package for data creation and manipulation
- forum-gpt/evaluation-app: a simple evaluation app
- forum-gpt/training: saved axolotl training configurations
## Setup
Use Node '>= 20' with npm '>= 10'.
## Quick start Evaluation App
Set 'OPEN_API_KEY' in your environment variables.
You can set an arbitrary value like 'foobar' if you don't intend to use OpenAI's GPT models, e.g. 'export OPEN_API_KEY=foobar'.
Configure the models to chat with in 'URL'
Open the app at localhost:5173.
### Deploy a model in Runpod
The Evaluation App works against OpenAI's API.
We recommend 'vllm' for deploying your own models.
A simple configuration may look like this:
- Docker Image Name: 'vllm/vllm-openai:latest'
- Container Start Command: '--model mistralai/Mistral-7B-Instruct-v0.1'.
- The model name can be derived from HuggingFace
- In case you are using a private model, add an environment variable named 'HUGGING_FACE_HUB_TOKEN' to your pod with your token
- Expose HTTP Ports: '8000'
- Disk sizes: Whatever is appropriate, e.g. 2x '50' GB
- Volume Mount Path: '/root/.cache/huggingface'.
- Recommended mount when using vllm images to avoid downloading the model whenever the pod is restarted
Use this Runpod link to start with a configuration for the Mistral-7B-Instruct-v0.2 model.
You can use "Edit Pod Template" to adjust the template before using it.
Once the pod is started the first time, it will get a random id assigned by Runpod, e.g. 'g9q3ycbfk2yorr'.
Configure the pod in 'URL'
- 'id' must be unique between pods
- 'type: runpod'
- 'modelId' must be the same as used in the Container Start Command above
- 'runpodId' is the id assigned by Runpod
For 'Mistral'-based models, disable the system prompt with 'systemPrompt: null', as these models don't support it.
| [
"# ai-playground\n\nThe repo currently consists out of\n\n- forum-gpt/data-creation: a package for data creation and manipulation\n- forum-gpt/evaluation-app: a simple evaluation app\n- forum-gpt/training: saved axolotl training configurations",
"## Setup\n\nUse Node '>= 20' with npm '>= 10'.",
"## Quick start Evaluation App\n\nSet 'OPEN_API_KEY' in your environment variables.\nYou can set an arbitrary value like 'foobar' in case you don't intend to use Open AI's GPT models, e.g. 'export OPEN_API_KEY=foobar'.\n\nConfigure the models to chat with in 'URL'\n\n\n\nOpen app at localhost:5173.",
"### Deploy a model in Runpod\n\nThe Evaluation App works against Open AI's API.\nWe recommend 'vllm' for deploying own models.\n\nA simple configuration may look like this:\n\n- Docker Image Name: 'vllm/vllm-openai:latest'\n- Container Start Command: '--model mistralai/Mistral-7B-Instruct-v0.1'.\n - The model name can be derived from HuggingFace\n - In case you are using a private model, add an environment variable named 'HUGGING_FACE_HUB_TOKEN' to your pod with your token\n- Expose HTTP Ports: '8000'\n- Disk sizes: Whatever is appropriate, e.g. 2x '50' GB\n- Volume Mount Path: '/root/.cache/huggingface'.\n - Recommended mount when using vllm images to avoid downloading the model whenever the pod is restarted\n\nUse this Runpod link to start with a configuration for Mistral-7B-Instruct-v0.2 model.\nYou can use \"Edit Pod Template\" to adjust the template before using it.\n\nOnce the pod is started the first time, it will get a random id assigned by Runpod, e.g. 'g9q3ycbfk2yorr'.\n\nConfigure the pod in 'URL'\n\n- 'id' must be unique between pods\n- 'type: runpod'\n- 'modelId' must be the same as used in the Container Start Command above\n- 'runpodId' is the id assigned by Runpod\n\nIn case of 'Mistral' based models, disable the system prompt with 'systemPrompt: null' as these models don't support it."
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ai-playground\n\nThe repo currently consists out of\n\n- forum-gpt/data-creation: a package for data creation and manipulation\n- forum-gpt/evaluation-app: a simple evaluation app\n- forum-gpt/training: saved axolotl training configurations",
"## Setup\n\nUse Node '>= 20' with npm '>= 10'.",
"## Quick start Evaluation App\n\nSet 'OPEN_API_KEY' in your environment variables.\nYou can set an arbitrary value like 'foobar' in case you don't intend to use Open AI's GPT models, e.g. 'export OPEN_API_KEY=foobar'.\n\nConfigure the models to chat with in 'URL'\n\n\n\nOpen app at localhost:5173.",
"### Deploy a model in Runpod\n\nThe Evaluation App works against Open AI's API.\nWe recommend 'vllm' for deploying own models.\n\nA simple configuration may look like this:\n\n- Docker Image Name: 'vllm/vllm-openai:latest'\n- Container Start Command: '--model mistralai/Mistral-7B-Instruct-v0.1'.\n - The model name can be derived from HuggingFace\n - In case you are using a private model, add an environment variable named 'HUGGING_FACE_HUB_TOKEN' to your pod with your token\n- Expose HTTP Ports: '8000'\n- Disk sizes: Whatever is appropriate, e.g. 2x '50' GB\n- Volume Mount Path: '/root/.cache/huggingface'.\n - Recommended mount when using vllm images to avoid downloading the model whenever the pod is restarted\n\nUse this Runpod link to start with a configuration for Mistral-7B-Instruct-v0.2 model.\nYou can use \"Edit Pod Template\" to adjust the template before using it.\n\nOnce the pod is started the first time, it will get a random id assigned by Runpod, e.g. 'g9q3ycbfk2yorr'.\n\nConfigure the pod in 'URL'\n\n- 'id' must be unique between pods\n- 'type: runpod'\n- 'modelId' must be the same as used in the Container Start Command above\n- 'runpodId' is the id assigned by Runpod\n\nIn case of 'Mistral' based models, disable the system prompt with 'systemPrompt: null' as these models don't support it."
] |
text-generation | transformers |
*There is currently an issue with the **model generating random reserved special tokens (like "<|reserved_special_token_49|>") at the end**. Please use with `skip_special_tokens=True`. We will update once we have found the reason for this behaviour. If you find a solution, please let us know!*
# Llama 3 DiscoLM German 8b v0.1 Experimental
<p align="center"><img src="/DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental/resolve/main/disco_llama.webp" width="400"></p>
# Introduction
**Llama 3 DiscoLM German 8b v0.1 Experimental** is an experimental Llama 3 based version of [DiscoLM German](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1).
This is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future.
Please find an online demo [here](https://364b61f772fa7baacb.gradio.live/) (we may take this offline for updates).
# Prompt Format
DiscoLM German uses ChatML as the prompt format, which enables OpenAI endpoint compatibility and is supported by most inference libraries and frontends.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
```
<|im_start|>system
Du bist ein hilfreicher Assistent.<|im_end|>
<|im_start|>user
Wer bist du?<|im_end|>
<|im_start|>assistant
Ich bin ein Sprachmodell namens DiscoLM German und ich wurde von DiscoResearch trainiert.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "Du bist ein hilfreicher Assistent."},
{"role": "user", "content": "Wer bist du?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
# Example Code for Inference
```python
model_id = "DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "Du bist ein hilfreicher Assistent."},
{"role": "user", "content": "Wer bist du?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
# Limitations & Biases
This model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.
# License
This model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see [LICENSE](LICENSE) for more information.
# Acknowledgements
Built with Meta Llama 3.
DiscoLM German is a [DiscoResearch](https://huggingface.co/DiscoResearch) project, a collective effort by [JP Harries](https://huggingface.co/jphme), [Björn Plüster](https://huggingface.co/bjoernp) and [Daniel Auras](https://huggingface.co/rasdani).
Development of Llama 3 DiscoLM German 8b was sponsored by [ellamind](https://ellamind.com).
Compute was sponsored generously by [sysGen GmbH](https://www.sysgen.de/).
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# About DiscoResearch
DiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our [Discord](https://discord.gg/ttNdas89f3), share your opinions and ideas, and advance open LLM research with us!
# Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place.
| {"library_name": "transformers", "tags": []} | mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-GPTQ | null | [
"transformers",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-26T11:22:05+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
*There is currently an issue with the model generating random reserved special tokens (like "<|reserved_special_token_49|>") at the end. Please use with 'skip_special_tokens=True'. We will update once we have found the reason for this behaviour. If you find a solution, please let us know!*
# Llama 3 DiscoLM German 8b v0.1 Experimental
<p align="center"><img src="/DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental/resolve/main/disco_llama.webp" width="400"></p>
# Introduction
Llama 3 DiscoLM German 8b v0.1 Experimental is an experimental Llama 3 based version of DiscoLM German.
This is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future.
Please find an online demo here (we may take this offline for updates).
# Prompt Format
DiscoLM German uses ChatML as the prompt format, which enables OpenAI endpoint compatibility and is supported by most inference libraries and frontends.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This prompt is available as a chat template, which means you can format messages using the
'tokenizer.apply_chat_template()' method:
When tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\n' to your prompt, to ensure
that the model continues with an assistant response.
# Example Code for Inference
# Limitations & Biases
This model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.
# License
This model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see LICENSE for more information.
# Acknowledgements
Built with Meta Llama 3.
DiscoLM German is a DiscoResearch project, a collective effort by JP Harries, Björn Plüster and Daniel Auras.
Development of Llama 3 DiscoLM German 8b was sponsored by ellamind.
Compute was sponsored generously by sysGen GmbH.
<img src="URL alt="Built with Axolotl" width="200" height="32"/>
# About DiscoResearch
DiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!
# Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place.
| [
"# Llama 3 DiscoLM German 8b v0.1 Experimental\n\n<p align=\"center\"><img src=\"/DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental/resolve/main/disco_llama.webp\" width=\"400\"></p>",
"# Introduction\n\nLlama 3 DiscoLM German 8b v0.1 Experimental is an experimental Llama 3 based version of DiscoLM German.\n\nThis is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future.\n\nPlease find a online Demo here (we may take this offline for updates).",
"# Prompt Format\n\nDiscoLM German uses ChatML as the prompt format which enables OpenAI endpoint compatability and is supported by most inference libraries and frontends.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.",
"# Example Code for Inference",
"# Limitations & Biases\n\nThis model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information.\nThis model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.",
"# License\n\nThis model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see LICENSE for more information.",
"# Acknowledgements\n\nBuilt with Meta Llama 3.\n\nDiscoLM German is a DiscoResearch project, a collective effort by JP Harries, Bjรถrn Plรผster and Daniel Auras.\n\nDevelopment of Llama 3 DiscoLM German 8b was sponsored by ellamind.\nCompute was sponsored generously by sysGen GmbH.\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>",
"# About DiscoResearch\n\nDiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!",
"# Disclaimer\n\nThe license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place."
] | [
"TAGS\n#transformers #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Llama 3 DiscoLM German 8b v0.1 Experimental\n\n<p align=\"center\"><img src=\"/DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental/resolve/main/disco_llama.webp\" width=\"400\"></p>",
"# Introduction\n\nLlama 3 DiscoLM German 8b v0.1 Experimental is an experimental Llama 3 based version of DiscoLM German.\n\nThis is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future.\n\nPlease find a online Demo here (we may take this offline for updates).",
"# Prompt Format\n\nDiscoLM German uses ChatML as the prompt format which enables OpenAI endpoint compatability and is supported by most inference libraries and frontends.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.",
"# Example Code for Inference",
"# Limitations & Biases\n\nThis model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information.\nThis model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.",
"# License\n\nThis model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see LICENSE for more information.",
"# Acknowledgements\n\nBuilt with Meta Llama 3.\n\nDiscoLM German is a DiscoResearch project, a collective effort by JP Harries, Bjรถrn Plรผster and Daniel Auras.\n\nDevelopment of Llama 3 DiscoLM German 8b was sponsored by ellamind.\nCompute was sponsored generously by sysGen GmbH.\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>",
"# About DiscoResearch\n\nDiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!",
"# Disclaimer\n\nThe license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | nextab/Athena-v1.0-sft | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T11:24:46+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | The wonderful ToolsBaer OLM to EML Conversion software makes importing Mac Outlook OLM files into the EML file format simple and quick. The software can handle OLM files of any size or type effortlessly. One of its greatest benefits is its ability to import OLM files into a specific EML file without any problems or additional software installation. The tool's simple design makes it easy to use, even for non-technical users. All emails and attachments from OLM files are converted to EML files with 100% accuracy. The OLM to EML Conversion completes the transfer without changing any files. The application is guaranteed to preserve every OLM email attribute, including To, Cc, Bcc, From, Sent, date, time, and others. Users can use the software's trial version to convert the first 10 emails from each folder. The application works with all versions of Windows 11, 10, 8.1, 8, 7, XP, and Vista. The program is free for everyone to install and use.
Read more: http://www.toolsbaer.com/olm-to-eml-conversion/ | {} | madelineoliver/ToolsBaer-OLM-to-EML-Conversion | null | [
"region:us"
] | null | 2024-04-26T11:25:45+00:00 | [] | [] | TAGS
#region-us
| The wonderful ToolsBaer OLM to EML Conversion software makes importing Mac Outlook OLM files into the EML file format simple and quick. The software can handle OLM files of any size or type effortlessly. One of its greatest benefits is its ability to import OLM files into a specific EML file without any problems or additional software installation. The tool's simple design makes it easy to use, even for non-technical users. All emails and attachments from OLM files are converted to EML files with 100% accuracy. The OLM to EML Conversion completes the transfer without changing any files. The application is guaranteed to preserve every OLM email attribute, including To, Cc, Bcc, From, Sent, date, time, and others. Users can use the software's trial version to convert the first 10 emails from each folder. The application works with all versions of Windows 11, 10, 8.1, 8, 7, XP, and Vista. The program is free for everyone to install and use.
Read more: URL | [] | [
"TAGS\n#region-us \n"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "lmsys/vicuna-7b-v1.5"} | tt1225/aic24-track2-multiview-videollava-7b-lora | null | [
"peft",
"safetensors",
"llava_llama",
"arxiv:1910.09700",
"base_model:lmsys/vicuna-7b-v1.5",
"4-bit",
"region:us"
] | null | 2024-04-26T11:26:38+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #llava_llama #arxiv-1910.09700 #base_model-lmsys/vicuna-7b-v1.5 #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #llava_llama #arxiv-1910.09700 #base_model-lmsys/vicuna-7b-v1.5 #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jd0g/Mistral-7B-NLI-v0.3 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:27:23+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tutuhu/style6 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:27:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Approach:
The TextSimpleCategoryLLM model is a GPT-2 based language model trained to generate text responses based on input prompts, focusing on a simple categorization task. The model utilizes the GPT-2 architecture, fine-tuned on a dataset consisting of text prompts paired with corresponding categories. During training, the model learns to generate text that aligns with the specified category, enabling it to provide relevant information within the given context. This approach facilitates tasks such as text completion and question answering within defined categories, offering users a straightforward and effective tool for generating context-aware text responses.
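As a rough usage sketch (the expected prompt layout is not documented on this card, so the `Problem:`/`Category:` format below is an assumption):

```python
# Hypothetical usage sketch for this model. The prompt layout
# ("Problem: ... Category:") is an assumption -- the card does not
# document the exact input format used during fine-tuning.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="AkilanSelvam/spinsnow-problem-categorizer",  # repo id of this card
)

prompt = "Problem: My internet connection keeps dropping.\nCategory:"
result = generator(prompt, max_new_tokens=8, do_sample=False)
print(result[0]["generated_text"])
```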
Trained with:
|        | label | category |
|--------|-------|----------|
| count  | 32998 | 32998 |
| unique | 31872 | 3 |
| freq   | 20 | 1299 |
| {"language": ["en"], "license": "apache-2.0", "datasets": ["AkilanSelvam/text-simple-categorization"]} | AkilanSelvam/spinsnow-problem-categorizer | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:AkilanSelvam/text-simple-categorization",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T11:31:32+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #gpt2 #text-generation #en #dataset-AkilanSelvam/text-simple-categorization #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Approach:
The TextSimpleCategoryLLM model is a GPT-2 based language model trained to generate text responses based on input prompts, focusing on a simple categorization task. The model utilizes the GPT-2 architecture, fine-tuned on a dataset consisting of text prompts paired with corresponding categories. During training, the model learns to generate text that aligns with the specified category, enabling it to provide relevant information within the given context. This approach facilitates tasks such as text completion and question answering within defined categories, offering users a straightforward and effective tool for generating context-aware text responses.
Trained with:
|        | label | category |
|--------|-------|----------|
| count  | 32998 | 32998 |
| unique | 31872 | 3 |
| freq   | 20 | 1299 |
| [] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #en #dataset-AkilanSelvam/text-simple-categorization #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Quantizations of https://huggingface.co/jeiku/Foundation_3B
# From original readme
This is a big step forward for 3B class models. Trained on smol PIPPA, alpaca-cleaned, and two custom datasets, and based on https://huggingface.co/jeiku/Rosa_v3_3B
This should serve as a decent fiction model, though it also excels at roleplaying, but is not an ideal model for logical queries or riddles.
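A hedged loading sketch with llama-cpp-python; the GGUF filename is a placeholder for whichever quantization file you download from this repo:

```python
# Hedged sketch: running one of these GGUF quantizations locally with
# llama-cpp-python. The filename is a placeholder -- substitute the
# quantization you actually downloaded.
from llama_cpp import Llama

llm = Llama(model_path="Foundation_3B.Q4_K_M.gguf", n_ctx=2048)
out = llm(
    "Write the opening paragraph of a short fantasy story.",
    max_tokens=128,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```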
| {"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "Foundation_3B"], "pipeline_tag": "text-generation", "inference": false} | duyntnet/Foundation_3B-imatrix-GGUF | null | [
"transformers",
"gguf",
"imatrix",
"Foundation_3B",
"text-generation",
"en",
"license:other",
"region:us"
] | null | 2024-04-26T11:33:19+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #imatrix #Foundation_3B #text-generation #en #license-other #region-us
| Quantizations of URL
# From original readme
This is a big step forward for 3B class models. Trained on smol PIPPA, alpaca-cleaned, and two custom datasets, and based on URL
This should serve as a decent fiction model, though it also excels at roleplaying, but is not an ideal model for logical queries or riddles.
| [
"# From original readme\n\nThis is a big step forward for 3B class models. Trained on smol PIPPA, alpaca-cleaned, and two custom datasets, and based on URL\n\nThis should serve as a decent fiction model, though it also excels at roleplaying, but is not an ideal model for logical queries or riddles."
] | [
"TAGS\n#transformers #gguf #imatrix #Foundation_3B #text-generation #en #license-other #region-us \n",
"# From original readme\n\nThis is a big step forward for 3B class models. Trained on smol PIPPA, alpaca-cleaned, and two custom datasets, and based on URL\n\nThis should serve as a decent fiction model, though it also excels at roleplaying, but is not an ideal model for logical queries or riddles."
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speaker-segmentation-fine-tuned-callhome-eng
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome eng dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4570
- Der: 0.1803
- False Alarm: 0.0556
- Missed Detection: 0.0731
- Confusion: 0.0516
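As a hedged sketch of how this checkpoint can be plugged into a diarization pipeline (the `diarizers` helper API shown here follows that library's usual pattern, but details may differ across versions):

```python
# Hedged sketch: swapping this fine-tuned segmentation model into a pyannote
# diarization pipeline, following the pattern promoted by the diarizers
# library. Treat the API details as assumptions and check your versions.
from diarizers import SegmentationModel
from pyannote.audio import Pipeline

segmentation = SegmentationModel().from_pretrained(
    "tgrhn/speaker-segmentation-fine-tuned-callhome-eng"  # this repository
).to_pyannote_model()

pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")
pipeline._segmentation.model = segmentation  # replace the stock segmenter

diarization = pipeline("conversation.wav")  # placeholder audio file
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```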
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5.0
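For orientation, the configuration above maps onto standard `transformers.TrainingArguments` roughly as follows (a sketch only; the actual launcher used for this run is not shown in the card):

```python
# Sketch only: the hyperparameters above expressed as TrainingArguments.
# Adam betas/epsilon are left at their defaults, which match the values
# listed above; the real training script is not part of this card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="speaker-segmentation-fine-tuned-callhome-eng",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=5.0,
)
```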
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.4257 | 1.0 | 362 | 0.4789 | 0.1918 | 0.0573 | 0.0786 | 0.0559 |
| 0.3889 | 2.0 | 724 | 0.4660 | 0.1866 | 0.0556 | 0.0760 | 0.0549 |
| 0.3758 | 3.0 | 1086 | 0.4587 | 0.1807 | 0.0548 | 0.0755 | 0.0503 |
| 0.3643 | 4.0 | 1448 | 0.4564 | 0.1805 | 0.0555 | 0.0734 | 0.0515 |
| 0.3511 | 5.0 | 1810 | 0.4570 | 0.1803 | 0.0556 | 0.0731 | 0.0516 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-eng", "results": []}]} | tgrhn/speaker-segmentation-fine-tuned-callhome-eng | null | [
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"dataset:diarizers-community/callhome",
"base_model:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:37:07+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us
| speaker-segmentation-fine-tuned-callhome-eng
============================================
This model is a fine-tuned version of pyannote/segmentation-3.0 on the diarizers-community/callhome eng dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4570
* Der: 0.1803
* False Alarm: 0.0556
* Missed Detection: 0.0731
* Confusion: 0.0516
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* num\_epochs: 5.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.0+cu121
* Datasets 2.17.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 5.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 5.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"} | antonyo94/Enlighten_Instruct | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-04-26T11:38:22+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-generation | transformers |

## VAGO solutions Llama-3-SauerkrautLM-8b-Instruct
Introducing **Llama-3-SauerkrautLM-8b-Instruct** – our Sauerkraut version of the powerful [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)!
The model **Llama-3-SauerkrautLM-8b-Instruct** is a **joint effort** between **VAGO Solutions** and **Hyperspace.ai.**
- Aligned with **DPO**
# Table of Contents
1. [Overview of all Llama-3-SauerkrautLM-8b-Instruct](#all-Llama-3-SauerkrautLM-8b-Instruct)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
- [Training procedure](#proceed-of-the-training)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-llama-3-8B-Instruct
| Model | HF | EXL2 | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| Llama-3-SauerkrautLM-8b-Instruct | [Link](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct) | [Link](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2) | [Link](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-GGUF) | coming soon |
## Model Details
**SauerkrautLM-llama-3-8B-Instruct**
- **Model Type:** Llama-3-SauerkrautLM-8b-Instruct is a finetuned Model based on [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- **Language(s):** German, English
- **License:** [meta-llama](https://llama.meta.com/llama3/license)
- **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/)
### Training procedure:
- We trained this model in a two-staged DPO Fine-Tuning for 1 epoch with 70k data and another epoch with 20k data.
- LaserRMT version coming soon
**We improved the model's capabilities noticeably by feeding it with curated German data.**
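A hedged sketch of what one stage of this alignment could look like with TRL's `DPOTrainer` (the dataset file, `beta`, and most hyperparameters below are assumptions; the card only states that two DPO stages of one epoch each were run on ~70k and ~20k samples):

```python
# Hedged sketch of one DPO stage with TRL. Dataset path, beta, and most
# hyperparameters are assumptions; the card only reports two stages of one
# epoch each on ~70k and ~20k samples. TRL's argument names have shifted
# between releases, so adjust to your installed version.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical preference data with "prompt"/"chosen"/"rejected" columns.
train_dataset = load_dataset("json", data_files="dpo_stage1.jsonl", split="train")

config = DPOConfig(output_dir="sauerkraut-dpo-stage1", num_train_epochs=1, beta=0.1)
trainer = DPOTrainer(model=model, args=config, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()  # repeat with the second-stage data for stage two
```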
### Prompt Template:
**English:**
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
Input<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
**German:**
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Du bist ein freundlicher und hilfreicher deutscher KI-Assistent.<|eot_id|><|start_header_id|>user<|end_header_id|>
Input<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
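A minimal generation sketch building the prompt above via the tokenizer's chat template (note that this repo is an HQQ-quantized variant, so loading it may require the HQQ loader rather than the plain call shown here; the snippet targets the unquantized base card as an assumption):

```python
# Minimal sketch: the prompt template above, built with apply_chat_template.
# This repo is an HQQ-quantized variant; loading it may require the hqq
# loader instead of the plain from_pretrained call below (an assumption --
# the snippet targets the unquantized base model of this card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "Du bist ein freundlicher und hilfreicher deutscher KI-Assistent."},
    {"role": "user", "content": "Erkläre mir kurz, was Fine-Tuning ist."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```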
### Example output of German language:
```
Prompt: Erkläre mir die Vorteile für das Fine-Tuning von Meta's LLama Modellen.
Antwort:
Sehr gern!
Fine-tuning von Meta's LLaMA-Modellen ist eine sehr effektive Möglichkeit, um Ihre Anwendung oder Ihr Projekt mit einer hochentwickelten Sprachverarbeitung auszustatten.
Die Vorteile des Fine-Tunings von LLaMA-Modellen sind:
Adaptation an spezielle Aufgaben: Durch das Fine-Tuning können Sie die Modelle auf Ihre speziellen Bedürfnisse und Anforderungen anpassen.
Dies ermöglicht es Ihnen, die Leistung der Modelle in Ihren Anwendungen zu verbessern.
Verbesserte Genauigkeit: Die fine-tuned Modelle lernen, sich an die spezifischen Merkmale Ihres Datenbestands anzupassen, was zu einer besseren Genauigkeit bei der Textklassifizierung, -generierung, -summarisierung usw. führt.
Anpassung an Branche oder Domain: Wenn Sie beispielsweise ein Medizinprojekt haben, können Sie die Modelle auf medizinische Fachterminologie und -konzepte trainieren, um eine höhere Präzision bei der Analyse medizinischer Texte zu erzielen.
Kosten-Effektivität: Im Vergleich zum Training neuer Modelle von Grund auf kann das Fine-Tuning viel kosteneffektiver sein, da Sie nur einen kleinen Teil des Trainingsdatums benötigen, um die Modelle an Ihre spezifischen Anforderungen anzupassen.
Schnellere Entwicklung: Das Fine-Tuning ermöglicht es Entwicklern, schnell neue Funktionen und Features in ihre Anwendungen einzufügen, ohne dass sie lange warten müssen, bis neue Modelle trainiert werden.
Bessere Interpretierbarkeit: Da die fine-tuned Modelle auf Ihre spezifischen Daten bestimmt wurden, können Sie leichter verstehen, warum bestimmte Entscheidungen getroffen werden, was wichtig ist, wenn es um Transparenz und Verantwortlichkeit geht.
Insgesamt bietet das Fine-Tuning von LLaMA-Modellen eine flexible und effektive Möglichkeit, um Ihre Anwendungen und Projekte durch die Integration von fortschrittlichen Sprachmodellen zu verbessern.
```
## Evaluation
**Open LLM Leaderboard:**
evaluated with lm-evaluation-harness 0.4.2
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | **74.57** |
| ARC (25-shot) | 74.66 |
| HellaSwag (10-shot) | 89.60 |
| MMLU (5-shot) | 66.55 |
| TruthfulQA (0-shot) | 66.32 |
| Winogrande (5-shot) | 80.98 |
| GSM8K (5-shot) | 69.29 |
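The ARC row above can be reproduced with a harness call along these lines (a hedged sketch; the Python entry point and task name follow lm-evaluation-harness 0.4.x conventions and may vary by version):

```python
# Hedged sketch: re-running one leaderboard task with lm-evaluation-harness
# 0.4.x via its Python API. Entry point and task name follow the 0.4
# conventions; verify against your installed version.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",
    tasks=["arc_challenge"],  # the 25-shot ARC row in the table above
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```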
**MT-Bench English**
```
########## First turn ##########
score
model turn
Llama-3-SauerkrautLM-8b-Instruct 1 8.15625
########## Second turn ##########
score
model turn
Llama-3-SauerkrautLM-8b-Instruct 2 7.65
########## Average ##########
score
model
Llama-3-SauerkrautLM-8b-Instruct 7.903125 *
```
* due to specific instruction training the English MT-Bench score is slightly lower than the original Llama-3-8B-Instruct
**MT-Bench German**
```
########## First turn ##########
score
model turn
Llama-3-SauerkrautLM-8b-Instruct 1 7.675
########## Second turn ##########
score
model turn
Llama-3-SauerkrautLM-8b-Instruct 2 7.6375
########## Average ##########
score
model
Llama-3-SauerkrautLM-8b-Instruct 7.65625
```
**German RAG LLM Evaluation**
```
| Task |Version|Metric|Value| |Stderr|
|------------------------------------------------------|------:|------|----:|---|-----:|
|all | |acc |0.905|± |0.0086|
|community:german_rag_eval:_average:0 | |acc |0.905|± |0.0086|
|community:german_rag_eval:choose_context_by_question:0| 0|acc |0.896|± |0.0097|
|community:german_rag_eval:choose_question_by_context:0| 0|acc |0.826|± |0.0120|
|community:german_rag_eval:context_question_match:0 | 0|acc |0.987|± |0.0036|
|community:german_rag_eval:question_answer_match:0 | 0|acc |0.911|± |0.0090|
```
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt), [Hyperspace.computer](https://hyperspace.computer/)
## Acknowledgement
Many thanks to [Meta](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for providing such a valuable model to the Open-Source community.

Also many thanks to [bartowski](https://huggingface.co/bartowski) for super fast quantization of our model in GGUF and EXL2 format.
| {"language": ["de", "en"], "license": "other", "tags": ["two stage dpo", "dpo", "hqq"], "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. 
Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"} | mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-HQQ | null | [
"transformers",
"llama",
"text-generation",
"two stage dpo",
"dpo",
"hqq",
"conversational",
"de",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T11:40:05+00:00 | [] | [
"de",
"en"
] | TAGS
#transformers #llama #text-generation #two stage dpo #dpo #hqq #conversational #de #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| !SauerkrautLM
VAGO solutions Llama-3-SauerkrautLM-8b-Instruct
-----------------------------------------------
Introducing Llama-3-SauerkrautLM-8b-Instruct – our Sauerkraut version of the powerful meta-llama/Meta-Llama-3-8B-Instruct!
The model Llama-3-SauerkrautLM-8b-Instruct is a joint effort between VAGO Solutions and URL.
* Aligned with DPO
Table of Contents
=================
1. Overview of all Llama-3-SauerkrautLM-8b-Instruct
2. Model Details
* Prompt template
* Training procedure
3. Evaluation
4. Disclaimer
5. Contact
6. Collaborations
7. Acknowledgement
All SauerkrautLM-llama-3-8B-Instruct
------------------------------------
Model Details
-------------
SauerkrautLM-llama-3-8B-Instruct
* Model Type: Llama-3-SauerkrautLM-8b-Instruct is a finetuned Model based on meta-llama/Meta-Llama-3-8B-Instruct
* Language(s): German, English
* License: meta-llama
* Contact: VAGO solutions, URL
### Training procedure:
* We trained this model in a two-staged DPO Fine-Tuning for 1 epoch with 70k data and another epoch with 20k data.
* LaserRMT version coming soon
We improved the model's capabilities noticeably by feeding it with curated German data.
### Prompt Template:
English:
German:
### Example output of German language:
Evaluation
----------
Open LLM Leaderboard:
evaluated with lm-evaluation-harness 0.4.2
MT-Bench English
* due to specific instruction training the English MT-Bench score is slightly lower than the original Llama-3-8B-Instruct
MT-Bench German
German RAG LLM Evaluation
Disclaimer
----------
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
Contact
-------
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
Collaborations
--------------
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer
Acknowledgement
---------------
Many thanks to Meta for providing such a valuable model to the Open-Source community.
Also many thanks to bartowski for super fast quantization of our model in GGUF and EXL2 format.
| [
"### Training procedure:\n\n\n* We trained this model in a two staged DPO Fine-Tuning for 1 epoch with 70k data and another epoch with 20k data.\n* LaserRMT version coming soon\n\n\nWe improved the model's capabilities noticably by feeding it with curated German data.",
"### Prompt Template:\n\n\nEnglish:\n\n\nGerman:",
"### Example output of german language:\n\n\nEvaluation\n----------\n\n\nOpen LLM Leaderboard:\n\n\nevaluated with lm-evaluation-benchmark-harness 0.4.2\n\n\n\nMT-Bench English\n\n\n* due to specific instruction training the english MT-Bench score is slightly lower than the original LLama-3-8B-Instruct\n\n\nMT-Bench German\n\n\nGerman RAG LLM Evaluation\n\n\nDisclaimer\n----------\n\n\nWe must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.\nHowever, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.\nAdditionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.\n\n\nContact\n-------\n\n\nIf you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.\n\n\nCollaborations\n--------------\n\n\nWe are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer\n\n\nAcknowledgement\n---------------\n\n\nMany thanks to Meta for providing such valuable model to the Open-Source community.\nAlso many thanks to bartowski for super fast quantification of our Model in GGUF and EXL format."
] | [
"TAGS\n#transformers #llama #text-generation #two stage dpo #dpo #hqq #conversational #de #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training procedure:\n\n\n* We trained this model in a two staged DPO Fine-Tuning for 1 epoch with 70k data and another epoch with 20k data.\n* LaserRMT version coming soon\n\n\nWe improved the model's capabilities noticably by feeding it with curated German data.",
"### Prompt Template:\n\n\nEnglish:\n\n\nGerman:",
"### Example output of german language:\n\n\nEvaluation\n----------\n\n\nOpen LLM Leaderboard:\n\n\nevaluated with lm-evaluation-benchmark-harness 0.4.2\n\n\n\nMT-Bench English\n\n\n* due to specific instruction training the english MT-Bench score is slightly lower than the original LLama-3-8B-Instruct\n\n\nMT-Bench German\n\n\nGerman RAG LLM Evaluation\n\n\nDisclaimer\n----------\n\n\nWe must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.\nHowever, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.\nAdditionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.\n\n\nContact\n-------\n\n\nIf you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.\n\n\nCollaborations\n--------------\n\n\nWe are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer\n\n\nAcknowledgement\n---------------\n\n\nMany thanks to Meta for providing such valuable model to the Open-Source community.\nAlso many thanks to bartowski for super fast quantification of our Model in GGUF and EXL format."
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speaker-segmentation-fine-tuned-callhome-eng-2
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome eng dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4666
- Der: 0.1814
- False Alarm: 0.0552
- Missed Detection: 0.0739
- Confusion: 0.0523
## Model description
More information needed
## Intended uses & limitations
More information needed
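Until a fuller guide lands here, the sketch below shows one plausible way to use this checkpoint for diarization. It is untested: it assumes the `diarizers` helper library (used by the diarizers-community training scripts) and the gated `pyannote/speaker-diarization-3.1` pipeline, and the audio path is a placeholder.

```python
# Untested sketch: swap this fine-tuned segmentation model into a pyannote
# diarization pipeline. `diarizers` and `pyannote.audio` are assumed to be
# installed; downloading the gated pipeline may require a Hugging Face token.
import torch
from diarizers import SegmentationModel
from pyannote.audio import Pipeline

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")
pipeline.to(device)

# Load the fine-tuned weights and convert them into a pyannote model.
segmentation = SegmentationModel.from_pretrained(
    "tgrhn/speaker-segmentation-fine-tuned-callhome-eng-2"
)
pipeline._segmentation.model = segmentation.to_pyannote_model().to(device)

diarization = pipeline("audio.wav")  # placeholder path to a local file
print(diarization)
```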
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.4548 | 1.0 | 181 | 0.4943 | 0.1966 | 0.0564 | 0.0811 | 0.0590 |
| 0.4171 | 2.0 | 362 | 0.4845 | 0.1951 | 0.0644 | 0.0754 | 0.0552 |
| 0.396 | 3.0 | 543 | 0.4633 | 0.1856 | 0.0502 | 0.0825 | 0.0529 |
| 0.3856 | 4.0 | 724 | 0.4609 | 0.1843 | 0.0571 | 0.0739 | 0.0534 |
| 0.3693 | 5.0 | 905 | 0.4639 | 0.1821 | 0.0531 | 0.0761 | 0.0528 |
| 0.3634 | 6.0 | 1086 | 0.4610 | 0.1821 | 0.0588 | 0.0716 | 0.0517 |
| 0.3655 | 7.0 | 1267 | 0.4638 | 0.1827 | 0.0566 | 0.0740 | 0.0521 |
| 0.3608 | 8.0 | 1448 | 0.4603 | 0.1814 | 0.0567 | 0.0732 | 0.0515 |
| 0.3545 | 9.0 | 1629 | 0.4645 | 0.1805 | 0.0530 | 0.0761 | 0.0514 |
| 0.3508 | 10.0 | 1810 | 0.4666 | 0.1814 | 0.0552 | 0.0739 | 0.0523 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-eng-2", "results": []}]} | tgrhn/speaker-segmentation-fine-tuned-callhome-eng-2 | null | [
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"dataset:diarizers-community/callhome",
"base_model:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:40:23+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us
| speaker-segmentation-fine-tuned-callhome-eng-2
==============================================
This model is a fine-tuned version of pyannote/segmentation-3.0 on the diarizers-community/callhome eng dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4666
* Der: 0.1814
* False Alarm: 0.0552
* Missed Detection: 0.0739
* Confusion: 0.0523
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 64
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.0+cu121
* Datasets 2.17.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama2-mu-23M-1
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3027
## Model description
More information needed
## Intended uses & limitations
More information needed
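For basic text generation, something like the sketch below should work. It is untested: `mu_llama` is a custom architecture, so `trust_remote_code=True` is assumed to be needed, and the presence of a bundled tokenizer is an assumption as well.

```python
# Untested sketch: plain sampling with the transformers Auto classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "HachiML/Llama2-mu-23M-1"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```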
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0024
- train_batch_size: 192
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.973 | 0.7429 | 1000 | 0.3027 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "Llama2-mu-23M-1", "results": []}]} | HachiML/Llama2-mu-23M-1 | null | [
"transformers",
"safetensors",
"mu_llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:40:37+00:00 | [] | [] | TAGS
#transformers #safetensors #mu_llama #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| Llama2-mu-23M-1
===============
This model is a fine-tuned version of [](URL on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3027
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0024
* train\_batch\_size: 192
* eval\_batch\_size: 192
* seed: 42
* optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2500
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0024\n* train\\_batch\\_size: 192\n* eval\\_batch\\_size: 192\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2500\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #mu_llama #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0024\n* train\\_batch\\_size: 192\n* eval\\_batch\\_size: 192\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2500\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - fatimaaa1/model1
<Gallery />
## Model description
These are fatimaaa1/model1 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: fatimaaa1/model1/vae.
## Trigger words
You should use a bussiness card to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](fatimaaa1/model1/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
A minimal, untested sketch of loading these LoRA weights on top of the SDXL base model with `diffusers` is shown below. The repo id, base model, and trigger phrase come from this card; everything else (step count, prompt suffix, device) is an assumption.
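```python
# Untested sketch: apply these LoRA weights to the SDXL base pipeline.
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipeline.load_lora_weights("fatimaaa1/model1")

# "a bussiness card" is the literal instance prompt this adapter was trained
# on (spelling as trained), so keep it verbatim in prompts.
image = pipeline("a bussiness card, studio lighting", num_inference_steps=30).images[0]
image.save("business_card.png")
```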
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a bussiness card", "widget": []} | fatimaaa1/model1 | null | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-04-26T11:44:10+00:00 | [] | [] | TAGS
#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# SDXL LoRA DreamBooth - fatimaaa1/model1
<Gallery />
## Model description
These are fatimaaa1/model1 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: fatimaaa1/model1/vae.
## Trigger words
You should use a bussiness card to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# SDXL LoRA DreamBooth - fatimaaa1/model1\n\n<Gallery />",
"## Model description\n\nThese are fatimaaa1/model1 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: fatimaaa1/model1/vae.",
"## Trigger words\n\nYou should use a bussiness card to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# SDXL LoRA DreamBooth - fatimaaa1/model1\n\n<Gallery />",
"## Model description\n\nThese are fatimaaa1/model1 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: fatimaaa1/model1/vae.",
"## Trigger words\n\nYou should use a bussiness card to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
null | null | # Volt Performance Reviews Germany, Höhle der Löwen, Official Website, Buy

Volt Performance Germany is a dietary supplement designed to boost male vitality and sexual performance. It is made from a blend of natural ingredients known for their aphrodisiac and energy-boosting properties. The main components typically include herbs such as Tongkat Ali, maca root, and ginseng, which are known for improving libido, raising energy levels, and supporting overall well-being.

## **[Click here to buy now on the official Volt Performance website](https://capsules24x7.com/volt-performance)**

## How does the Volt Male Performance Capsules supplement work?

The Volt Male Performance Capsules supplement offers a comprehensive way to address various sexual problems in men. The capsules are rich in amino acids and plant concentrates that revive and support the processes in the body that promote sexual well-being. Several of the ingredients support blood circulation: L-arginine and L-citrulline promote the formation of nitric oxide, which relaxes the blood vessels and encourages blood flow. Men need adequate blood flow to the penis to maintain erectile function.

A weak sex drive worries most men. Volt Male Performance capsules can increase libido through common herbs such as ashwagandha and maca root extract. Both of these traditional herbs have been clinically shown to raise testosterone levels and thereby restore healthy desire. Lowered resilience can also keep you from performing in bed; L-glutathione and the other nutrients in the Volt Male Performance capsules provide antioxidant support and improve sperm quality and the recovery of penile cells.

Poor sleep quality and stress can make you irritable, and an unsettled mind keeps you from achieving firm erections. Volt Male Performance Capsules contains nerve-calming nutrients that promote relaxation and recovery. A better state of mind lets you achieve the erections, sexual desire, and stamina you expect in bed. The supplement combines several nutrients to improve your sexual health as a whole: taken regularly, the capsules boost testosterone production, improve blood circulation, and strengthen penile health.

## Ingredients of the Volt Male Performance capsules

Volt Male Performance capsules contain common ingredients intended to support male well-being. The various amino acids and plant extracts come in precise dosages and offer a range of health benefits. The key components include:

L-arginine: According to the manufacturer of the Volt Male Performance Capsules supplement, the formula uses natural L-arginine to support nitric oxide synthesis. Nitric oxide is essential for dilating the blood vessels and thus influences blood flow. Several studies show that taking L-arginine regularly can help you achieve high-quality erections on demand. This semi-essential amino acid can treat symptoms of mild to moderate erectile dysfunction without causing side effects.

L-glutathione: Volt Male Performance Capsules provides 50 mg of L-glutathione per day to effectively raise the body's antioxidant levels. Free radicals can compromise well-being, sperm quality, and testosterone production. L-glutathione affects endothelial function and can alleviate ED problems in aging men.

L-citrulline: L-citrulline promotes blood flow in men. The amino acid is converted into arginine, which boosts nitric oxide production. Ideal nitric oxide levels affect a man's stamina, state of mind, and composure. The manufacturer of Volt Male Performance Capsules cites a study in the Journal of Urology stating that L-citrulline can improve erectile hardness in men with mild ED.

L-methionine: L-methionine is a potent nutrient that promotes healthy digestion and raises energy levels. The manufacturer of Volt Male Performance Capsules notes that this amino acid supports detoxification and can prevent estrogen overproduction in men.

## **[Click here to buy now on the official Volt Performance website](https://capsules24x7.com/volt-performance)** | {} | VKapseln475/VoltPerformance3 | null | [
"region:us"
] | null | 2024-04-26T11:44:43+00:00 | [] | [] | TAGS
#region-us
| # Volt Performance Reviews Germany, Höhle der Löwen, Official Website, Buy

Volt Performance Germany is a dietary supplement designed to boost male vitality and sexual performance. It is made from a blend of natural ingredients known for their aphrodisiac and energy-boosting properties. The main components typically include herbs such as Tongkat Ali, maca root, and ginseng, which are known for improving libido, raising energy levels, and supporting overall well-being.

## Click here to buy now on the official Volt Performance website

## How does the Volt Male Performance Capsules supplement work?

The Volt Male Performance Capsules supplement offers a comprehensive way to address various sexual problems in men. The capsules are rich in amino acids and plant concentrates that revive and support the processes in the body that promote sexual well-being. Several of the ingredients support blood circulation: L-arginine and L-citrulline promote the formation of nitric oxide, which relaxes the blood vessels and encourages blood flow. Men need adequate blood flow to the penis to maintain erectile function.

A weak sex drive worries most men. Volt Male Performance capsules can increase libido through common herbs such as ashwagandha and maca root extract. Both of these traditional herbs have been clinically shown to raise testosterone levels and thereby restore healthy desire. Lowered resilience can also keep you from performing in bed; L-glutathione and the other nutrients in the Volt Male Performance capsules provide antioxidant support and improve sperm quality and the recovery of penile cells.

Poor sleep quality and stress can make you irritable, and an unsettled mind keeps you from achieving firm erections. Volt Male Performance Capsules contains nerve-calming nutrients that promote relaxation and recovery. A better state of mind lets you achieve the erections, sexual desire, and stamina you expect in bed. The supplement combines several nutrients to improve your sexual health as a whole: taken regularly, the capsules boost testosterone production, improve blood circulation, and strengthen penile health.

## Ingredients of the Volt Male Performance capsules

Volt Male Performance capsules contain common ingredients intended to support male well-being. The various amino acids and plant extracts come in precise dosages and offer a range of health benefits. The key components include:

L-arginine: According to the manufacturer of the Volt Male Performance Capsules supplement, the formula uses natural L-arginine to support nitric oxide synthesis. Nitric oxide is essential for dilating the blood vessels and thus influences blood flow. Several studies show that taking L-arginine regularly can help you achieve high-quality erections on demand. This semi-essential amino acid can treat symptoms of mild to moderate erectile dysfunction without causing side effects.

L-glutathione: Volt Male Performance Capsules provides 50 mg of L-glutathione per day to effectively raise the body's antioxidant levels. Free radicals can compromise well-being, sperm quality, and testosterone production. L-glutathione affects endothelial function and can alleviate ED problems in aging men.

L-citrulline: L-citrulline promotes blood flow in men. The amino acid is converted into arginine, which boosts nitric oxide production. Ideal nitric oxide levels affect a man's stamina, state of mind, and composure. The manufacturer of Volt Male Performance Capsules cites a study in the Journal of Urology stating that L-citrulline can improve erectile hardness in men with mild ED.

L-methionine: L-methionine is a potent nutrient that promotes healthy digestion and raises energy levels. The manufacturer of Volt Male Performance Capsules notes that this amino acid supports detoxification and can prevent estrogen overproduction in men.

## Click here to buy now on the official Volt Performance website | [
"# Volt Performance Erfahrungen Deutschland Hรถhle der lรถwen Offizielle Website, Kaufen\n\nVolt Performance Erfahrungen Deutschland sind Nahrungsergรคnzungsmittel zur Steigerung der mรคnnlichen Vitalitรคt und sexuellen Leistungsfรคhigkeit. Sie werden aus einer Mischung natรผrlicher Inhaltsstoffe hergestellt, die fรผr ihre aphrodisierenden und energiesteigernden Eigenschaften bekannt sind. Zu den Hauptbestandteilen gehรถren typischerweise Krรคuter wie Tongkat Ali, Maca-Wurzel und Ginseng, die fรผr ihre Fรคhigkeit bekannt sind, die Libido zu verbessern, das Energieniveau zu steigern und das allgemeine Wohlbefinden zu unterstรผtzen.",
"## Klicken Sie hier, um jetzt auf der offiziellen Website von Volt Performance zu kaufen",
"## Wie funktioniert das Nahrungsergรคnzungsmittel Volt Male Performance Capsules?\nDas Nahrungsergรคnzungsmittel Volt Male Performance Capsules bietet eine umfassende Mรถglichkeit, mit verschiedenen sexuellen Problemen bei Mรคnnern umzugehen. Die V-Kapseln sind reich an Aminosรคuren und Pflanzenkonzentraten, um verschiedene Zyklen im Kรถrper wiederzubeleben und zu erweitern, die das sexuelle Wohlbefinden fรถrdern. Einige Volt Male Performance-Kapseln unterstรผtzen die Durchblutung. L-Arginin und L-Citrullin fรถrdern die Bildung von Stickstoffmonoxid, das die Venen lockert und die Blutbildung fรถrdert. Mรคnner benรถtigen eine ausreichende Durchblutung des Penis, um ihre Erektionsfรคhigkeit aufrechtzuerhalten.\n\nAntriebsschwรคche bereitet den meisten Mรคnnern Sorgen. Volt Male Performance-Kapseln kรถnnen Ihren Sexualtrieb durch die Verwendung normaler Gewรผrze wie Ashwagandha und Maca-Wurzelextrakten steigern. Es wurde klinisch nachgewiesen, dass die beiden alten Gewรผrze den Testosteronspiegel erhรถhen und so den gesunden Moxie-Spiegel wiederherstellen. Ein beeintrรคchtigter Widerstand kann Sie davon abhalten, im Bett zu handeln. L-Glutathion und verschiedene Nahrungsergรคnzungsmittel in den Volt Male Performance-Kapseln unterstรผtzen die Krebsprรคvention und verbessern die Spermienqualitรคt und die Erholung der Peniszellen.\n\nSchlechte Schlafqualitรคt und Stress kรถnnen Sie mรผrrisch machen. Unkontrollierte Denkweisen halten Sie davon ab, harte Erektionen zu erreichen. Volt Male Performance Capsules enthรคlt nervenberuhigende Nahrungsergรคnzungsmittel, die die Entspannung und Erholung fรถrdern. Eine bessere Denkweise ermรถglicht es Ihnen, die Erektion, das sexuelle Verlangen und die Ausdauer zu erlangen, die Sie im Bett erwarten. Das Nahrungsergรคnzungsmittel Volt Male Performance Capsules verwendet verschiedene Nahrungsergรคnzungsmittel, um Ihre gesamte sexuelle Gesundheit zu verbessern. Die Verwendung der normalen V-Kapseln kurbelt die Testosteronbildung an, verbessert die Durchblutung, stรคrkt die Penisgesundheit und wirkt sich auf Ihre gesamte sexuelle Gesundheit aus.",
"## Inhaltsstoffe der Volt Male Performance Kapseln\nVolt Male Performance-Kapseln enthalten normale Inhaltsstoffe, die das mรคnnliche Wohlbefinden unterstรผtzen sollen. Die verschiedenen Aminosรคuren und Pflanzenextrakte sind in exakten Dosierungen erhรคltlich und bieten verschiedene medizinische Vorteile. Zu den wichtigsten Befestigungen gehรถren:\n\nL-Arginin: Laut Hersteller des Nahrungsergรคnzungsmittels Volt Male Performance Capsules wird in der Definition normales L-Arginin verwendet, um die Stickoxidverschmelzung zu unterstรผtzen. Die Stickstoff- und Sauerstoffatome sind fรผr die Erweiterung der Venen von entscheidender Bedeutung und beeinflussen somit den Blutfluss. Verschiedene Untersuchungen zeigen, dass die regelmรครige Einnahme von L-Arginin Ihnen dabei helfen kann, auf Wunsch hochwertige Erektionen zu bekommen. Das semi-fundamentale Amino-รtzmittel kann Nebenwirkungen leichter, mittelschwerer Erektionsstรถrungen behandeln, ohne den Kunden Nachwirkungen zu verursachen.\n\nL-Glutathion: Volt Male Performance Capsules enthรคlt 50 mg L-Glutathion pro Tag, um die kรถrpereigenen Krebsprรคventionswerte effektiv zu verbessern. Freie Extremisten kรถnnen รผber Wohlbefinden, Spermienqualitรคt und Testosteronbildung zweimal nachdenken. L-Glutathion wirkt sich auf die Endothelfรคhigkeit aus und kann ED-Probleme bei heranreifenden Mรคnnern lindern.\n\nL-Citrullin: L-Citrullin fรถrdert die Blutentwicklung bei Mรคnnern. Das Aminooxid wird in Arginin umgewandelt, das die Stickoxidbildung ankurbelt. Ideale Stickoxidwerte wirken sich auf die innere Leistungsfรคhigkeit, den Geisteszustand und die Ruhe des Mannes aus. Der Hersteller von Volt Male Performance Capsules bezieht sich auf ein Konzentrat im Diary of Urology, das besagt, dass L-Citrullin bei Mรคnnern mit leichter ED auf die erektile Hรคrte wirken kann.\n\nL-Methionin: L-Methionin ist ein starkes Nahrungsergรคnzungsmittel, das eine gesunde Verdauung fรถrdert und das Energieniveau steigert. Der Hersteller von Volt Male Performance Capsules weist darauf hin, dass Aminosรคuren die Entgiftung unterstรผtzen und einer รstrogenรผberproduktion bei Mรคnnern vorbeugen kรถnnen.",
"## Klicken Sie hier, um jetzt auf der offiziellen Website von Volt Performance zu kaufen"
] | [
"TAGS\n#region-us \n",
"# Volt Performance Erfahrungen Deutschland Hรถhle der lรถwen Offizielle Website, Kaufen\n\nVolt Performance Erfahrungen Deutschland sind Nahrungsergรคnzungsmittel zur Steigerung der mรคnnlichen Vitalitรคt und sexuellen Leistungsfรคhigkeit. Sie werden aus einer Mischung natรผrlicher Inhaltsstoffe hergestellt, die fรผr ihre aphrodisierenden und energiesteigernden Eigenschaften bekannt sind. Zu den Hauptbestandteilen gehรถren typischerweise Krรคuter wie Tongkat Ali, Maca-Wurzel und Ginseng, die fรผr ihre Fรคhigkeit bekannt sind, die Libido zu verbessern, das Energieniveau zu steigern und das allgemeine Wohlbefinden zu unterstรผtzen.",
"## Klicken Sie hier, um jetzt auf der offiziellen Website von Volt Performance zu kaufen",
"## Wie funktioniert das Nahrungsergรคnzungsmittel Volt Male Performance Capsules?\nDas Nahrungsergรคnzungsmittel Volt Male Performance Capsules bietet eine umfassende Mรถglichkeit, mit verschiedenen sexuellen Problemen bei Mรคnnern umzugehen. Die V-Kapseln sind reich an Aminosรคuren und Pflanzenkonzentraten, um verschiedene Zyklen im Kรถrper wiederzubeleben und zu erweitern, die das sexuelle Wohlbefinden fรถrdern. Einige Volt Male Performance-Kapseln unterstรผtzen die Durchblutung. L-Arginin und L-Citrullin fรถrdern die Bildung von Stickstoffmonoxid, das die Venen lockert und die Blutbildung fรถrdert. Mรคnner benรถtigen eine ausreichende Durchblutung des Penis, um ihre Erektionsfรคhigkeit aufrechtzuerhalten.\n\nAntriebsschwรคche bereitet den meisten Mรคnnern Sorgen. Volt Male Performance-Kapseln kรถnnen Ihren Sexualtrieb durch die Verwendung normaler Gewรผrze wie Ashwagandha und Maca-Wurzelextrakten steigern. Es wurde klinisch nachgewiesen, dass die beiden alten Gewรผrze den Testosteronspiegel erhรถhen und so den gesunden Moxie-Spiegel wiederherstellen. Ein beeintrรคchtigter Widerstand kann Sie davon abhalten, im Bett zu handeln. L-Glutathion und verschiedene Nahrungsergรคnzungsmittel in den Volt Male Performance-Kapseln unterstรผtzen die Krebsprรคvention und verbessern die Spermienqualitรคt und die Erholung der Peniszellen.\n\nSchlechte Schlafqualitรคt und Stress kรถnnen Sie mรผrrisch machen. Unkontrollierte Denkweisen halten Sie davon ab, harte Erektionen zu erreichen. Volt Male Performance Capsules enthรคlt nervenberuhigende Nahrungsergรคnzungsmittel, die die Entspannung und Erholung fรถrdern. Eine bessere Denkweise ermรถglicht es Ihnen, die Erektion, das sexuelle Verlangen und die Ausdauer zu erlangen, die Sie im Bett erwarten. Das Nahrungsergรคnzungsmittel Volt Male Performance Capsules verwendet verschiedene Nahrungsergรคnzungsmittel, um Ihre gesamte sexuelle Gesundheit zu verbessern. Die Verwendung der normalen V-Kapseln kurbelt die Testosteronbildung an, verbessert die Durchblutung, stรคrkt die Penisgesundheit und wirkt sich auf Ihre gesamte sexuelle Gesundheit aus.",
"## Inhaltsstoffe der Volt Male Performance Kapseln\nVolt Male Performance-Kapseln enthalten normale Inhaltsstoffe, die das mรคnnliche Wohlbefinden unterstรผtzen sollen. Die verschiedenen Aminosรคuren und Pflanzenextrakte sind in exakten Dosierungen erhรคltlich und bieten verschiedene medizinische Vorteile. Zu den wichtigsten Befestigungen gehรถren:\n\nL-Arginin: Laut Hersteller des Nahrungsergรคnzungsmittels Volt Male Performance Capsules wird in der Definition normales L-Arginin verwendet, um die Stickoxidverschmelzung zu unterstรผtzen. Die Stickstoff- und Sauerstoffatome sind fรผr die Erweiterung der Venen von entscheidender Bedeutung und beeinflussen somit den Blutfluss. Verschiedene Untersuchungen zeigen, dass die regelmรครige Einnahme von L-Arginin Ihnen dabei helfen kann, auf Wunsch hochwertige Erektionen zu bekommen. Das semi-fundamentale Amino-รtzmittel kann Nebenwirkungen leichter, mittelschwerer Erektionsstรถrungen behandeln, ohne den Kunden Nachwirkungen zu verursachen.\n\nL-Glutathion: Volt Male Performance Capsules enthรคlt 50 mg L-Glutathion pro Tag, um die kรถrpereigenen Krebsprรคventionswerte effektiv zu verbessern. Freie Extremisten kรถnnen รผber Wohlbefinden, Spermienqualitรคt und Testosteronbildung zweimal nachdenken. L-Glutathion wirkt sich auf die Endothelfรคhigkeit aus und kann ED-Probleme bei heranreifenden Mรคnnern lindern.\n\nL-Citrullin: L-Citrullin fรถrdert die Blutentwicklung bei Mรคnnern. Das Aminooxid wird in Arginin umgewandelt, das die Stickoxidbildung ankurbelt. Ideale Stickoxidwerte wirken sich auf die innere Leistungsfรคhigkeit, den Geisteszustand und die Ruhe des Mannes aus. Der Hersteller von Volt Male Performance Capsules bezieht sich auf ein Konzentrat im Diary of Urology, das besagt, dass L-Citrullin bei Mรคnnern mit leichter ED auf die erektile Hรคrte wirken kann.\n\nL-Methionin: L-Methionin ist ein starkes Nahrungsergรคnzungsmittel, das eine gesunde Verdauung fรถrdert und das Energieniveau steigert. Der Hersteller von Volt Male Performance Capsules weist darauf hin, dass Aminosรคuren die Entgiftung unterstรผtzen und einer รstrogenรผberproduktion bei Mรคnnern vorbeugen kรถnnen.",
"## Klicken Sie hier, um jetzt auf der offiziellen Website von Volt Performance zu kaufen"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speaker-segmentation-fine-tuned-callhome-eng-3
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome eng dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4652
- Der: 0.1821
- False Alarm: 0.0597
- Missed Detection: 0.0715
- Confusion: 0.0509
## Model description
More information needed
## Intended uses & limitations
More information needed
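Usage should mirror the sibling eng-2 checkpoint: convert the weights into a pyannote model and swap them into a diarization pipeline. A minimal, untested sketch assuming the `diarizers` conversion helper:

```python
# Untested sketch: convert this checkpoint into a pyannote.audio model.
from diarizers import SegmentationModel

model = SegmentationModel.from_pretrained(
    "tgrhn/speaker-segmentation-fine-tuned-callhome-eng-3"
).to_pyannote_model()
# `model` can now replace `pipeline._segmentation.model` inside a
# pyannote speaker-diarization pipeline, as sketched on the eng-2 card.
```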
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.4563 | 1.0 | 181 | 0.4971 | 0.1973 | 0.0553 | 0.0802 | 0.0617 |
| 0.4053 | 2.0 | 362 | 0.4740 | 0.1899 | 0.0604 | 0.0749 | 0.0546 |
| 0.3833 | 3.0 | 543 | 0.4636 | 0.1854 | 0.0556 | 0.0766 | 0.0531 |
| 0.3738 | 4.0 | 724 | 0.4664 | 0.1830 | 0.0579 | 0.0733 | 0.0518 |
| 0.3596 | 5.0 | 905 | 0.4571 | 0.1800 | 0.0558 | 0.0748 | 0.0494 |
| 0.3533 | 6.0 | 1086 | 0.4671 | 0.1844 | 0.0629 | 0.0685 | 0.0529 |
| 0.3571 | 7.0 | 1267 | 0.4641 | 0.1820 | 0.0594 | 0.0711 | 0.0515 |
| 0.3496 | 8.0 | 1448 | 0.4641 | 0.1824 | 0.0596 | 0.0717 | 0.0511 |
| 0.3449 | 9.0 | 1629 | 0.4636 | 0.1819 | 0.0591 | 0.0718 | 0.0510 |
| 0.3415 | 10.0 | 1810 | 0.4652 | 0.1821 | 0.0597 | 0.0715 | 0.0509 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-eng-3", "results": []}]} | tgrhn/speaker-segmentation-fine-tuned-callhome-eng-3 | null | [
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"dataset:diarizers-community/callhome",
"base_model:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:45:11+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us
| speaker-segmentation-fine-tuned-callhome-eng-3
==============================================
This model is a fine-tuned version of pyannote/segmentation-3.0 on the diarizers-community/callhome eng dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4652
* Der: 0.1821
* False Alarm: 0.0597
* Missed Detection: 0.0715
* Confusion: 0.0509
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* num\_epochs: 10.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.0+cu121
* Datasets 2.17.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
*There currently is an issue with the **model generating random reserved special tokens (like "<|reserved_special_token_49|>") at the end**. Please use with `skip_special_tokens=true`. We will update once we have found the reason for this behaviour. If you have found a solution, please let us know!*
# Llama 3 DiscoLM German 8b v0.1 Experimental
<p align="center"><img src="disco_llama.webp" width="400"></p>
# Introduction
**Llama 3 DiscoLM German 8b v0.1 Experimental** is an experimental Llama 3 based version of [DiscoLM German](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1).
This is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future.
Please find an online demo [here](https://364b61f772fa7baacb.gradio.live/) (we may take this offline for updates).
# Prompt Format
DiscoLM German uses ChatML as the prompt format, which enables OpenAI endpoint compatibility and is supported by most inference libraries and frontends.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
```
<|im_start|>system
Du bist ein hilfreicher Assistent.<|im_end|>
<|im_start|>user
Wer bist du?<|im_end|>
<|im_start|>assistant
Ich bin ein Sprachmodell namens DiscoLM German und ich wurde von DiscoResearch trainiert.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "Du bist ein hilfreicher Assistent."},
{"role": "user", "content": "Wer bist du?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
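As a quick sanity check, the rendered prompt can be inspected as a string (a small sketch; `messages` is the list defined above, and `tokenize=False` returns the formatted text instead of token ids):

```python
# Render the ChatML prompt as a string instead of token ids.
prompt_str = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt_str)  # the string ends with "<|im_start|>assistant\n"
```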
# Example Code for Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "Du bist ein hilfreicher Assistent."},
{"role": "user", "content": "Wer bist du?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
# Limitations & Biases
This model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.
# License
This model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see [LICENSE](LICENSE) for more information.
# Acknowledgements
Built with Meta Llama 3.
DiscoLM German is a [DiscoResearch](https://huggingface.co/DiscoResearch) project, a collective effort by [JP Harries](https://huggingface.co/jphme), [Björn Plüster](https://huggingface.co/bjoernp) and [Daniel Auras](https://huggingface.co/rasdani).
Development of Llama 3 DiscoLM German 8b was sponsored by [ellamind](https://ellamind.com).
Compute was sponsored generously by [sysGen GmbH](https://www.sysgen.de/).
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# About DiscoResearch
DiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our [Discord](https://discord.gg/ttNdas89f3), share your opinions and ideas, and advance open LLM research with us!
# Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place.
| {"library_name": "transformers", "tags": ["hqq"]} | mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-HQQ | null | [
"transformers",
"llama",
"text-generation",
"hqq",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T11:45:39+00:00 | [] | [] | TAGS
#transformers #llama #text-generation #hqq #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
*There currently is an issue with the model generating random reserved special tokens (like "<|reserved_special_token_49|>") at the end. Please use with 'skip_special_tokens=true'. We will update once we have found the reason for this behaviour. If you have found a solution, please let us know!*
# Llama 3 DiscoLM German 8b v0.1 Experimental
<p align="center"><img src="disco_llama.webp" width="400"></p>
# Introduction
Llama 3 DiscoLM German 8b v0.1 Experimental is an experimental Llama 3 based version of DiscoLM German.
This is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future.
Please find an online demo here (we may take this offline for updates).
# Prompt Format
DiscoLM German uses ChatML as the prompt format, which enables OpenAI endpoint compatibility and is supported by most inference libraries and frontends.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This prompt is available as a chat template, which means you can format messages using the
'tokenizer.apply_chat_template()' method:
When tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\n' to your prompt, to ensure
that the model continues with an assistant response.
# Example Code for Inference
# Limitations & Biases
This model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.
# License
This model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see LICENSE for more information.
# Acknowledgements
Built with Meta Llama 3.
DiscoLM German is a DiscoResearch project, a collective effort by JP Harries, Björn Plüster and Daniel Auras.
Development of Llama 3 DiscoLM German 8b was sponsored by ellamind.
Compute was sponsored generously by sysGen GmbH.
<img src="URL alt="Built with Axolotl" width="200" height="32"/>
# About DiscoResearch
DiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!
# Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place.
| [
"# Llama 3 DiscoLM German 8b v0.1 Experimental\n\n<p align=\"center\"><img src=\"disco_llama.webp\" width=\"400\"></p>",
"# Introduction\n\nLlama 3 DiscoLM German 8b v0.1 Experimental is an experimental Llama 3 based version of DiscoLM German.\n\nThis is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future.\n\nPlease find a online Demo here (we may take this offline for updates).",
"# Prompt Format\n\nDiscoLM German uses ChatML as the prompt format which enables OpenAI endpoint compatability and is supported by most inference libraries and frontends.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.",
"# Example Code for Inference",
"# Limitations & Biases\n\nThis model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information.\nThis model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.",
"# License\n\nThis model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see LICENSE for more information.",
"# Acknowledgements\n\nBuilt with Meta Llama 3.\n\nDiscoLM German is a DiscoResearch project, a collective effort by JP Harries, Bjรถrn Plรผster and Daniel Auras.\n\nDevelopment of Llama 3 DiscoLM German 8b was sponsored by ellamind.\nCompute was sponsored generously by sysGen GmbH.\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>",
"# About DiscoResearch\n\nDiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!",
"# Disclaimer\n\nThe license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place."
] | [
"TAGS\n#transformers #llama #text-generation #hqq #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Llama 3 DiscoLM German 8b v0.1 Experimental\n\n<p align=\"center\"><img src=\"disco_llama.webp\" width=\"400\"></p>",
"# Introduction\n\nLlama 3 DiscoLM German 8b v0.1 Experimental is an experimental Llama 3 based version of DiscoLM German.\n\nThis is an experimental release and not intended for production use. The model is still in development and will be updated with new features and improvements in the future.\n\nPlease find a online Demo here (we may take this offline for updates).",
"# Prompt Format\n\nDiscoLM German uses ChatML as the prompt format which enables OpenAI endpoint compatability and is supported by most inference libraries and frontends.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.",
"# Example Code for Inference",
"# Limitations & Biases\n\nThis model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information.\nThis model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.",
"# License\n\nThis model is distributed under the META LLAMA 3 COMMUNITY LICENSE, see LICENSE for more information.",
"# Acknowledgements\n\nBuilt with Meta Llama 3.\n\nDiscoLM German is a DiscoResearch project, a collective effort by JP Harries, Bjรถrn Plรผster and Daniel Auras.\n\nDevelopment of Llama 3 DiscoLM German 8b was sponsored by ellamind.\nCompute was sponsored generously by sysGen GmbH.\n\n<img src=\"URL alt=\"Built with Axolotl\" width=\"200\" height=\"32\"/>",
"# About DiscoResearch\n\nDiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!",
"# Disclaimer\n\nThe license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place."
] |
null | null | ### rakib72642/HDML_Face_Detection_Model
# HuggingFace: https://huggingface.co/rakib72642/HDML_Face_Detection_Model
# Setup Global API
sudo apt install iproute2 -y && sudo apt install wget -y && sudo apt install unzip -y && sudo apt install nvtop -y && sudo apt-get install git-all -y && sudo apt-get install git-lfs -y && sudo apt-get update && sudo apt-get install libgl1 -y && sudo apt install curl -y && curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | sudo tee /etc/apt/sources.list.d/ngrok.list && sudo apt update && sudo apt install ngrok -y && sudo apt update && sudo apt upgrade -y && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && ngrok http --domain=hawkeyes.ngrok.app 8585
# Setup Local API
git clone https://huggingface.co/rakib72642/HDML_Face_Detection_Model && cd HDML_Face_Detection_Model && pip install -r requirements.txt && sudo apt update && sudo apt upgrade -y && python face_main.py
cd HDML_Face_Detection_Model && python face_main.py
# hypercorn face_main:app --bind 127.0.0.1:8585 --workers 4
| {} | rakib72642/HDML_Face_Detection_Model | null | [
"region:us"
] | null | 2024-04-26T11:45:51+00:00 | [] | [] | TAGS
#region-us
| ### rakib72642/HDML_Face_Detection_Model
# HuggingFace: URL
# Setup Global API
sudo apt install iproute2 -y && sudo apt install wget -y && sudo apt install unzip -y && sudo apt install nvtop -y && sudo apt-get install git-all -y && sudo apt-get install git-lfs -y && sudo apt-get update && sudo apt-get install libgl1 -y && sudo apt install curl -y && curl -s URL | sudo tee /etc/apt/URL.d/URL >/dev/null && echo "deb URL buster main" | sudo tee /etc/apt/URL.d/URL && sudo apt update && sudo apt install ngrok -y && sudo apt update && sudo apt upgrade -y && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && ngrok http --domain=URL 8585
# Setup Local API
git clone URL && cd HDML_Face_Detection_Model && pip install -r URL && sudo apt update && sudo apt upgrade -y && python face_main.py
cd HDML_Face_Detection_Model && python face_main.py
# hypercorn face_main:app --bind 127.0.0.1:8585 --workers 4
| [
"### rakib72642/HDML_Face_Detection_Model",
"# HuggingFace: URL",
"# Setup Global API\n\nsudo apt install iproute2 -y && sudo apt install wget -y && sudo apt install unzip -y && sudo apt install unzip -y && apt install nvtop -y && sudo apt-get install git-all -y && sudo apt-get install git-lfs -y && sud apt-get update && sudo apt-get install libgl1 -y && sudo apt install curl -y && curl -s URL | sudo tee /etc/apt/URL.d/URL >/dev/null && echo \"deb URL buster main\" | sudo tee /etc/apt/URL.d/URL && sudo apt update && sudo apt install ngrok -y && sudo apt update && sudo apt upgrade -y && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && ngrok http --domain=URL 8585",
"# Setup Local API\ngit clone URL && cd HDML_Face_Detection_Model && pip install -r URL && sudo apt update && sudo apt upgrade -y && python face_main.py\n\ncd HDML_Face_Detection_Model && python face_main.py",
"# hypercorn face_main:app --bind 127.0.0.1:8585 --workers 4"
] | [
"TAGS\n#region-us \n",
"### rakib72642/HDML_Face_Detection_Model",
"# HuggingFace: URL",
"# Setup Global API\n\nsudo apt install iproute2 -y && sudo apt install wget -y && sudo apt install unzip -y && sudo apt install unzip -y && apt install nvtop -y && sudo apt-get install git-all -y && sudo apt-get install git-lfs -y && sud apt-get update && sudo apt-get install libgl1 -y && sudo apt install curl -y && curl -s URL | sudo tee /etc/apt/URL.d/URL >/dev/null && echo \"deb URL buster main\" | sudo tee /etc/apt/URL.d/URL && sudo apt update && sudo apt install ngrok -y && sudo apt update && sudo apt upgrade -y && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && ngrok http --domain=URL 8585",
"# Setup Local API\ngit clone URL && cd HDML_Face_Detection_Model && pip install -r URL && sudo apt update && sudo apt upgrade -y && python face_main.py\n\ncd HDML_Face_Detection_Model && python face_main.py",
"# hypercorn face_main:app --bind 127.0.0.1:8585 --workers 4"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 | {"library_name": "peft", "base_model": "openlm-research/open_llama_3b_v2"} | yiyic/llama3b-text-ent-lora-clf-epoch-1 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openlm-research/open_llama_3b_v2",
"region:us"
] | null | 2024-04-26T11:46:24+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-openlm-research/open_llama_3b_v2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.7.2.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.2.dev0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-openlm-research/open_llama_3b_v2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.2.dev0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 | {"library_name": "peft", "base_model": "openlm-research/open_llama_3b_v2"} | yiyic/llama3b-text-prop-lora-clf-epoch-1 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openlm-research/open_llama_3b_v2",
"region:us"
] | null | 2024-04-26T11:46:51+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-openlm-research/open_llama_3b_v2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.7.2.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.2.dev0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-openlm-research/open_llama_3b_v2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.2.dev0"
] |
text-generation | transformers | # Cecilia
**4B**, SFT...
* [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
**Chinese, English**
Test 0 of all.
Released as an early preview of our v3 LLMs.
The v3 series covers the "Shi-Ci", "AnFeng" and "Cecilia" LLM products.
The sizes are labelled from small to large "Nano" "Leap" "Pattern" "Avocet" "Robin" "Kestrel" | {"language": ["en"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers", "pipeline_tag": "text-generation", "inference": true} | NLPark/Test0_Cecilia | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:47:31+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #phi3 #text-generation #conversational #custom_code #en #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
| # Cecilia
4B, SFT...
* microsoft/Phi-3-mini-128k-instruct
Chinese, English
Test 0 of all.
Released as an early preview of our v3 LLMs.
The v3 series covers the "Shi-Ci", "AnFeng" and "Cecilia" LLM products.
The sizes are labelled from small to large "Nano" "Leap" "Pattern" "Avocet" "Robin" "Kestrel" | [
"# Cecilia\n4B, SFT...\n\n* microsoft/Phi-3-mini-128k-instruct\n\nChinese, English\nTest 0 of all.\nReleased as an early preview of our v3 LLMs.\nThe v3 series covers the \"Shi-Ci\", \"AnFeng\" and \"Cecilia\" LLM products.\nThe sizes are labelled from small to large \"Nano\" \"Leap\" \"Pattern\" \"Avocet \"Robin\" \"Kestrel\""
] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #en #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Cecilia\n4B, SFT...\n\n* microsoft/Phi-3-mini-128k-instruct\n\nChinese, English\nTest 0 of all.\nReleased as an early preview of our v3 LLMs.\nThe v3 series covers the \"Shi-Ci\", \"AnFeng\" and \"Cecilia\" LLM products.\nThe sizes are labelled from small to large \"Nano\" \"Leap\" \"Pattern\" \"Avocet \"Robin\" \"Kestrel\""
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/yam-peleg/Hebrew-Mistral-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
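As a quick illustration (this sketch is not taken from the upstream model card): one common way to run a single-file quant from the table below is through llama-cpp-python. The local file path, context size, and prompt are assumptions made for the example.

```python
# Minimal sketch: run a single-file quant with llama-cpp-python
# (pip install llama-cpp-python). File name and settings are illustrative.
from llama_cpp import Llama

llm = Llama(model_path="Hebrew-Mistral-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("שלום, מה שלומך?", max_tokens=64)  # plain text completion on the base model
print(out["choices"][0]["text"])
```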
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q6_K.gguf) | Q6_K | 6.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.f16.gguf) | f16 | 15.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "yam-peleg/Hebrew-Mistral-7B", "quantized_by": "mradermacher"} | mradermacher/Hebrew-Mistral-7B-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:yam-peleg/Hebrew-Mistral-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:47:38+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-yam-peleg/Hebrew-Mistral-7B #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-yam-peleg/Hebrew-Mistral-7B #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | RobertML/sn6c | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:49:16+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speaker-segmentation-fine-tuned-callhome-eng-4
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome eng dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4660
- Der: 0.1806
- False Alarm: 0.0592
- Missed Detection: 0.0714
- Confusion: 0.0501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10.0
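For readers who want to reproduce this configuration, the list above follows the layout that `transformers.Trainer` writes into auto-generated cards. The snippet below is a hypothetical reconstruction rather than the original training script; `output_dir` is an assumption, and the remaining values mirror the reported hyperparameters.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported run configuration.
training_args = TrainingArguments(
    output_dir="speaker-segmentation-fine-tuned-callhome-eng-4",  # assumed
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=10.0,
)
```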
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.4104 | 1.0 | 362 | 0.4742 | 0.1920 | 0.0615 | 0.0742 | 0.0562 |
| 0.4041 | 2.0 | 724 | 0.4738 | 0.1868 | 0.0620 | 0.0714 | 0.0534 |
| 0.3741 | 3.0 | 1086 | 0.4695 | 0.1851 | 0.0625 | 0.0705 | 0.0521 |
| 0.3612 | 4.0 | 1448 | 0.4689 | 0.1814 | 0.0588 | 0.0707 | 0.0519 |
| 0.3404 | 5.0 | 1810 | 0.4649 | 0.1792 | 0.0580 | 0.0720 | 0.0492 |
| 0.3462 | 6.0 | 2172 | 0.4620 | 0.1812 | 0.0615 | 0.0692 | 0.0505 |
| 0.3296 | 7.0 | 2534 | 0.4631 | 0.1800 | 0.0582 | 0.0713 | 0.0506 |
| 0.3261 | 8.0 | 2896 | 0.4731 | 0.1820 | 0.0586 | 0.0733 | 0.0501 |
| 0.3251 | 9.0 | 3258 | 0.4663 | 0.1811 | 0.0579 | 0.0727 | 0.0506 |
| 0.3154 | 10.0 | 3620 | 0.4660 | 0.1806 | 0.0592 | 0.0714 | 0.0501 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-eng-4", "results": []}]} | tgrhn/speaker-segmentation-fine-tuned-callhome-eng-4 | null | [
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"dataset:diarizers-community/callhome",
"base_model:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:49:43+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us
| speaker-segmentation-fine-tuned-callhome-eng-4
==============================================
This model is a fine-tuned version of pyannote/segmentation-3.0 on the diarizers-community/callhome eng dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4660
* Der: 0.1806
* False Alarm: 0.0592
* Missed Detection: 0.0714
* Confusion: 0.0501
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* num\_epochs: 10.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.0+cu121
* Datasets 2.17.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-rw-1b-code-generation-llm-task2
This model is a fine-tuned version of [petals-team/falcon-rw-1b](https://huggingface.co/petals-team/falcon-rw-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
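As a hedged sketch (not the original training script), the values above map onto `transformers.TrainingArguments` roughly as shown below; `output_dir` is an assumption. Note that the effective batch size of 4 comes from a per-device batch size of 2 combined with 2 gradient-accumulation steps, and the run is bounded by steps rather than epochs.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported configuration (output_dir assumed).
# Effective batch size = per_device_train_batch_size * gradient_accumulation_steps = 4.
training_args = TrainingArguments(
    output_dir="falcon-rw-1b-code-generation-llm-task2",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    max_steps=320,  # "training_steps" in the list above
)
```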
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3318 | 0.1 | 20 | 1.3407 |
| 1.2643 | 0.2 | 40 | 1.1844 |
| 1.1681 | 0.3 | 60 | 1.1522 |
| 1.0891 | 0.4 | 80 | 1.1209 |
| 1.2164 | 0.5 | 100 | 1.1265 |
| 1.0855 | 0.6 | 120 | 1.1010 |
| 1.1129 | 0.7 | 140 | 1.0897 |
| 1.1169 | 0.8 | 160 | 1.0799 |
| 1.0664 | 0.9 | 180 | 1.0706 |
| 1.1483 | 1.0 | 200 | 1.0756 |
| 0.9707 | 1.1 | 220 | 1.0625 |
| 1.0102 | 1.2 | 240 | 1.0624 |
| 1.0805 | 1.3 | 260 | 1.0615 |
| 0.969 | 1.4 | 280 | 1.0580 |
| 1.118 | 1.5 | 300 | 1.0582 |
| 0.9883 | 1.6 | 320 | 1.0581 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "petals-team/falcon-rw-1b", "model-index": [{"name": "falcon-rw-1b-code-generation-llm-task2", "results": []}]} | Katochh/falcon-rw-1b-code-generation-llm-task2 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:petals-team/falcon-rw-1b",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T11:51:28+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-petals-team/falcon-rw-1b #license-apache-2.0 #region-us
| falcon-rw-1b-code-generation-llm-task2
======================================
This model is a fine-tuned version of petals-team/falcon-rw-1b on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0581
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 4
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.03
* training\_steps: 320
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* training\\_steps: 320",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-petals-team/falcon-rw-1b #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* training\\_steps: 320",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "sft"]} | Daniel-007/phi-3_qlora_consumer | null | [
"transformers",
"safetensors",
"trl",
"sft",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:51:39+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #trl #sft #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #trl #sft #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/openbmb/Eurux-8x22b-nca
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
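For reference, a minimal Python sketch of the concatenation step (the part names are taken from the Q4_K_S row in the table below; any multi-part quant works the same way):

```python
# Hedged sketch: reassemble a split GGUF file before loading it.
import shutil

parts = [
    "Eurux-8x22b-nca.Q4_K_S.gguf.part1of2",
    "Eurux-8x22b-nca.Q4_K_S.gguf.part2of2",
]
with open("Eurux-8x22b-nca.Q4_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream copy; avoids holding ~80 GB in RAM
```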
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q2_K.gguf.part2of2) | Q2_K | 52.2 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ3_XS.gguf.part2of2) | IQ3_XS | 58.3 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ3_S.gguf.part2of2) | IQ3_S | 61.6 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q3_K_S.gguf.part2of2) | Q3_K_S | 61.6 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ3_M.gguf.part2of2) | IQ3_M | 64.6 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q3_K_M.gguf.part2of2) | Q3_K_M | 67.9 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q3_K_L.gguf.part2of2) | Q3_K_L | 72.7 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.IQ4_XS.gguf.part2of2) | IQ4_XS | 76.5 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q4_K_S.gguf.part2of2) | Q4_K_S | 80.6 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q4_K_M.gguf.part2of2) | Q4_K_M | 85.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q5_K_S.gguf.part2of2) | Q5_K_S | 97.1 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q5_K_M.gguf.part3of3) | Q5_K_M | 100.1 | |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q6_K.gguf.part3of3) | Q6_K | 115.6 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q8_0.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q8_0.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q8_0.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF/resolve/main/Eurux-8x22b-nca.Q8_0.gguf.part4of4) | Q8_0 | 149.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["reasoning", "preference_learning", "nca"], "datasets": ["openbmb/UltraInteract_sft", "openbmb/UltraInteract_pair", "openbmb/UltraFeedback"], "base_model": "openbmb/Eurux-8x22b-nca", "quantized_by": "mradermacher"} | mradermacher/Eurux-8x22b-nca-GGUF | null | [
"transformers",
"reasoning",
"preference_learning",
"nca",
"en",
"dataset:openbmb/UltraInteract_sft",
"dataset:openbmb/UltraInteract_pair",
"dataset:openbmb/UltraFeedback",
"base_model:openbmb/Eurux-8x22b-nca",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:52:20+00:00 | [] | [
"en"
] | TAGS
#transformers #reasoning #preference_learning #nca #en #dataset-openbmb/UltraInteract_sft #dataset-openbmb/UltraInteract_pair #dataset-openbmb/UltraFeedback #base_model-openbmb/Eurux-8x22b-nca #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #reasoning #preference_learning #nca #en #dataset-openbmb/UltraInteract_sft #dataset-openbmb/UltraInteract_pair #dataset-openbmb/UltraFeedback #base_model-openbmb/Eurux-8x22b-nca #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers | # Cecilia
**4B**, ORPO...
* [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
**Chinese, English**
Test 1 of all.
Released as an early preview of our v3 LLMs.
The v3 series covers the "Shi-Ci", "AnFeng" and "Cecilia" LLM products.
The sizes are labelled from small to large: "Nano", "Leap", "Pattern", "Avocet", "Robin", "Kestrel". | {"language": ["en"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers", "pipeline_tag": "text-generation", "inference": true} | NLPark/Test1_Cecilia | null | [
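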
"transformers",
"pytorch",
"phi3",
"text-generation",
"conversational",
"custom_code",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:53:16+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #phi3 #text-generation #conversational #custom_code #en #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
| # Cecilia
4B, ORPO...
* microsoft/Phi-3-mini-128k-instruct
Chinese, English
Test 1 of all.
Released as an early preview of our v3 LLMs.
The v3 series covers the "Shi-Ci", "AnFeng" and "Cecilia" LLM products.
The sizes are labelled from small to large: "Nano", "Leap", "Pattern", "Avocet", "Robin", "Kestrel". | [
"# Cecilia\n4B, ORPO...\n\n* microsoft/Phi-3-mini-128k-instruct\n\nChinese, English\nTest 1 of all.\nReleased as an early preview of our v3 LLMs.\nThe v3 series covers the \"Shi-Ci\", \"AnFeng\" and \"Cecilia\" LLM products.\nThe sizes are labelled from small to large \"Nano\" \"Leap\" \"Pattern\" \"Avocet \"Robin\" \"Kestrel\""
] | [
"TAGS\n#transformers #pytorch #phi3 #text-generation #conversational #custom_code #en #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Cecilia\n4B, ORPO...\n\n* microsoft/Phi-3-mini-128k-instruct\n\nChinese, English\nTest 1 of all.\nReleased as an early preview of our v3 LLMs.\nThe v3 series covers the \"Shi-Ci\", \"AnFeng\" and \"Cecilia\" LLM products.\nThe sizes are labelled from small to large \"Nano\" \"Leap\" \"Pattern\" \"Avocet \"Robin\" \"Kestrel\""
] |
text-generation | transformers |
# mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-4bit
This model was converted to MLX format from [`DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental`]().
Refer to the [original model card](https://huggingface.co/DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"library_name": "transformers", "tags": ["mlx"]} | mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-4bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T11:54:23+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mlx #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-4bit
This model was converted to MLX format from ['DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental']().
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-4bit\nThis model was converted to MLX format from ['DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mlx #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# mayflowergmbh/Llama3_DiscoLM_German_8b_v0.1_experimental-4bit\nThis model was converted to MLX format from ['DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speaker-segmentation-fine-tuned-callhome-eng-5
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome eng dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4674
- Der: 0.1833
- False Alarm: 0.0583
- Missed Detection: 0.0725
- Confusion: 0.0526
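
(For reference, the DER reported here is the sum of the three component rates: 0.0583 + 0.0725 + 0.0526 ≈ 0.1833.)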
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20.0
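
As a reference point, the list above maps onto a `transformers` `TrainingArguments` object roughly as follows (a sketch, not the card's actual training script; the `output_dir` name is an assumption):

```python
# Hedged sketch: TrainingArguments matching the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="speaker-segmentation-fine-tuned-callhome-eng-5",  # assumption
    learning_rate=3e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=20.0,
)
```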
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.4679 | 1.0 | 181 | 0.4997 | 0.2011 | 0.0620 | 0.0789 | 0.0602 |
| 0.4255 | 2.0 | 362 | 0.4820 | 0.1948 | 0.0604 | 0.0770 | 0.0574 |
| 0.4084 | 3.0 | 543 | 0.4808 | 0.1920 | 0.0598 | 0.0769 | 0.0553 |
| 0.4017 | 4.0 | 724 | 0.4787 | 0.1906 | 0.0584 | 0.0760 | 0.0562 |
| 0.3911 | 5.0 | 905 | 0.4716 | 0.1885 | 0.0572 | 0.0762 | 0.0552 |
| 0.3845 | 6.0 | 1086 | 0.4676 | 0.1875 | 0.0618 | 0.0718 | 0.0538 |
| 0.3877 | 7.0 | 1267 | 0.4682 | 0.1877 | 0.0584 | 0.0739 | 0.0555 |
| 0.3828 | 8.0 | 1448 | 0.4681 | 0.1849 | 0.0579 | 0.0740 | 0.0530 |
| 0.3768 | 9.0 | 1629 | 0.4645 | 0.1842 | 0.0581 | 0.0733 | 0.0528 |
| 0.3697 | 10.0 | 1810 | 0.4662 | 0.1838 | 0.0576 | 0.0734 | 0.0529 |
| 0.3731 | 11.0 | 1991 | 0.4697 | 0.1852 | 0.0607 | 0.0715 | 0.0530 |
| 0.3691 | 12.0 | 2172 | 0.4642 | 0.1829 | 0.0572 | 0.0734 | 0.0523 |
| 0.3663 | 13.0 | 2353 | 0.4701 | 0.1854 | 0.0611 | 0.0708 | 0.0535 |
| 0.3641 | 14.0 | 2534 | 0.4678 | 0.1835 | 0.0591 | 0.0714 | 0.0530 |
| 0.3631 | 15.0 | 2715 | 0.4655 | 0.1835 | 0.0583 | 0.0724 | 0.0528 |
| 0.3685 | 16.0 | 2896 | 0.4693 | 0.1838 | 0.0589 | 0.0720 | 0.0529 |
| 0.365 | 17.0 | 3077 | 0.4675 | 0.1836 | 0.0584 | 0.0724 | 0.0528 |
| 0.3618 | 18.0 | 3258 | 0.4675 | 0.1834 | 0.0582 | 0.0726 | 0.0526 |
| 0.3651 | 19.0 | 3439 | 0.4675 | 0.1833 | 0.0582 | 0.0725 | 0.0526 |
| 0.3583 | 20.0 | 3620 | 0.4674 | 0.1833 | 0.0583 | 0.0725 | 0.0526 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-eng-5", "results": []}]} | tgrhn/speaker-segmentation-fine-tuned-callhome-eng-5 | null | [
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"dataset:diarizers-community/callhome",
"base_model:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:54:50+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us
| speaker-segmentation-fine-tuned-callhome-eng-5
==============================================
This model is a fine-tuned version of pyannote/segmentation-3.0 on the diarizers-community/callhome eng dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4674
* Der: 0.1833
* False Alarm: 0.0583
* Missed Detection: 0.0725
* Confusion: 0.0526
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 64
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* num\_epochs: 20.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.0+cu121
* Datasets 2.17.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 20.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 20.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers | # Llama-3-Orca-2.0-8B
<!-- Provide a quick summary of what the model is/does. -->

## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
I fine-tuned Llama-3 8B mainly on SlimOrca, along with other datasets to improve performance in math, coding, and writing. More data source information to come.
- **Developed by:** Locutusque
- **Model type:** Built with Meta Llama 3
- **Language(s) (NLP):** Many?
- **License:** Llama 3 license https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
## Quants
coming soon
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model has great performance in writing and coding.
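
Since the intended use below is conversational, a minimal chat sketch may help (assumptions: the repo id from this card's metadata, and a Llama-3-style chat template shipped with the tokenizer):

```python
# Hedged sketch: conversational use via the tokenizer's chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Locutusque/Llama-3-Orca-2.0-8B"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Write a haiku about orcas."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```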
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Conversational AI. | {"license": "other", "library_name": "transformers"} | Locutusque/Llama-3-Orca-2.0-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T11:55:17+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Llama-3-Orca-2.0-8B
!image/png
## Model Details
### Model Description
I fine-tuned Llama-3 8B mainly on SlimOrca, along with other datasets to improve performance in math, coding, and writing. More data source information to come.
- Developed by: Locutusque
- Model type: Built with Meta Llama 3
- Language(s) (NLP): Many?
- License: Llama 3 license URL
## Quants
coming soon
## Uses
This model has great performance in writing and coding.
### Direct Use
Conversational AI. | [
"# Llama-3-Orca-2.0-8B\n\n\n\n\n!image/png",
"## Model Details",
"### Model Description\n\n\n\nI fine-tuned llama-3 8B on mainly SlimOrca, along with other datasets to improve performance in math, coding, and writing. More data source information to come.\n\n- Developed by: Locutusque\n- Model type: Built with Meta Llama 3\n- Language(s) (NLP): Many?\n- License: Llama 3 license URL",
"## Quants\ncoming soon",
"## Uses\n\n\n\nThis model has great performance in writing and coding.",
"### Direct Use\n\n\n\nConversational AI."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Llama-3-Orca-2.0-8B\n\n\n\n\n!image/png",
"## Model Details",
"### Model Description\n\n\n\nI fine-tuned llama-3 8B on mainly SlimOrca, along with other datasets to improve performance in math, coding, and writing. More data source information to come.\n\n- Developed by: Locutusque\n- Model type: Built with Meta Llama 3\n- Language(s) (NLP): Many?\n- License: Llama 3 license URL",
"## Quants\ncoming soon",
"## Uses\n\n\n\nThis model has great performance in writing and coding.",
"### Direct Use\n\n\n\nConversational AI."
] |
null | transformers |
# Uploaded model
- **Developed by:** hunterlee27
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
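
To load the checkpoint back with Unsloth for inference, a minimal sketch (the repo id comes from this card's metadata; `max_seq_length` is an assumption):

```python
# Hedged sketch: inference with Unsloth's FastLanguageModel API.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="hunterlee27/chinese-llama3-chat",  # repo id from this card
    max_seq_length=2048,  # assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster generation path
```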
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | hunterlee27/chinese-llama3-chat | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:56:36+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: hunterlee27
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: hunterlee27\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: hunterlee27\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
summarization | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-multinews
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7276
- Rouge1: 14.7073
- Rouge2: 4.8849
- Rougel: 11.336
- Rougelsum: 13.1015
- Gen Len: 18.98
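
A quick way to try the checkpoint is the `transformers` summarization pipeline (a sketch; the repo id is taken from this card's metadata):

```python
# Hedged sketch: summarize a news article with the fine-tuned checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="Vexemous/t5-small-finetuned-multinews")
article = "..."  # placeholder: any news article text
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```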
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.2539 | 1.0 | 506 | 2.8142 | 14.3316 | 4.7443 | 11.1018 | 12.8337 | 18.98 |
| 3.0164 | 2.0 | 1012 | 2.7613 | 14.749 | 4.9494 | 11.3621 | 13.1838 | 18.98 |
| 2.9764 | 3.0 | 1518 | 2.7402 | 14.7452 | 4.8903 | 11.367 | 13.1816 | 18.98 |
| 2.9514 | 4.0 | 2024 | 2.7307 | 14.7309 | 4.8615 | 11.3391 | 13.1464 | 18.98 |
| 2.9446 | 5.0 | 2530 | 2.7276 | 14.7073 | 4.8849 | 11.336 | 13.1015 | 18.98 |
### Framework versions
- Transformers 4.40.1
- Pytorch 1.13.1+cu117
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-small", "pipeline_tag": "summarization", "model-index": [{"name": "t5-small-finetuned-multinews", "results": []}]} | Vexemous/t5-small-finetuned-multinews | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T11:57:59+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #summarization #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| t5-small-finetuned-multinews
============================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.7276
* Rouge1: 14.7073
* Rouge2: 4.8849
* Rougel: 11.336
* Rougelsum: 13.1015
* Gen Len: 18.98
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 1.13.1+cu117
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 1.13.1+cu117\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #summarization #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 1.13.1+cu117\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3770
- Wer: 35.1623
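
For a quick transcription check, a minimal sketch (repo id from this card's metadata; the audio path is a placeholder, and decoding a local file requires ffmpeg):

```python
# Hedged sketch: transcribe an audio file with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="LadislavVasina1/test-cv11-train-aug-test-aug",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```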
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3007 | 1.4440 | 1000 | 0.4410 | 41.9825 |
| 0.1741 | 2.8881 | 2000 | 0.3800 | 36.4994 |
| 0.0971 | 4.3321 | 3000 | 0.3751 | 35.3022 |
| 0.079 | 5.7762 | 4000 | 0.3770 | 35.1623 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-base", "model-index": [{"name": "test", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "common_voice_11_0", "type": "common_voice_11_0", "config": "cs", "split": "None", "args": "cs"}, "metrics": [{"type": "wer", "value": 35.16226470696578, "name": "Wer"}]}]}]} | LadislavVasina1/test-cv11-train-aug-test-aug | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T11:58:19+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_11_0 #base_model-openai/whisper-base #license-apache-2.0 #model-index #endpoints_compatible #region-us
| test
====
This model is a fine-tuned version of openai/whisper-base on the common\_voice\_11\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3770
* Wer: 35.1623
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 4000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_11_0 #base_model-openai/whisper-base #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jun10k/Qwen1.5-7B-MeChat
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
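A minimal sketch of downloading one quant from this repo and running it with `llama-cpp-python` (both calls are standard APIs of their packages; the chosen quant is one of the files listed below):

```python
# Hedged sketch: fetch a single-file quant and run a short completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Qwen1.5-7B-MeChat-GGUF",
    filename="Qwen1.5-7B-MeChat.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Hello, how are you?", max_tokens=64)["choices"][0]["text"])
```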
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q2_K.gguf) | Q2_K | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.IQ3_XS.gguf) | IQ3_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.IQ3_S.gguf) | IQ3_S | 3.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q3_K_S.gguf) | Q3_K_S | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.IQ3_M.gguf) | IQ3_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q3_K_L.gguf) | Q3_K_L | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q5_K_M.gguf) | Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q8_0.gguf) | Q8_0 | 8.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.f16.gguf) | f16 | 15.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["medical"], "base_model": "jun10k/Qwen1.5-7B-MeChat", "quantized_by": "mradermacher"} | mradermacher/Qwen1.5-7B-MeChat-GGUF | null | [
"transformers",
"gguf",
"medical",
"en",
"base_model:jun10k/Qwen1.5-7B-MeChat",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:00:27+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #medical #en #base_model-jun10k/Qwen1.5-7B-MeChat #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #medical #en #base_model-jun10k/Qwen1.5-7B-MeChat #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null | mlx |
# mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-4bit
This model was converted to MLX format from [`VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct`]().
Refer to the [original model card](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"language": ["de", "en"], "license": "other", "tags": ["two stage dpo", "dpo", "mlx"], "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. 
Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"} | mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-4bit | null | [
"mlx",
"safetensors",
"llama",
"two stage dpo",
"dpo",
"de",
"en",
"license:other",
"region:us"
] | null | 2024-04-26T12:00:51+00:00 | [] | [
"de",
"en"
] | TAGS
#mlx #safetensors #llama #two stage dpo #dpo #de #en #license-other #region-us
|
# mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-4bit
This model was converted to MLX format from ['VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct']().
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-4bit\nThis model was converted to MLX format from ['VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #llama #two stage dpo #dpo #de #en #license-other #region-us \n",
"# mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-4bit\nThis model was converted to MLX format from ['VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-LoRA-reminder
This model is a fine-tuned version of [dbmdz/bert-base-italian-uncased](https://huggingface.co/dbmdz/bert-base-italian-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2139
- Accuracy: 0.9545
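
To load the adapter back on top of its base model, a minimal sketch (assumptions: the adapter targets a sequence-classification head, which the accuracy metric suggests, and `num_labels=2`, which the card does not state):

```python
# Hedged sketch: attach the LoRA adapter to its base model with PEFT.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained(
    "dbmdz/bert-base-italian-uncased", num_labels=2  # num_labels is an assumption
)
model = PeftModel.from_pretrained(base, "AlexMason00/bert-LoRA-reminder")
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-uncased")
```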
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6677 | 1.0 | 22 | 0.6283 | 0.7955 |
| 0.6524 | 2.0 | 44 | 0.6168 | 0.8409 |
| 0.6299 | 3.0 | 66 | 0.6096 | 0.8182 |
| 0.6258 | 4.0 | 88 | 0.5980 | 0.8636 |
| 0.6206 | 5.0 | 110 | 0.5849 | 0.8409 |
| 0.5685 | 6.0 | 132 | 0.5694 | 0.8636 |
| 0.5896 | 7.0 | 154 | 0.5528 | 0.8864 |
| 0.5636 | 8.0 | 176 | 0.5361 | 0.8636 |
| 0.5681 | 9.0 | 198 | 0.5217 | 0.8864 |
| 0.5575 | 10.0 | 220 | 0.4968 | 0.8864 |
| 0.5097 | 11.0 | 242 | 0.4776 | 0.9091 |
| 0.5001 | 12.0 | 264 | 0.4541 | 0.9091 |
| 0.4712 | 13.0 | 286 | 0.4269 | 0.9318 |
| 0.4462 | 14.0 | 308 | 0.4016 | 0.9318 |
| 0.4255 | 15.0 | 330 | 0.3778 | 0.9545 |
| 0.3943 | 16.0 | 352 | 0.3566 | 0.9545 |
| 0.3889 | 17.0 | 374 | 0.3358 | 0.9545 |
| 0.3845 | 18.0 | 396 | 0.3169 | 0.9545 |
| 0.3397 | 19.0 | 418 | 0.2987 | 0.9545 |
| 0.3677 | 20.0 | 440 | 0.2862 | 0.9545 |
| 0.3271 | 21.0 | 462 | 0.2729 | 0.9545 |
| 0.3495 | 22.0 | 484 | 0.2607 | 0.9545 |
| 0.3057 | 23.0 | 506 | 0.2495 | 0.9545 |
| 0.2621 | 24.0 | 528 | 0.2399 | 0.9545 |
| 0.2911 | 25.0 | 550 | 0.2314 | 0.9545 |
| 0.2685 | 26.0 | 572 | 0.2253 | 0.9545 |
| 0.248 | 27.0 | 594 | 0.2200 | 0.9545 |
| 0.2421 | 28.0 | 616 | 0.2164 | 0.9545 |
| 0.2688 | 29.0 | 638 | 0.2147 | 0.9545 |
| 0.2723 | 30.0 | 660 | 0.2139 | 0.9545 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "dbmdz/bert-base-italian-uncased", "model-index": [{"name": "bert-LoRA-reminder", "results": []}]} | AlexMason00/bert-LoRA-reminder | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:dbmdz/bert-base-italian-uncased",
"license:mit",
"region:us"
] | null | 2024-04-26T12:01:49+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-dbmdz/bert-base-italian-uncased #license-mit #region-us
| bert-LoRA-reminder
==================
This model is a fine-tuned version of dbmdz/bert-base-italian-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2139
* Accuracy: 0.9545
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-dbmdz/bert-base-italian-uncased #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-ex
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0628
- Precision: 0.9296
- Recall: 0.9488
- F1: 0.9391
- Accuracy: 0.9864
## Model description
More information needed
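Absent task details, one plausible way to run it is the `token-classification` pipeline; the aggregation strategy and example sentence are assumptions:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Lily-Tina/bert-ex",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```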
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0775 | 1.0 | 1756 | 0.0731 | 0.8930 | 0.9308 | 0.9115 | 0.9813 |
| 0.0351 | 2.0 | 3512 | 0.0675 | 0.9340 | 0.9456 | 0.9398 | 0.9852 |
| 0.0213 | 3.0 | 5268 | 0.0628 | 0.9296 | 0.9488 | 0.9391 | 0.9864 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cpu
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-ex", "results": []}]} | Lily-Tina/bert-ex | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:02:18+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| bert-ex
=======
This model is a fine-tuned version of bert-base-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0628
* Precision: 0.9296
* Recall: 0.9488
* F1: 0.9391
* Accuracy: 0.9864
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.2+cpu
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cpu\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cpu\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-llama-adaptertoxic2nontoxic-100-50-0.006 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:03:02+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/v2ray/SchizoGPT-8x22B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SchizoGPT-8x22B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
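The split files are plain byte chunks, so joining them is simple concatenation; a minimal Python sketch (part names taken from the Q4_K_S row below):

```python
# Reassemble a multi-part GGUF download into a single file.
parts = [
    "SchizoGPT-8x22B.Q4_K_S.gguf.part1of2",
    "SchizoGPT-8x22B.Q4_K_S.gguf.part2of2",
]
with open("SchizoGPT-8x22B.Q4_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            while chunk := f.read(1 << 20):  # copy 1 MiB at a time
                out.write(chunk)
```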
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q2_K.gguf.part2of2) | Q2_K | 52.2 | |
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.IQ3_XS.gguf.part2of2) | IQ3_XS | 58.3 | |
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.IQ3_S.gguf.part2of2) | IQ3_S | 61.6 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q3_K_S.gguf.part2of2) | Q3_K_S | 61.6 | |
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.IQ3_M.gguf.part2of2) | IQ3_M | 64.6 | |
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q3_K_M.gguf.part2of2) | Q3_K_M | 67.9 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q3_K_L.gguf.part2of2) | Q3_K_L | 72.7 | |
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.IQ4_XS.gguf.part2of2) | IQ4_XS | 76.5 | |
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q4_K_S.gguf.part2of2) | Q4_K_S | 80.6 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q4_K_M.gguf.part2of2) | Q4_K_M | 85.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q5_K_S.gguf.part2of2) | Q5_K_S | 97.1 | |
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q5_K_M.gguf.part3of3) | Q5_K_M | 100.1 | |
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q6_K.gguf.part3of3) | Q6_K | 115.6 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q8_0.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q8_0.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q8_0.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/SchizoGPT-8x22B-GGUF/resolve/main/SchizoGPT-8x22B.Q8_0.gguf.part4of4) | Q8_0 | 149.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["not-for-all-audiences"], "datasets": ["v2ray/r-chatgpt-general-dump"], "base_model": "v2ray/SchizoGPT-8x22B", "quantized_by": "mradermacher"} | mradermacher/SchizoGPT-8x22B-GGUF | null | [
"transformers",
"not-for-all-audiences",
"en",
"dataset:v2ray/r-chatgpt-general-dump",
"base_model:v2ray/SchizoGPT-8x22B",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:05:25+00:00 | [] | [
"en"
] | TAGS
#transformers #not-for-all-audiences #en #dataset-v2ray/r-chatgpt-general-dump #base_model-v2ray/SchizoGPT-8x22B #license-mit #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #not-for-all-audiences #en #dataset-v2ray/r-chatgpt-general-dump #base_model-v2ray/SchizoGPT-8x22B #license-mit #endpoints_compatible #region-us \n"
] |
translation | transformers | !pip install sentencepiece transformers==4.33  # pinned: fix_tokenizer below touches NllbTokenizer internals (lang_code_to_id) that changed in later releases
import torch
from transformers import NllbTokenizer, AutoModelForSeq2SeqLM
def fix_tokenizer(tokenizer, new_lang='fer_Latn'):
""" Add a new language token to the tokenizer vocabulary (this should be done each time after its initialization) """
old_len = len(tokenizer) - int(new_lang in tokenizer.added_tokens_encoder)
tokenizer.lang_code_to_id[new_lang] = old_len-1
tokenizer.id_to_lang_code[old_len-1] = new_lang
# always move "mask" to the last position
tokenizer.fairseq_tokens_to_ids["<mask>"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset
tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id)
tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()}
if new_lang not in tokenizer._additional_special_tokens:
tokenizer._additional_special_tokens.append(new_lang)
# clear the added token encoder; otherwise a new token may end up there by mistake
tokenizer.added_tokens_encoder = {}
tokenizer.added_tokens_decoder = {}
MODEL_URL = "DinoDelija/nllb_english_fering"
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_URL)
tokenizer = NllbTokenizer.from_pretrained(MODEL_URL)
fix_tokenizer(tokenizer)
def translate(
text,
model,
tokenizer,
src_lang='eng_Latn',
tgt_lang='fer_Latn',
max_length='auto',
num_beams=4,
n_out=None,
**kwargs
):
tokenizer.src_lang = src_lang
encoded = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
if max_length == 'auto':
max_length = int(32 + 2.0 * encoded.input_ids.shape[1])
model.eval()
generated_tokens = model.generate(
**encoded.to(model.device),
forced_bos_token_id=tokenizer.lang_code_to_id[tgt_lang],
max_length=max_length,
num_beams=num_beams,
num_return_sequences=n_out or 1,
**kwargs
)
out = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
if isinstance(text, str) and n_out is None:
return out[0]
    return out
translate("красная птица", model=model, tokenizer=tokenizer)  # "red bird" in Russian; adjust src_lang for non-English inputs
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"translation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"has_space"
] | null | 2024-04-26T12:05:56+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #m2m_100 #text2text-generation #translation #en #license-mit #autotrain_compatible #endpoints_compatible #region-us #has_space
| !pip install sentencepiece transformers==4.33
import torch
from transformers import NllbTokenizer, AutoModelForSeq2SeqLM
def fix_tokenizer(tokenizer, new_lang='fer_Latn'):
""" Add a new language token to the tokenizer vocabulary (this should be done each time after its initialization) """
old_len = len(tokenizer) - int(new_lang in tokenizer.added_tokens_encoder)
tokenizer.lang_code_to_id[new_lang] = old_len-1
tokenizer.id_to_lang_code[old_len-1] = new_lang
# always move "mask" to the last position
tokenizer.fairseq_tokens_to_ids["<mask>"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset
tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id)
tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()}
if new_lang not in tokenizer._additional_special_tokens:
tokenizer._additional_special_tokens.append(new_lang)
# clear the added token encoder; otherwise a new token may end up there by mistake
tokenizer.added_tokens_encoder = {}
tokenizer.added_tokens_decoder = {}
MODEL_URL = "DinoDelija/nllb_english_fering"
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_URL)
tokenizer = NllbTokenizer.from_pretrained(MODEL_URL)
fix_tokenizer(tokenizer)
def translate(
text,
model,
tokenizer,
src_lang='eng_Latn',
tgt_lang='fer_Latn',
max_length='auto',
num_beams=4,
n_out=None,
kwargs
):
tokenizer.src_lang = src_lang
encoded = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
if max_length == 'auto':
max_length = int(32 + 2.0 * encoded.input_ids.shape[1])
URL()
generated_tokens = model.generate(
URL(URL),
forced_bos_token_id=tokenizer.lang_code_to_id[tgt_lang],
max_length=max_length,
num_beams=num_beams,
num_return_sequences=n_out or 1,
kwargs
)
out = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
if isinstance(text, str) and n_out is None:
return out[0]
    return out
translate("красная птица", model=model, tokenizer=tokenizer)
"# always move \"mask\" to the last position\n tokenizer.fairseq_tokens_to_ids[\"<mask>\"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset\n\n tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id)\n tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()}\n if new_lang not in tokenizer._additional_special_tokens:\n tokenizer._additional_special_tokens.append(new_lang)\n # clear the added token encoder; otherwise a new token may end up there by mistake\n tokenizer.added_tokens_encoder = {}\n tokenizer.added_tokens_decoder = {}\n\nMODEL_URL = \"DinoDelija/nllb_english_fering\"\nmodel = AutoModelForSeq2SeqLM.from_pretrained(MODEL_URL)\ntokenizer = NllbTokenizer.from_pretrained(MODEL_URL)\nfix_tokenizer(tokenizer)\n\ndef translate(\n text,\n model,\n tokenizer,\n src_lang='eng_Latn',\n tgt_lang='fer_Latn',\n max_length='auto',\n num_beams=4,\n n_out=None,\n kwargs\n):\n tokenizer.src_lang = src_lang\n encoded = tokenizer(text, return_tensors=\"pt\", truncation=True, max_length=512)\n if max_length == 'auto':\n max_length = int(32 + 2.0 * encoded.input_ids.shape[1])\n URL()\n generated_tokens = model.generate(\n URL(URL),\n forced_bos_token_id=tokenizer.lang_code_to_id[tgt_lang],\n max_length=max_length,\n num_beams=num_beams,\n num_return_sequences=n_out or 1,\n kwargs\n )\n out = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)\n if isinstance(text, str) and n_out is None:\n return out[0]\n return \n\ntranslate(\"ะบัะฐัะฝะฐั ะฟัะธัะฐ\", model=model, tokenizer=tokenizer)"
] | [
"TAGS\n#transformers #pytorch #m2m_100 #text2text-generation #translation #en #license-mit #autotrain_compatible #endpoints_compatible #region-us #has_space \n",
"# always move \"mask\" to the last position\n tokenizer.fairseq_tokens_to_ids[\"<mask>\"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset\n\n tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id)\n tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()}\n if new_lang not in tokenizer._additional_special_tokens:\n tokenizer._additional_special_tokens.append(new_lang)\n # clear the added token encoder; otherwise a new token may end up there by mistake\n tokenizer.added_tokens_encoder = {}\n tokenizer.added_tokens_decoder = {}\n\nMODEL_URL = \"DinoDelija/nllb_english_fering\"\nmodel = AutoModelForSeq2SeqLM.from_pretrained(MODEL_URL)\ntokenizer = NllbTokenizer.from_pretrained(MODEL_URL)\nfix_tokenizer(tokenizer)\n\ndef translate(\n text,\n model,\n tokenizer,\n src_lang='eng_Latn',\n tgt_lang='fer_Latn',\n max_length='auto',\n num_beams=4,\n n_out=None,\n kwargs\n):\n tokenizer.src_lang = src_lang\n encoded = tokenizer(text, return_tensors=\"pt\", truncation=True, max_length=512)\n if max_length == 'auto':\n max_length = int(32 + 2.0 * encoded.input_ids.shape[1])\n URL()\n generated_tokens = model.generate(\n URL(URL),\n forced_bos_token_id=tokenizer.lang_code_to_id[tgt_lang],\n max_length=max_length,\n num_beams=num_beams,\n num_return_sequences=n_out or 1,\n kwargs\n )\n out = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)\n if isinstance(text, str) and n_out is None:\n return out[0]\n return \n\ntranslate(\"ะบัะฐัะฝะฐั ะฟัะธัะฐ\", model=model, tokenizer=tokenizer)"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBertLoRa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the IMDB Movie dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0234
- Accuracy: {'accuracy': 0.884}
## Model description
More information needed
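For deployment, one option (a sketch, assuming the classifier head is saved with the adapter) is to merge the LoRA deltas into the base weights so inference no longer needs PEFT:

```python
from peft import AutoPeftModelForSequenceClassification

model = AutoPeftModelForSequenceClassification.from_pretrained("Abdo36/DistilBertLoRa")
merged = model.merge_and_unload()        # fold LoRA deltas into the DistilBERT weights
merged.save_pretrained("distilbert-imdb-merged")
```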
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log | 1.0 | 250 | 0.4076 | {'accuracy': 0.876} |
| 0.429 | 2.0 | 500 | 0.3507 | {'accuracy': 0.863} |
| 0.429 | 3.0 | 750 | 0.5018 | {'accuracy': 0.881} |
| 0.2304 | 4.0 | 1000 | 0.7036 | {'accuracy': 0.864} |
| 0.2304 | 5.0 | 1250 | 0.8113 | {'accuracy': 0.862} |
| 0.1136 | 6.0 | 1500 | 0.8428 | {'accuracy': 0.882} |
| 0.1136 | 7.0 | 1750 | 0.9134 | {'accuracy': 0.89} |
| 0.0153 | 8.0 | 2000 | 0.9723 | {'accuracy': 0.884} |
| 0.0153 | 9.0 | 2250 | 1.0225 | {'accuracy': 0.884} |
| 0.0089 | 10.0 | 2500 | 1.0234 | {'accuracy': 0.884} |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "DistilBertLoRa", "results": []}]} | Abdo36/DistilBertLoRa | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T12:06:51+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #region-us
| DistilBertLoRa
==============
This model is a fine-tuned version of distilbert-base-uncased on the IMDB Movie dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0234
* Accuracy: {'accuracy': 0.884}
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speaker-segmentation-fine-tuned-callhome-eng-6
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome eng dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5147
- Der: 0.1839
- False Alarm: 0.0668
- Missed Detection: 0.0694
- Confusion: 0.0477
## Model description
More information needed
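A loading sketch; the `diarizers` API below is assumed from that project's examples rather than taken from this card, so check its README before relying on it:

```python
from diarizers import SegmentationModel

# Load the fine-tuned checkpoint and convert it for use inside a pyannote pipeline.
model = SegmentationModel.from_pretrained(
    "tgrhn/speaker-segmentation-fine-tuned-callhome-eng-6"
)
pyannote_model = model.to_pyannote_model()  # assumed helper; drops into pyannote's diarization stack
```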
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.4083 | 1.0 | 362 | 0.4880 | 0.1967 | 0.0505 | 0.0840 | 0.0621 |
| 0.3919 | 2.0 | 724 | 0.4688 | 0.1852 | 0.0608 | 0.0717 | 0.0527 |
| 0.3708 | 3.0 | 1086 | 0.4637 | 0.1846 | 0.0581 | 0.0738 | 0.0527 |
| 0.3549 | 4.0 | 1448 | 0.4636 | 0.1809 | 0.0585 | 0.0689 | 0.0535 |
| 0.3299 | 5.0 | 1810 | 0.4727 | 0.1835 | 0.0587 | 0.0699 | 0.0549 |
| 0.3457 | 6.0 | 2172 | 0.4727 | 0.1861 | 0.0654 | 0.0672 | 0.0535 |
| 0.3241 | 7.0 | 2534 | 0.4921 | 0.1835 | 0.0621 | 0.0701 | 0.0513 |
| 0.3116 | 8.0 | 2896 | 0.4859 | 0.1839 | 0.0647 | 0.0677 | 0.0515 |
| 0.304 | 9.0 | 3258 | 0.4639 | 0.1788 | 0.0571 | 0.0718 | 0.0499 |
| 0.2896 | 10.0 | 3620 | 0.4844 | 0.1826 | 0.0659 | 0.0676 | 0.0490 |
| 0.2853 | 11.0 | 3982 | 0.4696 | 0.1787 | 0.0521 | 0.0777 | 0.0489 |
| 0.2831 | 12.0 | 4344 | 0.4858 | 0.1831 | 0.0662 | 0.0684 | 0.0484 |
| 0.2746 | 13.0 | 4706 | 0.4799 | 0.1828 | 0.0639 | 0.0703 | 0.0486 |
| 0.2685 | 14.0 | 5068 | 0.4951 | 0.1847 | 0.0658 | 0.0695 | 0.0494 |
| 0.2627 | 15.0 | 5430 | 0.5042 | 0.1829 | 0.0627 | 0.0713 | 0.0489 |
| 0.2551 | 16.0 | 5792 | 0.5066 | 0.1839 | 0.0671 | 0.0682 | 0.0486 |
| 0.2509 | 17.0 | 6154 | 0.5126 | 0.1854 | 0.0690 | 0.0695 | 0.0469 |
| 0.2502 | 18.0 | 6516 | 0.5196 | 0.1861 | 0.0676 | 0.0695 | 0.0490 |
| 0.247 | 19.0 | 6878 | 0.5187 | 0.1844 | 0.0670 | 0.0698 | 0.0476 |
| 0.2417 | 20.0 | 7240 | 0.5147 | 0.1839 | 0.0668 | 0.0694 | 0.0477 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-eng-6", "results": []}]} | tgrhn/speaker-segmentation-fine-tuned-callhome-eng-6 | null | [
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"dataset:diarizers-community/callhome",
"base_model:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:07:22+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us
| speaker-segmentation-fine-tuned-callhome-eng-6
==============================================
This model is a fine-tuned version of pyannote/segmentation-3.0 on the diarizers-community/callhome eng dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5147
* Der: 0.1839
* False Alarm: 0.0668
* Missed Detection: 0.0694
* Confusion: 0.0477
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.002
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.0+cu121
* Datasets 2.17.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #dataset-diarizers-community/callhome #base_model-pyannote/segmentation-3.0 #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.19.1"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repo's file list for the actual name):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the saved agent from the Hub and load it; evaluation code omitted.
checkpoint = load_from_hub(repo_id="Jurij1/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "259.45 +/- 15.43", "name": "mean_reward", "verified": false}]}]}]} | Jurij1/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-26T12:08:36+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b1-finetuned-cityscapes-1024-1024-straighter-only
This model is a fine-tuned version of [nvidia/segformer-b1-finetuned-cityscapes-1024-1024](https://huggingface.co/nvidia/segformer-b1-finetuned-cityscapes-1024-1024) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0331
- Mean Iou: 0.9378
- Mean Accuracy: 0.9644
- Overall Accuracy: 0.9883
- Accuracy Default: 1e-06
- Accuracy Pipe: 0.9182
- Accuracy Floor: 0.9790
- Accuracy Background: 0.9961
- Iou Default: 1e-06
- Iou Pipe: 0.8600
- Iou Floor: 0.9637
- Iou Background: 0.9896
## Model description
More information needed
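Absent further details, a minimal inference sketch (the image path is a placeholder; the four labels follow the card's metrics — default, pipe, floor, background):

```python
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
from PIL import Image

repo = "selvaa/segformer-b1-finetuned-cityscapes-1024-1024-straighter-only"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("frame.png")                        # placeholder input image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits                        # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)                            # per-pixel class ids
```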
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Default | Accuracy Pipe | Accuracy Floor | Accuracy Background | Iou Default | Iou Pipe | Iou Floor | Iou Background |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:-------------:|:--------------:|:-------------------:|:-----------:|:--------:|:---------:|:--------------:|
| 0.5349 | 1.0 | 36 | 0.1593 | 0.8143 | 0.8613 | 0.9661 | 1e-06 | 0.6324 | 0.9614 | 0.9903 | 1e-06 | 0.5490 | 0.9277 | 0.9660 |
| 0.1472 | 2.0 | 72 | 0.0977 | 0.8792 | 0.9255 | 0.9782 | 1e-06 | 0.8153 | 0.9690 | 0.9922 | 1e-06 | 0.7119 | 0.9456 | 0.9800 |
| 0.0902 | 3.0 | 108 | 0.0708 | 0.9014 | 0.9285 | 0.9820 | 1e-06 | 0.8194 | 0.9690 | 0.9972 | 1e-06 | 0.7669 | 0.9558 | 0.9815 |
| 0.0662 | 4.0 | 144 | 0.0586 | 0.9146 | 0.9552 | 0.9842 | 1e-06 | 0.9036 | 0.9666 | 0.9954 | 1e-06 | 0.8015 | 0.9567 | 0.9856 |
| 0.0543 | 5.0 | 180 | 0.0490 | 0.9225 | 0.9514 | 0.9856 | 1e-06 | 0.8844 | 0.9734 | 0.9964 | 1e-06 | 0.8208 | 0.9606 | 0.9860 |
| 0.0486 | 6.0 | 216 | 0.0445 | 0.9252 | 0.9640 | 0.9862 | 1e-06 | 0.9244 | 0.9729 | 0.9947 | 1e-06 | 0.8265 | 0.9616 | 0.9875 |
| 0.042 | 7.0 | 252 | 0.0414 | 0.9279 | 0.9658 | 0.9867 | 1e-06 | 0.9315 | 0.9699 | 0.9959 | 1e-06 | 0.8332 | 0.9626 | 0.9880 |
| 0.0389 | 8.0 | 288 | 0.0381 | 0.9322 | 0.9695 | 0.9874 | 1e-06 | 0.9413 | 0.9716 | 0.9956 | 1e-06 | 0.8448 | 0.9632 | 0.9888 |
| 0.0359 | 9.0 | 324 | 0.0386 | 0.9319 | 0.9629 | 0.9871 | 1e-06 | 0.9215 | 0.9702 | 0.9970 | 1e-06 | 0.8451 | 0.9630 | 0.9877 |
| 0.034 | 10.0 | 360 | 0.0374 | 0.9313 | 0.9632 | 0.9873 | 1e-06 | 0.9202 | 0.9730 | 0.9965 | 1e-06 | 0.8422 | 0.9634 | 0.9883 |
| 0.0322 | 11.0 | 396 | 0.0383 | 0.9300 | 0.9570 | 0.9871 | 1e-06 | 0.8993 | 0.9746 | 0.9971 | 1e-06 | 0.8379 | 0.9642 | 0.9878 |
| 0.0306 | 12.0 | 432 | 0.0353 | 0.9340 | 0.9678 | 0.9876 | 1e-06 | 0.9358 | 0.9710 | 0.9965 | 1e-06 | 0.8494 | 0.9637 | 0.9888 |
| 0.0292 | 13.0 | 468 | 0.0337 | 0.9355 | 0.9734 | 0.9881 | 1e-06 | 0.9527 | 0.9719 | 0.9957 | 1e-06 | 0.8529 | 0.9637 | 0.9898 |
| 0.0286 | 14.0 | 504 | 0.0334 | 0.9355 | 0.9686 | 0.9881 | 1e-06 | 0.9352 | 0.9745 | 0.9960 | 1e-06 | 0.8530 | 0.9641 | 0.9895 |
| 0.0271 | 15.0 | 540 | 0.0325 | 0.9389 | 0.9682 | 0.9885 | 1e-06 | 0.9325 | 0.9758 | 0.9964 | 1e-06 | 0.8624 | 0.9648 | 0.9897 |
| 0.0266 | 16.0 | 576 | 0.0327 | 0.9373 | 0.9696 | 0.9883 | 1e-06 | 0.9378 | 0.9748 | 0.9961 | 1e-06 | 0.8576 | 0.9646 | 0.9897 |
| 0.0257 | 17.0 | 612 | 0.0350 | 0.9330 | 0.9673 | 0.9877 | 1e-06 | 0.9302 | 0.9766 | 0.9952 | 1e-06 | 0.8463 | 0.9636 | 0.9892 |
| 0.0246 | 18.0 | 648 | 0.0333 | 0.9354 | 0.9665 | 0.9881 | 1e-06 | 0.9269 | 0.9764 | 0.9961 | 1e-06 | 0.8522 | 0.9644 | 0.9896 |
| 0.0242 | 19.0 | 684 | 0.0326 | 0.9378 | 0.9681 | 0.9884 | 1e-06 | 0.9311 | 0.9772 | 0.9959 | 1e-06 | 0.8588 | 0.9648 | 0.9896 |
| 0.0231 | 20.0 | 720 | 0.0339 | 0.9366 | 0.9665 | 0.9883 | 1e-06 | 0.9256 | 0.9781 | 0.9958 | 1e-06 | 0.8557 | 0.9646 | 0.9896 |
| 0.0236 | 21.0 | 756 | 0.0333 | 0.9365 | 0.9702 | 0.9883 | 1e-06 | 0.9375 | 0.9779 | 0.9951 | 1e-06 | 0.8552 | 0.9644 | 0.9900 |
| 0.0227 | 22.0 | 792 | 0.0327 | 0.9375 | 0.9690 | 0.9885 | 1e-06 | 0.9339 | 0.9773 | 0.9958 | 1e-06 | 0.8577 | 0.9649 | 0.9900 |
| 0.0226 | 23.0 | 828 | 0.0331 | 0.9378 | 0.9644 | 0.9883 | 1e-06 | 0.9182 | 0.9790 | 0.9961 | 1e-06 | 0.8600 | 0.9637 | 0.9896 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1
- Datasets 2.15.0
- Tokenizers 0.15.0
| {"license": "other", "tags": ["generated_from_trainer"], "base_model": "nvidia/segformer-b1-finetuned-cityscapes-1024-1024", "model-index": [{"name": "segformer-b1-finetuned-cityscapes-1024-1024-straighter-only", "results": []}]} | selvaa/segformer-b1-finetuned-cityscapes-1024-1024-straighter-only | null | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:nvidia/segformer-b1-finetuned-cityscapes-1024-1024",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:10:12+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #segformer #generated_from_trainer #base_model-nvidia/segformer-b1-finetuned-cityscapes-1024-1024 #license-other #endpoints_compatible #region-us
| segformer-b1-finetuned-cityscapes-1024-1024-straighter-only
===========================================================
This model is a fine-tuned version of nvidia/segformer-b1-finetuned-cityscapes-1024-1024 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0331
* Mean Iou: 0.9378
* Mean Accuracy: 0.9644
* Overall Accuracy: 0.9883
* Accuracy Default: 1e-06
* Accuracy Pipe: 0.9182
* Accuracy Floor: 0.9790
* Accuracy Background: 0.9961
* Iou Default: 1e-06
* Iou Pipe: 0.8600
* Iou Floor: 0.9637
* Iou Background: 0.9896
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 3
* eval\_batch\_size: 3
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 60
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.35.2
* Pytorch 2.0.1
* Datasets 2.15.0
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.0.1\n* Datasets 2.15.0\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #tensorboard #safetensors #segformer #generated_from_trainer #base_model-nvidia/segformer-b1-finetuned-cityscapes-1024-1024 #license-other #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.0.1\n* Datasets 2.15.0\n* Tokenizers 0.15.0"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2b-dolly-qa
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0211
## Model description
More information needed
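A minimal inference sketch with PEFT (the prompt format is an assumption; gated access to google/gemma-2b is required):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads google/gemma-2b and applies this LoRA adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained("snarktank/gemma-2b-dolly-qa")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("Instruction: Who wrote Don Quixote?\nResponse:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```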
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 1480
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.9203 | 1.6393 | 100 | 2.5631 |
| 2.4253 | 3.2787 | 200 | 2.2695 |
| 2.2443 | 4.9180 | 300 | 2.1581 |
| 2.1512 | 6.5574 | 400 | 2.1002 |
| 2.1033 | 8.1967 | 500 | 2.0723 |
| 2.0876 | 9.8361 | 600 | 2.0565 |
| 2.0668 | 11.4754 | 700 | 2.0460 |
| 2.0478 | 13.1148 | 800 | 2.0387 |
| 2.0403 | 14.7541 | 900 | 2.0328 |
| 2.0366 | 16.3934 | 1000 | 2.0286 |
| 2.0238 | 18.0328 | 1100 | 2.0255 |
| 2.0231 | 19.6721 | 1200 | 2.0233 |
| 2.0126 | 21.3115 | 1300 | 2.0220 |
| 2.0164 | 22.9508 | 1400 | 2.0211 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.0.post0+cxx11.abi
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "google/gemma-2b", "model-index": [{"name": "gemma-2b-dolly-qa", "results": []}]} | snarktank/gemma-2b-dolly-qa | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-04-26T12:10:44+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us
| gemma-2b-dolly-qa
=================
This model is a fine-tuned version of google/gemma-2b on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0211
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.05
* training\_steps: 1480
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.1
* Pytorch 2.1.0.post0+URL
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 1480",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.0.post0+URL\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 1480",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.0.post0+URL\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
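
Pending the author's snippet, a generic PEFT loading sketch is shown below. The base model id comes from the metadata; the `clf` suffix in the repo name hints at a classification head, so the auto class used here is an assumption.

```python
# Hedged sketch: generic PEFT adapter loading pattern only.
# If loading fails, the adapter may expect AutoModelForSequenceClassification instead.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "openlm-research/open_llama_3b_v2"          # base model from the card metadata
adapter_id = "yiyic/llama3b-text-ent-lora-clf-epoch-2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)
model.eval()
```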
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 | {"library_name": "peft", "base_model": "openlm-research/open_llama_3b_v2"} | yiyic/llama3b-text-ent-lora-clf-epoch-2 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openlm-research/open_llama_3b_v2",
"region:us"
] | null | 2024-04-26T12:15:36+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-openlm-research/open_llama_3b_v2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.7.2.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.2.dev0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-openlm-research/open_llama_3b_v2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.2.dev0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
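
As a placeholder, a hedged loading sketch follows. The base is the gated `meta-llama/Llama-2-7b-hf` named in the metadata, so access must be granted on the Hub first.

```python
# Hedged sketch: generic PEFT adapter loading; ids come from the card metadata.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # gated base model
adapter_id = "cgihlstorf/NEW_finetuned_llama27b32_1_0.0003_alternate_RANDOM_25_pct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)
```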
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "meta-llama/Llama-2-7b-hf"} | cgihlstorf/NEW_finetuned_llama27b32_1_0.0003_alternate_RANDOM_25_pct | null | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-04-26T12:15:55+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-hf #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-hf #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
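
As a placeholder, a hedged sketch using the same PEFT pattern as the sibling `ent` adapter, with an optional merge step for adapter-free inference:

```python
# Hedged sketch: load the LoRA adapter, then optionally fold it into the base weights.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b_v2")
model = PeftModel.from_pretrained(base, "yiyic/llama3b-text-prop-lora-clf-epoch-2")
model = model.merge_and_unload()  # optional: merge LoRA weights for standalone inference
```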
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 | {"library_name": "peft", "base_model": "openlm-research/open_llama_3b_v2"} | yiyic/llama3b-text-prop-lora-clf-epoch-2 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openlm-research/open_llama_3b_v2",
"region:us"
] | null | 2024-04-26T12:16:21+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-openlm-research/open_llama_3b_v2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.7.2.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.2.dev0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-openlm-research/open_llama_3b_v2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.2.dev0"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Siren1000-Chatbot-Phi2
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
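
A hedged reconstruction of these values as a `TrainingArguments` object; the output path is a placeholder, and all omitted arguments fall back to library defaults, which match the Adam settings listed above.

```python
# Hedged sketch: only the hyperparameters documented in this card are set explicitly.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="siren1000-chatbot-phi2",  # placeholder; the real path is not documented
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```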
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "Siren1000-Chatbot-Phi2", "results": []}]} | RayBoustany/Siren1000-Chatbot-Phi2 | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-26T12:20:10+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
|
# Siren1000-Chatbot-Phi2
This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# Siren1000-Chatbot-Phi2\n\nThis model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n",
"# Siren1000-Chatbot-Phi2\n\nThis model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0718
- Accuracy: 0.9744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2579 | 1.0 | 190 | 0.1356 | 0.9519 |
| 0.1805 | 2.0 | 380 | 0.0895 | 0.97 |
| 0.1528 | 3.0 | 570 | 0.0718 | 0.9744 |
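
As a usage illustration (not part of the generated card), the checkpoint should be loadable through the standard pipeline API; the file path and the EuroSAT-style input assumption are illustrative.

```python
# Hedged usage sketch: standard image-classification pipeline over this checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="rhlc/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("tile.png"))  # placeholder path; training used an EuroSAT-style imagefolder
```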
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "swin-tiny-patch4-window7-224-finetuned-eurosat", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9744444444444444, "name": "Accuracy"}]}]}]} | rhlc/swin-tiny-patch4-window7-224-finetuned-eurosat | null | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:22:23+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| swin-tiny-patch4-window7-224-finetuned-eurosat
==============================================
This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0718
* Accuracy: 0.9744
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8684
- Matthews Correlation: 0.5356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5163 | 1.0 | 535 | 0.4563 | 0.4507 |
| 0.342 | 2.0 | 1070 | 0.4735 | 0.5230 |
| 0.2315 | 3.0 | 1605 | 0.6357 | 0.5262 |
| 0.1717 | 4.0 | 2140 | 0.8156 | 0.5189 |
| 0.1259 | 5.0 | 2675 | 0.8684 | 0.5356 |
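
As a usage illustration (not from the generated card): CoLA fine-tunes are typically binary acceptability classifiers, though the label mapping is undocumented here.

```python
# Hedged usage sketch: text-classification pipeline over this checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kkater/distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book was read by the whole class."))  # illustrative sentence
```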
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["matthews_correlation"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": []}]} | kkater/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:22:39+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8684
* Matthews Correlation: 0.5356
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
 To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
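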
| {"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-CartPole-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "500.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | hossniper/Reinforce-CartPole-v1 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-04-26T12:23:54+00:00 | [] | [] | TAGS
#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing CartPole-v1
This is a trained model of a Reinforce agent playing CartPole-v1.
 To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-bert-finetuned-squad
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
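
As a quick, hedged usage sketch (not part of the generated card), SQuAD-style extractive QA checkpoints are normally served through the question-answering pipeline; the example inputs are illustrative.

```python
# Hedged usage sketch: question-answering pipeline over this checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="momo345/distilled-bert-finetuned-squad")
print(qa(
    question="Who wrote the report?",                      # illustrative inputs
    context="The report was written by the audit team.",
))
```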
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "distilled-bert-finetuned-squad", "results": []}]} | momo345/distilled-bert-finetuned-squad | null | [
"transformers",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:25:06+00:00 | [] | [] | TAGS
#transformers #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
|
# distilled-bert-finetuned-squad
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# distilled-bert-finetuned-squad\n\nThis model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 48\n- eval_batch_size: 48\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 6\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.0\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"# distilled-bert-finetuned-squad\n\nThis model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 48\n- eval_batch_size: 48\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 6\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.0\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
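
Pending the author's snippet, a hedged loading sketch follows; the repo's `8-bit` tag suggests bitsandbytes quantization, which the config below assumes. The prompt format is not documented in this card.

```python
# Hedged sketch: load the checkpoint in 8-bit (assumes bitsandbytes and accelerate
# are installed); the quantization choice is inferred from the repo tags only.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "vaatsav06/Llama3_medqa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # repo is tagged 8-bit
    device_map="auto",
)
```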
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "sft"]} | vaatsav06/Llama3_medqa | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-26T12:26:23+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
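The card itself leaves this blank. As a hedged sketch (not taken from the card): the repo id comes from this page, and `trust_remote_code=True` is inferred from the repo's `custom_code` tag, which implies the repository ships its own modeling code.

```python
# Hedged sketch, not from the model card: repo id taken from this page; the
# custom_code tag implies custom modeling code, hence trust_remote_code=True.
# Review the remote code before executing it.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RayBoustany/Siren1000-Chatbot-Phi2-Merged"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("Hello! How can you help me today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```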
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | RayBoustany/Siren1000-Chatbot-Phi2-Merged | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T12:27:28+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** Digeriuz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
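Since the repository is published in GGUF format, one hedged way to run it locally is through `llama-cpp-python`. This is a sketch under assumptions: it relies on `Llama.from_pretrained` and assumes a single GGUF file in the repo; check the repo's file list for the actual filename.

```python
# Sketch only: assumes llama-cpp-python is installed and the repo contains one
# GGUF file; adjust the filename pattern and n_ctx to what is actually published.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Digeriuz/mistral-7b-bnb-4bit-annomi",
    filename="*.gguf",  # glob pattern matching the published GGUF file
    n_ctx=4096,
)
out = llm("Below is a question. Answer briefly.\n\nQ: What is 2+2?\nA:", max_tokens=32)
print(out["choices"][0]["text"])
```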
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | Digeriuz/mistral-7b-bnb-4bit-annomi | null | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:29:32+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: Digeriuz
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: Digeriuz\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: Digeriuz\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
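Nothing is provided here; a generic, hedged loading sketch follows. The repo id and the chat-style usage are inferred only from this page's `stablelm`/`conversational` tags, so the model may expect a different prompt format.

```python
# Hedged sketch, not from the card: chat usage assumed from the conversational tag.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "OwOOwO/final2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

chat = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```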
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/final2 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:30:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** ruslandev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This model is finetuned on the data of [Samantha](https://erichartford.com/meet-samantha).
Prompt format is Alpaca. I used the same system prompt as the original Samantha.
```
"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{SYSTEM_PROMPT}
### Input:
{QUESTION}
### Response:
"""
```
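For reference, a minimal generation sketch with this template might look like the following. The system prompt and question below are placeholders (the original Samantha system prompt is not reproduced here), and standard `transformers` generation is assumed.

```python
# Hedged sketch: standard transformers generation with the Alpaca template above.
# SYSTEM_PROMPT is a placeholder -- substitute the original Samantha system prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ruslandev/llama-3-8b-samantha"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

SYSTEM_PROMPT = "You are Samantha, a caring and thoughtful companion."  # placeholder
QUESTION = "What do you enjoy talking about?"
prompt = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{SYSTEM_PROMPT}\n\n### Input:\n{QUESTION}\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```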
# Training
[gptchain](https://github.com/RuslanPeresy/gptchain) framework has been used for training.
## Training hyperparameters
- learning_rate: 2e-4
- seed: 3407
- gradient_accumulation_steps: 4
- per_device_train_batch_size: 2
- optimizer: adamw_8bit
- lr_scheduler_type: linear
- warmup_steps: 5
- num_epochs: 2
- weight_decay: 0.01
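For orientation, these settings map onto the standard `transformers`/TRL arguments roughly as follows. This is a sketch only; the actual gptchain invocation may differ, and `output_dir` is an assumption.

```python
# Rough mapping of the hyperparameters above onto TrainingArguments; argument
# names follow the standard transformers API ("adamw_8bit" is the bitsandbytes
# 8-bit AdamW), not necessarily the gptchain config.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",  # assumed
    learning_rate=2e-4,
    seed=3407,
    gradient_accumulation_steps=4,
    per_device_train_batch_size=2,
    optim="adamw_8bit",
    lr_scheduler_type="linear",
    warmup_steps=5,
    num_train_epochs=2,
    weight_decay=0.01,
)
```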
## Training results
|Training Loss | Epoch | Step |
|--------------|-------|------|
|2.0778 |0.0 |1 |
|0.6255 |0.18 |120 |
|0.6208 |0.94 |620 |
|0.6244 |2.0 |1306 |
A 2-epoch finetune of llama-3-8b took 1 hour on a single A100 with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "datasets": ["cognitivecomputations/samantha-data"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | ruslandev/llama-3-8b-samantha | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"dataset:cognitivecomputations/samantha-data",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:31:12+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #dataset-cognitivecomputations/samantha-data #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| Uploaded model
==============
* Developed by: ruslandev
* License: apache-2.0
* Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This model is finetuned on the data of Samantha.
Prompt format is Alpaca. I used the same system prompt as the original Samantha.
Training
========
gptchain framework has been used for training.
Training hyperparameters
------------------------
* learning\_rate: 2e-4
* seed: 3407
* gradient\_accumulation\_steps: 4
* per\_device\_train\_batch\_size: 2
* optimizer: adamw\_8bit
* lr\_scheduler\_type: linear
* warmup\_steps: 5
* num\_epochs: 2
* weight\_decay: 0.01
Training results
----------------
| Training Loss | Epoch | Step |
|---------------|-------|------|
| 2.0778        | 0.0   | 1    |
| 0.6255        | 0.18  | 120  |
| 0.6208        | 0.94  | 620  |
| 0.6244        | 2.0   | 1306 |
A 2-epoch finetune of llama-3-8b took 1 hour on a single A100 with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #dataset-cognitivecomputations/samantha-data #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# **Introduction**
The model was trained to translate a single English sentence into Korean using a 1.18M-sentence-pair general-domain dataset.
Dataset: [nayohan/aihub-en-ko-translation-1.2m](https://huggingface.co/datasets/nayohan/aihub-en-ko-translation-1.2m)
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "nayohan/llama3-8b-it-translation-general-en-ko-1sent"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.bfloat16
)
```
### **Generating Text**
To generate text, use the following Python code. Currently, this model only supports English-to-Korean translation; other languages, the reverse direction, and other styles are not supported.
```python
style="written"
SYSTEM_PROMPT=f"Acts as a translator. Translate en sentences into ko sentences in {style} style."
s = "The aerospace industry is a flower in the field of technology and science."
conversation = [{'role': 'system', 'content': SYSTEM_PROMPT},
{'role': 'user', 'content': s}]
inputs = tokenizer.apply_chat_template(
conversation,
tokenize=True,
add_generation_prompt=True,
return_tensors='pt'
).to("cuda")
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][len(inputs[0]):]))
```
```
# Result
# INPUT: <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nActs as a translator. Translate en sentences into ko sentences in colloquial style.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nThe aerospace industry is a flower in the field of technology and science.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n
# OUTPUT: 항공 우주 산업은 기술과 과학의 꽃입니다.<|eot_id|>

# INPUT: <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nActs as a translator. Translate en sentences into ko sentences in colloquial style.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nTechnical and basic sciences are very important in terms of research. It has a significant impact on the industrial development of a country. Government policies control the research budget.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n
# OUTPUT: 기술과 기초과학은 연구 측면에서 매우 중요합니다. 한 국가의 산업 발전에 큰 영향을 미칩니다. 정부 정책은 연구 예산을 통제합니다.<|eot_id|>
```
### **Citation**
```bibtex
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
Our training code can be found here: [TBD] | {"language": ["en", "ko"], "license": "llama3", "library_name": "transformers", "tags": ["translation", "enko", "ko"], "datasets": ["nayohan/aihub-en-ko-translation-1.2m"], "base_model": ["meta-llama/Meta-Llama-3-8B-Instruct"], "pipeline_tag": "text-generation"} | nayohan/llama3-8b-it-translation-general-en-ko-1sent | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"translation",
"enko",
"ko",
"conversational",
"en",
"dataset:nayohan/aihub-en-ko-translation-1.2m",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T12:32:33+00:00 | [] | [
"en",
"ko"
] | TAGS
#transformers #safetensors #llama #text-generation #translation #enko #ko #conversational #en #dataset-nayohan/aihub-en-ko-translation-1.2m #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Introduction
The model was trained to translate a single English sentence into Korean using a 1.18M-sentence-pair general-domain dataset.
Dataset: nayohan/aihub-en-ko-translation-1.2m
### Loading the Model
Use the following Python code to load the model:
### Generating Text
To generate text, use the following Python code. Currently, this model only supports English-to-Korean translation; other languages, the reverse direction, and other styles are not supported.
### Citation
Our training code can be found here: [TBD] | [
"# Introduction\nThe model was trained to translate a single sentence from English to Korean with a 1.18M dataset in the general domain.\nDataset: nayohan/aihub-en-ko-translation-1.2m",
"### Loading the Model\n\nUse the following Python code to load the model:",
"### Generating Text\nTo generate text, use the following Python code: Currently, this model only support English to Korean, not other languages or reverse and styles.",
"### Citation\n\nOur trainig code can be found here: [TBD]"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #translation #enko #ko #conversational #en #dataset-nayohan/aihub-en-ko-translation-1.2m #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Introduction\nThe model was trained to translate a single sentence from English to Korean with a 1.18M dataset in the general domain.\nDataset: nayohan/aihub-en-ko-translation-1.2m",
"### Loading the Model\n\nUse the following Python code to load the model:",
"### Generating Text\nTo generate text, use the following Python code: Currently, this model only support English to Korean, not other languages or reverse and styles.",
"### Citation\n\nOur trainig code can be found here: [TBD]"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# virus_pythia_14_1024_compliment
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "virus_pythia_14_1024_compliment", "results": []}]} | Hack90/virus_pythia_14_1024_compliment | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T12:35:40+00:00 | [] | [] | TAGS
#transformers #safetensors #gpt_neox #text-generation #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# virus_pythia_14_1024_compliment
This model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# virus_pythia_14_1024_compliment\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 40\n- eval_batch_size: 40\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #generated_from_trainer #base_model-EleutherAI/pythia-14m #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# virus_pythia_14_1024_compliment\n\nThis model is a fine-tuned version of EleutherAI/pythia-14m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 40\n- eval_batch_size: 40\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llm_output
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
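As a rough sketch, the setup above corresponds to a PEFT/TRL run along these lines. The LoRA configuration and dataset are not recorded in this card, so the values below are assumptions; the TRL API shown matches the Transformers 4.36-era versions listed under framework versions.

```python
# Hedged reconstruction: hyperparameters from this card; LoRA rank/alpha and the
# placeholder dataset are illustrative assumptions only.
from datasets import Dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

train_ds = Dataset.from_dict({"text": ["### Question: ...\n### Answer: ..."]})  # placeholder data

args = TrainingArguments(
    output_dir="llm_output",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=2,
)
trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    args=args,
    train_dataset=train_ds,
    dataset_text_field="text",
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),  # assumed values
)
trainer.train()
```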
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "llm_output", "results": []}]} | RohithMidigudla/llm_output | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T12:42:39+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
|
# llm_output
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2 | [
"# llm_output\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.36.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.16.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n",
"# llm_output\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.36.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.16.0\n- Tokenizers 0.15.2"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hc-impaired-all-v3
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the honzapucalek/hc_impaired_all_v3 cs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3837
- Wer: 0.1107
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
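Expressed as `Seq2SeqTrainingArguments`, the configuration above looks roughly like this. It is a sketch only; the original training script is not part of the card, and `output_dir` and `fp16` (standing in for "Native AMP") are assumptions.

```python
# Hedged sketch: the listed hyperparameters mapped onto Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="hc-impaired-all-v3",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # assumed realization of "Native AMP"
)
```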
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0275 | 6.87 | 1000 | 0.2212 | 0.1163 |
| 0.0021 | 13.75 | 2000 | 0.3051 | 0.1123 |
| 0.0004 | 20.62 | 3000 | 0.3517 | 0.1113 |
| 0.0001 | 27.49 | 4000 | 0.3760 | 0.1104 |
| 0.0001 | 34.36 | 5000 | 0.3837 | 0.1107 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["honzapucalek/hc_impaired_all_v3"], "metrics": ["wer"], "base_model": "openai/whisper-large-v3", "model-index": [{"name": "hc-impaired-all-v3", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "honzapucalek/hc_impaired_all_v3 cs", "type": "honzapucalek/hc_impaired_all_v3", "config": "cs", "split": "test", "args": "cs"}, "metrics": [{"type": "wer", "value": 0.11072981011210249, "name": "Wer"}]}]}]} | honzapucalek/hc-impaired-all-v3 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:honzapucalek/hc_impaired_all_v3",
"base_model:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:42:57+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #dataset-honzapucalek/hc_impaired_all_v3 #base_model-openai/whisper-large-v3 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| hc-impaired-all-v3
==================
This model is a fine-tuned version of openai/whisper-large-v3 on the honzapucalek/hc\_impaired\_all\_v3 cs dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3837
* Wer: 0.1107
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 5000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.37.2
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #dataset-honzapucalek/hc_impaired_all_v3 #base_model-openai/whisper-large-v3 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1"
] |