pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1 to 900k) | metadata (stringlengths 2 to 438k) | id (stringlengths 5 to 122) | last_modified (null) | tags (listlengths 1 to 1.84k) | sha (null) | created_at (stringlengths 25 to 25) | arxiv (listlengths 0 to 201) | languages (listlengths 0 to 1.83k) | tags_str (stringlengths 17 to 9.34k) | text_str (stringlengths 0 to 389k) | text_lists (listlengths 0 to 722) | processed_texts (listlengths 1 to 723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
reinforcement-learning | stable-baselines3 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Kommunarus -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Kommunarus -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
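Alternatively, the checkpoint can be loaded directly in Python via `huggingface_sb3`. This is a minimal sketch, assuming the repo follows the usual RL Zoo artifact naming (the exact filename may differ):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub; the filename is the assumed RL Zoo default.
checkpoint = load_from_hub(
    repo_id="Kommunarus/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
# custom_objects may be needed if your SB3/gym versions differ from the training setup.
model = DQN.load(checkpoint)
```

Evaluating the loaded agent requires recreating the Atari environment with the same `AtariWrapper` preprocessing and 4-frame stacking listed under Hyperparameters; the `rl_zoo3.enjoy` command above handles that automatically.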
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Kommunarus
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 5000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
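These arguments are passed through to the environment constructor, roughly equivalent to the following sketch (assuming gymnasium with the Atari extras installed):

```python
import gymnasium as gym

# Requires ale-py and the Atari ROMs; rgb_array rendering is what enables video capture.
env = gym.make("SpaceInvadersNoFrameskip-v4", render_mode="rgb_array")
```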
| {"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "571.50 +/- 63.99", "name": "mean_reward", "verified": false}]}]}]} | Kommunarus/dqn-SpaceInvadersNoFrameskip-v4 | null | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| null | 2024-04-16T11:35:57+00:00 | []
| []
| TAGS
#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# DQN Agent playing SpaceInvadersNoFrameskip-v4
This is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4
using the stable-baselines3 library
and the RL Zoo.
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: URL
SB3: URL
SB3 Contrib: URL
Install the RL Zoo (with SB3 and SB3-Contrib):
If you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:
## Training (with the RL Zoo)
## Hyperparameters
# Environment Arguments
| [
"# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.",
"## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:",
"## Training (with the RL Zoo)",
"## Hyperparameters",
"# Environment Arguments"
]
| [
"TAGS\n#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.",
"## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:",
"## Training (with the RL Zoo)",
"## Hyperparameters",
"# Environment Arguments"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.01-len_3-filtered-negative
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
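For illustration, these settings map onto `transformers.TrainingArguments` roughly as follows; this is a sketch under assumed defaults, not the authors' exact training script:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ruBert-base-sberquad-0.01-len_3-filtered-negative",  # illustrative
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 8 * 4 = effective batch size of 32
    lr_scheduler_type="linear",
    max_steps=5000,
)
```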
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.01-len_3-filtered-negative", "results": []}]} | Shalazary/ruBert-base-sberquad-0.01-len_3-filtered-negative | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T11:37:15+00:00 | []
| []
| TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.01-len_3-filtered-negative
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# ruBert-base-sberquad-0.01-len_3-filtered-negative\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.01-len_3-filtered-negative\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
|
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
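Expressed in code, this configuration corresponds roughly to the following `transformers.BitsAndBytesConfig`; a sketch for reference, not the exact script used:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```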
### Framework versions
- PEFT 0.4.0
| {"library_name": "peft"} | Narednra/MultiQandAtinylalma2 | null | [
"peft",
"safetensors",
"llama",
"region:us"
]
| null | 2024-04-16T11:38:08+00:00 | []
| []
| TAGS
#peft #safetensors #llama #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.4.0"
]
| [
"TAGS\n#peft #safetensors #llama #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.4.0"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_16384_512_34M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4774
- F1 Score: 0.8052
- Accuracy: 0.8064
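The F1 score and accuracy above are standard classification metrics; a `compute_metrics` hook of the kind typically passed to a Hugging Face `Trainer` might look like the sketch below (the averaging mode is an assumption, since the card does not state it):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)  # pick the highest-scoring class per example
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```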
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5566 | 7.41 | 200 | 0.4796 | 0.7589 | 0.7606 |
| 0.4647 | 14.81 | 400 | 0.4520 | 0.7798 | 0.7800 |
| 0.4302 | 22.22 | 600 | 0.4387 | 0.7869 | 0.7874 |
| 0.4029 | 29.63 | 800 | 0.4311 | 0.7919 | 0.7920 |
| 0.3792 | 37.04 | 1000 | 0.4274 | 0.8013 | 0.8015 |
| 0.3605 | 44.44 | 1200 | 0.4283 | 0.8062 | 0.8062 |
| 0.3461 | 51.85 | 1400 | 0.4235 | 0.8042 | 0.8049 |
| 0.3329 | 59.26 | 1600 | 0.4188 | 0.8090 | 0.8090 |
| 0.3234 | 66.67 | 1800 | 0.4212 | 0.8123 | 0.8128 |
| 0.3141 | 74.07 | 2000 | 0.4301 | 0.8118 | 0.8120 |
| 0.3066 | 81.48 | 2200 | 0.4362 | 0.8099 | 0.8107 |
| 0.2988 | 88.89 | 2400 | 0.4351 | 0.8134 | 0.8135 |
| 0.2918 | 96.3 | 2600 | 0.4381 | 0.8125 | 0.8129 |
| 0.2862 | 103.7 | 2800 | 0.4402 | 0.8136 | 0.8139 |
| 0.2802 | 111.11 | 3000 | 0.4382 | 0.8142 | 0.8142 |
| 0.275 | 118.52 | 3200 | 0.4453 | 0.8132 | 0.8141 |
| 0.2686 | 125.93 | 3400 | 0.4603 | 0.8144 | 0.8147 |
| 0.2652 | 133.33 | 3600 | 0.4655 | 0.8153 | 0.8156 |
| 0.26 | 140.74 | 3800 | 0.4591 | 0.8133 | 0.8138 |
| 0.2539 | 148.15 | 4000 | 0.4750 | 0.8164 | 0.8173 |
| 0.2495 | 155.56 | 4200 | 0.5025 | 0.8120 | 0.8130 |
| 0.2457 | 162.96 | 4400 | 0.4712 | 0.8126 | 0.8128 |
| 0.2402 | 170.37 | 4600 | 0.4817 | 0.8112 | 0.8117 |
| 0.2374 | 177.78 | 4800 | 0.4858 | 0.8114 | 0.8117 |
| 0.2315 | 185.19 | 5000 | 0.4907 | 0.8140 | 0.8145 |
| 0.2286 | 192.59 | 5200 | 0.4988 | 0.8115 | 0.8126 |
| 0.2253 | 200.0 | 5400 | 0.4962 | 0.8143 | 0.8147 |
| 0.2208 | 207.41 | 5600 | 0.5171 | 0.8133 | 0.8141 |
| 0.216 | 214.81 | 5800 | 0.5026 | 0.8127 | 0.8132 |
| 0.2124 | 222.22 | 6000 | 0.5138 | 0.8139 | 0.8145 |
| 0.2104 | 229.63 | 6200 | 0.5077 | 0.8105 | 0.8111 |
| 0.2066 | 237.04 | 6400 | 0.5205 | 0.8104 | 0.8110 |
| 0.2035 | 244.44 | 6600 | 0.5098 | 0.8105 | 0.8108 |
| 0.2014 | 251.85 | 6800 | 0.5305 | 0.8133 | 0.8139 |
| 0.1978 | 259.26 | 7000 | 0.5361 | 0.8105 | 0.8113 |
| 0.1961 | 266.67 | 7200 | 0.5381 | 0.8107 | 0.8116 |
| 0.1935 | 274.07 | 7400 | 0.5350 | 0.8127 | 0.8133 |
| 0.1911 | 281.48 | 7600 | 0.5483 | 0.8124 | 0.8130 |
| 0.188 | 288.89 | 7800 | 0.5354 | 0.8113 | 0.8119 |
| 0.1885 | 296.3 | 8000 | 0.5357 | 0.8108 | 0.8114 |
| 0.1861 | 303.7 | 8200 | 0.5574 | 0.8124 | 0.8130 |
| 0.1844 | 311.11 | 8400 | 0.5544 | 0.8131 | 0.8138 |
| 0.1828 | 318.52 | 8600 | 0.5588 | 0.8123 | 0.8129 |
| 0.1815 | 325.93 | 8800 | 0.5412 | 0.8121 | 0.8126 |
| 0.1793 | 333.33 | 9000 | 0.5511 | 0.8091 | 0.8096 |
| 0.1782 | 340.74 | 9200 | 0.5535 | 0.8139 | 0.8144 |
| 0.1774 | 348.15 | 9400 | 0.5552 | 0.8118 | 0.8125 |
| 0.1772 | 355.56 | 9600 | 0.5534 | 0.8117 | 0.8123 |
| 0.1773 | 362.96 | 9800 | 0.5526 | 0.8121 | 0.8126 |
| 0.177 | 370.37 | 10000 | 0.5557 | 0.8113 | 0.8119 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_mouse_1-seqsight_16384_512_34M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_16384_512_34M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
]
| null | 2024-04-16T11:39:24+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_mouse\_1-seqsight\_16384\_512\_34M-L32\_all
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4774
* F1 Score: 0.8052
* Accuracy: 0.8064
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_16384_512_34M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2933
- F1 Score: 0.5728
- Accuracy: 0.5730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6666 | 25.0 | 200 | 0.7157 | 0.5517 | 0.5656 |
| 0.5598 | 50.0 | 400 | 0.8011 | 0.5647 | 0.5645 |
| 0.4709 | 75.0 | 600 | 0.8964 | 0.5679 | 0.5682 |
| 0.4092 | 100.0 | 800 | 0.9655 | 0.5725 | 0.5725 |
| 0.3645 | 125.0 | 1000 | 1.0371 | 0.5802 | 0.5815 |
| 0.332 | 150.0 | 1200 | 1.0881 | 0.5742 | 0.5741 |
| 0.307 | 175.0 | 1400 | 1.1743 | 0.5810 | 0.5810 |
| 0.2839 | 200.0 | 1600 | 1.1553 | 0.5763 | 0.5762 |
| 0.2715 | 225.0 | 1800 | 1.2041 | 0.5684 | 0.5693 |
| 0.2561 | 250.0 | 2000 | 1.2428 | 0.5721 | 0.5720 |
| 0.2441 | 275.0 | 2200 | 1.2144 | 0.5784 | 0.5783 |
| 0.2311 | 300.0 | 2400 | 1.3194 | 0.5847 | 0.5847 |
| 0.2218 | 325.0 | 2600 | 1.2984 | 0.5761 | 0.5762 |
| 0.212 | 350.0 | 2800 | 1.3210 | 0.5776 | 0.5778 |
| 0.2005 | 375.0 | 3000 | 1.3167 | 0.5795 | 0.5794 |
| 0.1924 | 400.0 | 3200 | 1.3203 | 0.5734 | 0.5746 |
| 0.1858 | 425.0 | 3400 | 1.3441 | 0.5726 | 0.5725 |
| 0.1769 | 450.0 | 3600 | 1.4282 | 0.5777 | 0.5799 |
| 0.1709 | 475.0 | 3800 | 1.3460 | 0.5747 | 0.5746 |
| 0.1657 | 500.0 | 4000 | 1.4411 | 0.5786 | 0.5799 |
| 0.1593 | 525.0 | 4200 | 1.4218 | 0.5760 | 0.5762 |
| 0.1531 | 550.0 | 4400 | 1.4388 | 0.5779 | 0.5778 |
| 0.149 | 575.0 | 4600 | 1.4608 | 0.5846 | 0.5847 |
| 0.1436 | 600.0 | 4800 | 1.5210 | 0.5753 | 0.5751 |
| 0.1387 | 625.0 | 5000 | 1.4945 | 0.5728 | 0.5746 |
| 0.135 | 650.0 | 5200 | 1.4683 | 0.5758 | 0.5757 |
| 0.1309 | 675.0 | 5400 | 1.5132 | 0.5778 | 0.5778 |
| 0.1273 | 700.0 | 5600 | 1.5091 | 0.5774 | 0.5773 |
| 0.123 | 725.0 | 5800 | 1.5439 | 0.5752 | 0.5751 |
| 0.1204 | 750.0 | 6000 | 1.5301 | 0.5779 | 0.5783 |
| 0.1169 | 775.0 | 6200 | 1.5173 | 0.5760 | 0.5762 |
| 0.1136 | 800.0 | 6400 | 1.6579 | 0.5741 | 0.5741 |
| 0.1108 | 825.0 | 6600 | 1.5620 | 0.5764 | 0.5762 |
| 0.1078 | 850.0 | 6800 | 1.5851 | 0.5789 | 0.5789 |
| 0.1062 | 875.0 | 7000 | 1.5523 | 0.5811 | 0.5810 |
| 0.1041 | 900.0 | 7200 | 1.6114 | 0.5795 | 0.5794 |
| 0.1007 | 925.0 | 7400 | 1.6153 | 0.5790 | 0.5789 |
| 0.0986 | 950.0 | 7600 | 1.6113 | 0.5805 | 0.5805 |
| 0.0978 | 975.0 | 7800 | 1.6665 | 0.5795 | 0.5794 |
| 0.0949 | 1000.0 | 8000 | 1.6287 | 0.5785 | 0.5783 |
| 0.0941 | 1025.0 | 8200 | 1.6481 | 0.5762 | 0.5762 |
| 0.0927 | 1050.0 | 8400 | 1.6820 | 0.5769 | 0.5767 |
| 0.0907 | 1075.0 | 8600 | 1.6810 | 0.5758 | 0.5757 |
| 0.0902 | 1100.0 | 8800 | 1.6807 | 0.5726 | 0.5725 |
| 0.0884 | 1125.0 | 9000 | 1.7244 | 0.5789 | 0.5789 |
| 0.0883 | 1150.0 | 9200 | 1.6739 | 0.5784 | 0.5783 |
| 0.0875 | 1175.0 | 9400 | 1.6800 | 0.5780 | 0.5778 |
| 0.0874 | 1200.0 | 9600 | 1.6942 | 0.5780 | 0.5778 |
| 0.0849 | 1225.0 | 9800 | 1.7149 | 0.5780 | 0.5778 |
| 0.0858 | 1250.0 | 10000 | 1.7009 | 0.5780 | 0.5778 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_mouse_4-seqsight_16384_512_34M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_16384_512_34M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
]
| null | 2024-04-16T11:40:53+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_mouse\_4-seqsight\_16384\_512\_34M-L32\_all
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2933
* F1 Score: 0.5728
* Accuracy: 0.5730
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Reynaerde-7B
This model is a fine-tuned version of [mistral-community/Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2) on the rebatch/ultrachat_200k_nl_v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1054
## Model description
More information needed
## Intended uses & limitations
More information needed
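Pending details from the authors, the likely use is Dutch instruction-following chat. The sketch below loads the LoRA adapter on top of the base model; the prompt format and generation settings are assumptions, not the authors' recipe:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistral-community/Mistral-7B-v0.2"
adapter_id = "vandeju/Reynaerde-7B"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Plain-text prompt; the chat template used during SFT may differ.
prompt = "Vraag: Wat is de hoofdstad van Nederland?\nAntwoord:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```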
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1485 | 1.0 | 636 | 1.1369 |
| 1.076 | 2.0 | 1273 | 1.0962 |
| 1.0204 | 3.0 | 1909 | 1.0898 |
| 0.9605 | 4.0 | 2546 | 1.0984 |
| 0.9139 | 5.0 | 3180 | 1.1054 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer"], "datasets": ["rebatch/ultrachat_200k_nl_v1"], "base_model": "mistral-community/Mistral-7B-v0.2", "model-index": [{"name": "Reynaerde-7B", "results": []}]} | vandeju/Reynaerde-7B | null | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:rebatch/ultrachat_200k_nl_v1",
"base_model:mistral-community/Mistral-7B-v0.2",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T11:42:53+00:00 | []
| []
| TAGS
#peft #safetensors #mistral #alignment-handbook #trl #sft #generated_from_trainer #dataset-rebatch/ultrachat_200k_nl_v1 #base_model-mistral-community/Mistral-7B-v0.2 #license-apache-2.0 #region-us
| Reynaerde-7B
============
This model is a fine-tuned version of mistral-community/Mistral-7B-v0.2 on the rebatch/ultrachat\_200k\_nl\_v1 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1054
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 3
* eval\_batch\_size: 6
* seed: 42
* distributed\_type: multi-GPU
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 12
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 5
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.39.0.dev0
* Pytorch 2.2.0+cu121
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 6\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 12\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.0+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #mistral #alignment-handbook #trl #sft #generated_from_trainer #dataset-rebatch/ultrachat_200k_nl_v1 #base_model-mistral-community/Mistral-7B-v0.2 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 6\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 12\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.0+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
]
|
text-to-image | null |
## PornMaster
<img src="https://via.placeholder.com/468x300?text=App+Screenshot+Here" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - "Ejaculation" is recommended to be used with lora who have a complete penis appearing in the picture, in order to increase the success rate of a good penis.
2024/01/31:
cum_v3 is trained based on PornMaster-newV1.
cum_v3 adds more training photos (1659 in total) to improve the accuracy of the training caption prompts.
[](https://imagepipeline.io/models/PornMaster?id=b9fc2ba5-1495-4213-84a4-dada595aadc0/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "sd1.5",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "b9fc2ba5-1495-4213-84a4-dada595aadc0",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`:
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
| {"license": "creativeml-openrail-m", "tags": ["imagepipeline", "imagepipeline.io", "text-to-image", "ultra-realistic"], "pinned": false, "pipeline_tag": "text-to-image"} | imagepipeline/PornMaster | null | [
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
]
| null | 2024-04-16T11:44:55+00:00 | []
| []
| TAGS
#imagepipeline #imagepipeline.io #text-to-image #ultra-realistic #license-creativeml-openrail-m #region-us
| PornMaster
----------
<img src="URL alt="Generated on Image Pipeline" style="border-radius: 10px;">
This lora model is uploaded on URL
Model details - "Ejaculation" is recommended to be used with lora who have a complete penis appearing in the picture, in order to increase the success rate of a good penis.
2024/01/31:
cum\_v3 is trained based on PornMaster-newV1.
cum\_v3 adds more training photos (1659 in total) to improve the accuracy of the training caption prompts.
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
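No usage code is provided; as a placeholder, a generic `transformers` snippet is sketched below. That the checkpoint loads with the standard causal-LM classes is an assumption based on the repo tags (`stablelm`, `text-generation`), not something stated by the authors.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cilantro9246/bhfmdqu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```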
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cilantro9246/bhfmdqu | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T11:45:08+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_16384_512_34M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6502
- F1 Score: 0.6921
- Accuracy: 0.6946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4382 | 100.0 | 200 | 1.1878 | 0.6056 | 0.6067 |
| 0.0781 | 200.0 | 400 | 1.6243 | 0.6575 | 0.6611 |
| 0.0362 | 300.0 | 600 | 1.8299 | 0.6518 | 0.6527 |
| 0.022 | 400.0 | 800 | 2.0487 | 0.6360 | 0.6360 |
| 0.0157 | 500.0 | 1000 | 2.1835 | 0.6710 | 0.6736 |
| 0.011 | 600.0 | 1200 | 2.3505 | 0.6595 | 0.6611 |
| 0.0082 | 700.0 | 1400 | 2.4711 | 0.6566 | 0.6569 |
| 0.007 | 800.0 | 1600 | 2.5344 | 0.6275 | 0.6276 |
| 0.0056 | 900.0 | 1800 | 2.5517 | 0.6611 | 0.6611 |
| 0.0055 | 1000.0 | 2000 | 2.4276 | 0.6733 | 0.6736 |
| 0.0047 | 1100.0 | 2200 | 2.4932 | 0.6763 | 0.6778 |
| 0.0039 | 1200.0 | 2400 | 2.6265 | 0.6820 | 0.6820 |
| 0.003 | 1300.0 | 2600 | 2.7688 | 0.6943 | 0.6946 |
| 0.0029 | 1400.0 | 2800 | 2.9599 | 0.6794 | 0.6820 |
| 0.0028 | 1500.0 | 3000 | 2.7376 | 0.6883 | 0.6904 |
| 0.0021 | 1600.0 | 3200 | 2.9462 | 0.6887 | 0.6904 |
| 0.0028 | 1700.0 | 3400 | 2.7748 | 0.6759 | 0.6778 |
| 0.0024 | 1800.0 | 3600 | 2.8725 | 0.6652 | 0.6653 |
| 0.0018 | 1900.0 | 3800 | 2.9466 | 0.6923 | 0.6946 |
| 0.0017 | 2000.0 | 4000 | 2.8414 | 0.6813 | 0.6820 |
| 0.0013 | 2100.0 | 4200 | 3.1584 | 0.6763 | 0.6778 |
| 0.0018 | 2200.0 | 4400 | 2.7699 | 0.6670 | 0.6695 |
| 0.0023 | 2300.0 | 4600 | 2.6547 | 0.6984 | 0.6987 |
| 0.0012 | 2400.0 | 4800 | 3.0406 | 0.6763 | 0.6778 |
| 0.0011 | 2500.0 | 5000 | 3.1928 | 0.6610 | 0.6611 |
| 0.0013 | 2600.0 | 5200 | 3.3060 | 0.6608 | 0.6611 |
| 0.0015 | 2700.0 | 5400 | 2.8897 | 0.6899 | 0.6904 |
| 0.001 | 2800.0 | 5600 | 3.1127 | 0.6931 | 0.6946 |
| 0.0011 | 2900.0 | 5800 | 3.0047 | 0.6902 | 0.6904 |
| 0.0008 | 3000.0 | 6000 | 3.2674 | 0.6776 | 0.6778 |
| 0.0012 | 3100.0 | 6200 | 3.2108 | 0.6513 | 0.6527 |
| 0.0008 | 3200.0 | 6400 | 3.2096 | 0.6778 | 0.6778 |
| 0.001 | 3300.0 | 6600 | 3.2597 | 0.6818 | 0.6820 |
| 0.001 | 3400.0 | 6800 | 3.2342 | 0.7087 | 0.7113 |
| 0.0008 | 3500.0 | 7000 | 3.1988 | 0.6778 | 0.6778 |
| 0.0008 | 3600.0 | 7200 | 3.1834 | 0.6942 | 0.6946 |
| 0.0007 | 3700.0 | 7400 | 3.3311 | 0.6902 | 0.6904 |
| 0.0005 | 3800.0 | 7600 | 3.5113 | 0.6609 | 0.6611 |
| 0.0008 | 3900.0 | 7800 | 3.4407 | 0.6691 | 0.6695 |
| 0.0008 | 4000.0 | 8000 | 3.0628 | 0.6820 | 0.6820 |
| 0.0006 | 4100.0 | 8200 | 3.2045 | 0.6975 | 0.6987 |
| 0.0006 | 4200.0 | 8400 | 3.1550 | 0.6986 | 0.6987 |
| 0.0006 | 4300.0 | 8600 | 3.0976 | 0.6860 | 0.6862 |
| 0.0005 | 4400.0 | 8800 | 3.2861 | 0.7024 | 0.7029 |
| 0.0005 | 4500.0 | 9000 | 3.2153 | 0.7068 | 0.7071 |
| 0.0006 | 4600.0 | 9200 | 3.1732 | 0.7026 | 0.7029 |
| 0.0004 | 4700.0 | 9400 | 3.1779 | 0.6984 | 0.6987 |
| 0.0002 | 4800.0 | 9600 | 3.2811 | 0.7026 | 0.7029 |
| 0.0003 | 4900.0 | 9800 | 3.2876 | 0.6984 | 0.6987 |
| 0.0004 | 5000.0 | 10000 | 3.2985 | 0.7026 | 0.7029 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_mouse_3-seqsight_16384_512_34M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_16384_512_34M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
]
| null | 2024-04-16T11:45:15+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_mouse\_3-seqsight\_16384\_512\_34M-L32\_all
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 3.6502
* F1 Score: 0.6921
* Accuracy: 0.6946
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Hypothetical filename; check the repo's files for the actual checkpoint name.
checkpoint = load_from_hub("rwr20/240416_RR_ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "233.49 +/- 60.07", "name": "mean_reward", "verified": false}]}]}]} | rwr20/240416_RR_ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| null | 2024-04-16T11:47:42+00:00 | []
| []
| TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
]
| [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
]
|
text-to-image | null |
## POV-Blowjob-Creampie-LoRA
<img src="https://via.placeholder.com/468x300?text=App+Screenshot+Here" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - This LoRA is for creating "POV" and "side view" blowjob images with cum-in-mouth functionality as well as deepthroats.
[](https://imagepipeline.io/models/POV-Blowjob-Creampie-LoRA?id=596b9e64-7ab5-4fdd-bfaf-b183892bb8af/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "sd1.5",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "596b9e64-7ab5-4fdd-bfaf-b183892bb8af",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`:
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
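For a quick test of the reference above, a minimal request using only the documented headers and parameters can look roughly like the sketch below. This is illustrative, not an official snippet: the endpoint and field names are taken from the tables on this page, the prompt is a placeholder, and the LoRA id is the one shown earlier.
```python
# Minimal sketch based on the API reference above; all values are placeholders.
import requests

response = requests.post(
    "https://api.imagepipeline.io/sd/text2image/v1",
    headers={"API-Key": "your_api_key", "Content-Type": "application/json"},
    json={
        "model_id": "sd1.5",                                    # base model from the models page
        "prompt": "ultra realistic portrait, cinematic lighting",
        "num_inference_steps": 30,                              # ideal 30-50 per the table
        "guidance_scale": 7.5,                                  # ideal 7.5-12.5 per the table
        "lora_models": "596b9e64-7ab5-4fdd-bfaf-b183892bb8af",  # this LoRA's model_id
        "lora_weights": "0.5",
    },
)
print(response.status_code, response.text)
```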
---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
| {"license": "creativeml-openrail-m", "tags": ["imagepipeline", "imagepipeline.io", "text-to-image", "ultra-realistic"], "pinned": false, "pipeline_tag": "text-to-image"} | imagepipeline/POV-Blowjob-Creampie-LoRA | null | [
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
]
| null | 2024-04-16T11:49:40+00:00 | []
| []
| TAGS
#imagepipeline #imagepipeline.io #text-to-image #ultra-realistic #license-creativeml-openrail-m #region-us
| POV-Blowjob-Creampie-LoRA
-------------------------
<img src="URL alt="Generated on Image Pipeline" style="border-radius: 10px;">
This lora model is uploaded on URL
Model details - This LoRA is for creating "POV" and "side view" blowjob images with cum-in-mouth functionality as well as deepthroats.
# pythia-1b-2024-04-16-13-51-p4yN3
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
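As a rough illustration (not the exact training script), the hyperparameters above map onto 🤗 TRL roughly as follows. The dataset path, LoRA settings and sequence length are assumptions made for the sketch, and argument placement can differ between TRL versions.
```python
# Illustrative sketch only: reproduces the listed hyperparameters with TRL's SFTTrainer.
# Dataset path, LoraConfig values and max_seq_length are assumptions, not from this card.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_model = "EleutherAI/pythia-1b"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

train_dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder data

args = TrainingArguments(
    output_dir="pythia-1b-sft",
    learning_rate=2e-4,             # learning_rate: 0.0002
    per_device_train_batch_size=6,  # train_batch_size: 6
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    gradient_accumulation_steps=2,  # -> total_train_batch_size: 12
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=3,
    seed=42,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),  # assumed values
    dataset_text_field="text",   # assumed field name
    max_seq_length=1024,         # assumed
)
trainer.train()
```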
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "pythia-1b-2024-04-16-13-51-p4yN3", "results": []}]} | frenkd/pythia-1b-2024-04-16-13-51-p4yN3 | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:EleutherAI/pythia-1b",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T11:51:07+00:00 | []
| []
| TAGS
#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-EleutherAI/pythia-1b #license-apache-2.0 #region-us
|
# pythia-1b-2024-04-16-13-51-p4yN3
This model is a fine-tuned version of EleutherAI/pythia-1b on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# pythia-1b-2024-04-16-13-51-p4yN3\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 6\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 12\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-EleutherAI/pythia-1b #license-apache-2.0 #region-us \n",
"# pythia-1b-2024-04-16-13-51-p4yN3\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 6\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 12\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-410m-2024-04-16-13-51-vXQaz
This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "EleutherAI/pythia-410m", "model-index": [{"name": "pythia-410m-2024-04-16-13-51-vXQaz", "results": []}]} | frenkd/pythia-410m-2024-04-16-13-51-vXQaz | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:EleutherAI/pythia-410m",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T11:51:11+00:00 | []
| []
| TAGS
#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-EleutherAI/pythia-410m #license-apache-2.0 #region-us
|
# pythia-410m-2024-04-16-13-51-vXQaz
This model is a fine-tuned version of EleutherAI/pythia-410m on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# pythia-410m-2024-04-16-13-51-vXQaz\n\nThis model is a fine-tuned version of EleutherAI/pythia-410m on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 6\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 12\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-EleutherAI/pythia-410m #license-apache-2.0 #region-us \n",
"# pythia-410m-2024-04-16-13-51-vXQaz\n\nThis model is a fine-tuned version of EleutherAI/pythia-410m on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 6\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 12\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-410m-2024-04-16-13-51-UDGvo
This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "EleutherAI/pythia-410m", "model-index": [{"name": "pythia-410m-2024-04-16-13-51-UDGvo", "results": []}]} | frenkd/pythia-410m-2024-04-16-13-51-UDGvo | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:EleutherAI/pythia-410m",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T11:51:11+00:00 | []
| []
| TAGS
#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-EleutherAI/pythia-410m #license-apache-2.0 #region-us
|
# pythia-410m-2024-04-16-13-51-UDGvo
This model is a fine-tuned version of EleutherAI/pythia-410m on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# pythia-410m-2024-04-16-13-51-UDGvo\n\nThis model is a fine-tuned version of EleutherAI/pythia-410m on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 6\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 12\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-EleutherAI/pythia-410m #license-apache-2.0 #region-us \n",
"# pythia-410m-2024-04-16-13-51-UDGvo\n\nThis model is a fine-tuned version of EleutherAI/pythia-410m on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 6\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 12\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-2024-04-16-13-51-ueg2C
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "microsoft/phi-1_5", "model-index": [{"name": "phi-1_5-2024-04-16-13-51-ueg2C", "results": []}]} | frenkd/phi-1_5-2024-04-16-13-51-ueg2C | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/phi-1_5",
"license:mit",
"region:us"
]
| null | 2024-04-16T11:51:15+00:00 | []
| []
| TAGS
#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-1_5 #license-mit #region-us
|
# phi-1_5-2024-04-16-13-51-ueg2C
This model is a fine-tuned version of microsoft/phi-1_5 on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# phi-1_5-2024-04-16-13-51-ueg2C\n\nThis model is a fine-tuned version of microsoft/phi-1_5 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-1_5 #license-mit #region-us \n",
"# phi-1_5-2024-04-16-13-51-ueg2C\n\nThis model is a fine-tuned version of microsoft/phi-1_5 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
|
null | null |
# Antler-7B-RP-v2-GGUF
## Overview
This is a quantized GGUF version of [Aratako/Antler-7B-RP](https://huggingface.co/Aratako/Antler-7B-RP-v2). Please check the original model for details such as the license. | {"language": ["ja"], "license": "apache-2.0", "tags": ["not-for-all-audiences", "nsfw"], "datasets": ["grimulkan/LimaRP-augmented", "Aratako/Rosebleu-1on1-Dialogues-RP"], "base_model": ["Aratako/Antler-7B-RP-v2"]} | Aratako/Antler-7B-RP-v2-GGUF | null | [
"gguf",
"not-for-all-audiences",
"nsfw",
"ja",
"dataset:grimulkan/LimaRP-augmented",
"dataset:Aratako/Rosebleu-1on1-Dialogues-RP",
"base_model:Aratako/Antler-7B-RP-v2",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T11:53:24+00:00 | []
| [
"ja"
]
| TAGS
#gguf #not-for-all-audiences #nsfw #ja #dataset-grimulkan/LimaRP-augmented #dataset-Aratako/Rosebleu-1on1-Dialogues-RP #base_model-Aratako/Antler-7B-RP-v2 #license-apache-2.0 #region-us
|
# Antler-7B-RP-v2-GGUF
## Overview
This is a quantized GGUF version of Aratako/Antler-7B-RP. Please check the original model for details such as the license. | [
"# Antler-7B-RP-v2-GGUF",
"## 概要\nAratako/Antler-7B-RPの量子化済みGGUF版です。ライセンス等詳細は元モデルをご確認ください。"
]
| [
"TAGS\n#gguf #not-for-all-audiences #nsfw #ja #dataset-grimulkan/LimaRP-augmented #dataset-Aratako/Rosebleu-1on1-Dialogues-RP #base_model-Aratako/Antler-7B-RP-v2 #license-apache-2.0 #region-us \n",
"# Antler-7B-RP-v2-GGUF",
"## 概要\nAratako/Antler-7B-RPの量子化済みGGUF版です。ライセンス等詳細は元モデルをご確認ください。"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_envs_claim
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.0a0+29c30b1
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "TheBloke/zephyr-7B-alpha-GPTQ", "model-index": [{"name": "mistral_envs_claim", "results": []}]} | Haimee/mistral_envs_claim | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
]
| null | 2024-04-16T11:53:31+00:00 | []
| []
| TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/zephyr-7B-alpha-GPTQ #license-mit #region-us
|
# mistral_envs_claim
This model is a fine-tuned version of TheBloke/zephyr-7B-alpha-GPTQ on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.0a0+29c30b1
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# mistral_envs_claim\n\nThis model is a fine-tuned version of TheBloke/zephyr-7B-alpha-GPTQ on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 5\n- total_train_batch_size: 40\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 500\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.0a0+29c30b1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/zephyr-7B-alpha-GPTQ #license-mit #region-us \n",
"# mistral_envs_claim\n\nThis model is a fine-tuned version of TheBloke/zephyr-7B-alpha-GPTQ on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 5\n- total_train_batch_size: 40\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 500\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.0a0+29c30b1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
|
null | null |
### How to use
###
```
from transformers import LlamaTokenizerFast
tokenizer = LlamaTokenizerFast.from_pretrained("mimir-project/tokenizer", token=True)
```
or
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mimir-project/tokenizer", token=True)
```
Copied from https://github.com/SmartmediaAI/MIMIR-project/tree/main
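As a quick illustration (not part of the copied snippet above), the loaded tokenizer supports the standard 🤗 encode/decode API; the input sentence here is arbitrary:
```
# Illustrative encode/decode round-trip with the loaded tokenizer.
text = "Example sentence to tokenize."
encoding = tokenizer(text)
print(encoding["input_ids"])                                    # token ids
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))   # subword pieces
print(tokenizer.decode(encoding["input_ids"]))                  # back to text
```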
| {"license": "apache-2.0"} | mimir-project/mimir-tokenizer-base | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T11:54:44+00:00 | []
| []
| TAGS
#license-apache-2.0 #region-us
|
### How to use
###
or
Copied from URL
| [
"### How to use",
"### \n\nor\n\n\nCopied from URL"
]
| [
"TAGS\n#license-apache-2.0 #region-us \n",
"### How to use",
"### \n\nor\n\n\nCopied from URL"
]
|
text-generation | transformers |
# rinna-3.6-dare0
rinna-3.6-dare0 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [rinna/japanese-gpt-neox-3.6b-instruction-sft-v2](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2)
* [rinna/japanese-gpt-neox-3.6b](https://huggingface.co/rinna/japanese-gpt-neox-3.6b)
## 🧩 Configuration
```yaml
slices:
- sources:
- layer_range: [0, 24]
model: rinna/japanese-gpt-neox-3.6b-instruction-sft-v2
parameters:
density: [1, 0.7, 0.1]
weight: 1.0
- layer_range: [0, 24]
model: rinna/japanese-gpt-neox-3.6b
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: dare_ties
base_model: rinna/japanese-gpt-neox-3.6b-instruction-sft-v2
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aipib/line-3.6b-dare1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "rinna/japanese-gpt-neox-3.6b-instruction-sft-v2", "rinna/japanese-gpt-neox-3.6b"], "base_model": ["rinna/japanese-gpt-neox-3.6b-instruction-sft-v2", "rinna/japanese-gpt-neox-3.6b"]} | aipib/rinna-3.6-dare0 | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"rinna/japanese-gpt-neox-3.6b-instruction-sft-v2",
"rinna/japanese-gpt-neox-3.6b",
"base_model:rinna/japanese-gpt-neox-3.6b-instruction-sft-v2",
"base_model:rinna/japanese-gpt-neox-3.6b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-16T11:55:19+00:00 | []
| []
| TAGS
#transformers #safetensors #gpt_neox #text-generation #merge #mergekit #lazymergekit #rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 #rinna/japanese-gpt-neox-3.6b #base_model-rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 #base_model-rinna/japanese-gpt-neox-3.6b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# rinna-3.6-dare0
rinna-3.6-dare0 is a merge of the following models using LazyMergekit:
* rinna/japanese-gpt-neox-3.6b-instruction-sft-v2
* rinna/japanese-gpt-neox-3.6b
## Configuration
## Usage
| [
"# rinna-3.6-dare0\n\nline-3.6b-dare1 is a merge of the following models using LazyMergekit:\n* rinna/japanese-gpt-neox-3.6b-instruction-sft-v2\n* rinna/japanese-gpt-neox-3.6b",
"## Configuration",
"## Usage"
]
| [
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #merge #mergekit #lazymergekit #rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 #rinna/japanese-gpt-neox-3.6b #base_model-rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 #base_model-rinna/japanese-gpt-neox-3.6b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# rinna-3.6-dare0\n\nline-3.6b-dare1 is a merge of the following models using LazyMergekit:\n* rinna/japanese-gpt-neox-3.6b-instruction-sft-v2\n* rinna/japanese-gpt-neox-3.6b",
"## Configuration",
"## Usage"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# working
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1536
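This repository holds PEFT (LoRA) adapter weights rather than full model weights, so a minimal loading sketch looks roughly like the following. The repository ids come from this card; the dtype, device placement and example prompt are assumptions.
```python
# Rough sketch: attach the adapter from this repo to the CodeLlama-7b base model.
# dtype/device settings and the prompt are illustrative assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = PeftModel.from_pretrained(base, "Surabhi-K/working")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```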
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 15
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0255 | 1.0 | 63 | 0.5661 |
| 0.3616 | 2.0 | 126 | 0.3047 |
| 0.1979 | 3.0 | 189 | 0.2129 |
| 0.1565 | 4.0 | 252 | 0.1817 |
| 0.1409 | 5.0 | 315 | 0.1644 |
| 0.1319 | 6.0 | 378 | 0.1561 |
| 0.1277 | 7.0 | 441 | 0.1536 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.2 | {"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "codellama/CodeLlama-7b-hf", "model-index": [{"name": "working", "results": []}]} | Surabhi-K/working | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
]
| null | 2024-04-16T11:55:22+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us
| working
=======
This model is a fine-tuned version of codellama/CodeLlama-7b-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1536
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 3
* eval\_batch\_size: 3
* seed: 42
* gradient\_accumulation\_steps: 5
* total\_train\_batch\_size: 15
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 20
* num\_epochs: 7
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.36.2
* Pytorch 2.1.2
* Datasets 2.15.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* gradient\\_accumulation\\_steps: 5\n* total\\_train\\_batch\\_size: 15\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 20\n* num\\_epochs: 7\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.15.0\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* gradient\\_accumulation\\_steps: 5\n* total\\_train\\_batch\\_size: 15\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 20\n* num\\_epochs: 7\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.15.0\n* Tokenizers 0.15.2"
]
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [cyh002/DISTILBERT-IMDB-HUGGINGFACE](https://huggingface.co/cyh002/DISTILBERT-IMDB-HUGGINGFACE) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "cyh002/DISTILBERT-IMDB-HUGGINGFACE", "model-index": [{"name": "results", "results": []}]} | cyh002/results | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:cyh002/DISTILBERT-IMDB-HUGGINGFACE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T11:56:54+00:00 | []
| []
| TAGS
#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-cyh002/DISTILBERT-IMDB-HUGGINGFACE #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# results
This model is a fine-tuned version of cyh002/DISTILBERT-IMDB-HUGGINGFACE on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# results\n\nThis model is a fine-tuned version of cyh002/DISTILBERT-IMDB-HUGGINGFACE on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
]
| [
"TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-cyh002/DISTILBERT-IMDB-HUGGINGFACE #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# results\n\nThis model is a fine-tuned version of cyh002/DISTILBERT-IMDB-HUGGINGFACE on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
]
|
text-to-image | diffusers |
# SDXL LoRA DreamBooth - kuei1026/3d-icon-sdxl-dora
<Gallery />
## Model description
### These are kuei1026/3d-icon-sdxl-dora LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`3d-icon-sdxl-dora.safetensors` here 💾](/kuei1026/3d-icon-sdxl-dora/blob/main/3d-icon-sdxl-dora.safetensors)**.
    - Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:3d-icon-sdxl-dora:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`3d-icon-sdxl-dora_emb.safetensors` here 💾](/kuei1026/3d-icon-sdxl-dora/blob/main/3d-icon-sdxl-dora_emb.safetensors)**.
    - Place it in your `embeddings` folder
- Use it by adding `3d-icon-sdxl-dora_emb` to your prompt. For example, `3d icon in the style of 3d-icon-sdxl-dora_emb`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kuei1026/3d-icon-sdxl-dora', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='kuei1026/3d-icon-sdxl-dora', filename='3d-icon-sdxl-dora_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/kuei1026/3d-icon-sdxl-dora/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
| {"license": "openrail++", "tags": ["stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "widget": [{"text": "a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>", "output": {"url": "image_0.png"}}, {"text": "a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>", "output": {"url": "image_1.png"}}, {"text": "a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>", "output": {"url": "image_2.png"}}, {"text": "a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>", "output": {"url": "image_3.png"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "3d icon in the style of <s0><s1>"} | kuei1026/3d-icon-sdxl-dora | null | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| null | 2024-04-16T11:58:23+00:00 | []
| []
| TAGS
#diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# SDXL LoRA DreamBooth - kuei1026/3d-icon-sdxl-dora
<Gallery />
## Model description
### These are kuei1026/3d-icon-sdxl-dora LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- LoRA: download '3d-icon-sdxl-dora.safetensors' here .
- Place it on your 'models/Lora' folder.
- On AUTOMATIC1111, load the LoRA by adding '<lora:3d-icon-sdxl-dora:1>' to your prompt. On ComfyUI just load it as a regular LoRA.
- *Embeddings*: download '3d-icon-sdxl-dora_emb.safetensors' here .
  - Place it in your 'embeddings' folder
- Use it by adding '3d-icon-sdxl-dora_emb' to your prompt. For example, '3d icon in the style of 3d-icon-sdxl-dora_emb'
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the diffusers library
For more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens:
to trigger concept 'TOK' → use '<s0><s1>' in your prompt
## Details
All Files & versions.
The weights were trained using diffusers Advanced Dreambooth Training Script.
LoRA for the text encoder was enabled. False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
| [
"# SDXL LoRA DreamBooth - kuei1026/3d-icon-sdxl-dora\n\n<Gallery />",
"## Model description",
"### These are kuei1026/3d-icon-sdxl-dora LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.",
"## Download model",
"### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke\n\n- LoRA: download '3d-icon-sdxl-dora.safetensors' here .\n - Place it on your 'models/Lora' folder.\n - On AUTOMATIC1111, load the LoRA by adding '<lora:3d-icon-sdxl-dora:1>' to your prompt. On ComfyUI just load it as a regular LoRA.\n- *Embeddings*: download '3d-icon-sdxl-dora_emb.safetensors' here .\n - Place it on it on your 'embeddings' folder\n - Use it by adding '3d-icon-sdxl-dora_emb' to your prompt. For example, '3d icon in the style of 3d-icon-sdxl-dora_emb'\n (you need both the LoRA and the embeddings as they were trained together for this LoRA)",
"## Use it with the diffusers library\n\n\n\nFor more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers",
"## Trigger words\n\nTo trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens:\n\nto trigger concept 'TOK' → use '<s0><s1>' in your prompt",
"## Details\nAll Files & versions.\n\nThe weights were trained using diffusers Advanced Dreambooth Training Script.\n\nLoRA for the text encoder was enabled. False.\n\nPivotal tuning was enabled: True.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix."
]
| [
"TAGS\n#diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# SDXL LoRA DreamBooth - kuei1026/3d-icon-sdxl-dora\n\n<Gallery />",
"## Model description",
"### These are kuei1026/3d-icon-sdxl-dora LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.",
"## Download model",
"### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke\n\n- LoRA: download '3d-icon-sdxl-dora.safetensors' here .\n - Place it on your 'models/Lora' folder.\n - On AUTOMATIC1111, load the LoRA by adding '<lora:3d-icon-sdxl-dora:1>' to your prompt. On ComfyUI just load it as a regular LoRA.\n- *Embeddings*: download '3d-icon-sdxl-dora_emb.safetensors' here .\n - Place it on it on your 'embeddings' folder\n - Use it by adding '3d-icon-sdxl-dora_emb' to your prompt. For example, '3d icon in the style of 3d-icon-sdxl-dora_emb'\n (you need both the LoRA and the embeddings as they were trained together for this LoRA)",
"## Use it with the diffusers library\n\n\n\nFor more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers",
"## Trigger words\n\nTo trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens:\n\nto trigger concept 'TOK' → use '<s0><s1>' in your prompt",
"## Details\nAll Files & versions.\n\nThe weights were trained using diffusers Advanced Dreambooth Training Script.\n\nLoRA for the text encoder was enabled. False.\n\nPivotal tuning was enabled: True.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix."
]
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# intent-finetuned-intent-detection
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6938
- Accuracy: 0.8638
- F1: 0.8593
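For a quick check of the classifier, a minimal inference sketch is shown below. The repository id is taken from this card; the example utterance is an arbitrary assumption, and the label names returned depend on the (unspecified) training data.
```python
# Minimal sketch: run the fine-tuned intent classifier via the pipeline API.
from transformers import pipeline

classifier = pipeline("text-classification", model="HowMannyMore/bert-intent-amazon")
print(classifier("play some jazz music in the living room"))
# returns e.g. [{"label": <intent label>, "score": <confidence>}]
```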
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 3.0316 | 1.0 | 180 | 1.7788 | 0.6819 | 0.6352 |
| 1.4515 | 2.0 | 360 | 1.0539 | 0.7956 | 0.7735 |
| 0.9212 | 3.0 | 540 | 0.8143 | 0.8457 | 0.8382 |
| 0.6883 | 4.0 | 720 | 0.7246 | 0.8601 | 0.8544 |
| 0.583 | 5.0 | 900 | 0.6938 | 0.8638 | 0.8593 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "bert-base-cased", "model-index": [{"name": "intent-finetuned-intent-detection", "results": []}]} | HowMannyMore/bert-intent-amazon | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T11:59:50+00:00 | []
| []
| TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| intent-finetuned-intent-detection
=================================
This model is a fine-tuned version of bert-base-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6938
* Accuracy: 0.8638
* F1: 0.8593
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
|
null | null | What is a Urocin capsule?
Urocin tablets are a special dietary supplement carefully formulated to support men's prostate health. Made with a blend of natural ingredients known for their effectiveness in promoting prostate function and urinary tract health, Urocin reviews aims to address common concerns related to prostate problems. Whether it is frequent urination, discomfort, or other symptoms affecting quality of life, Urocin price provides a solution tailored to men's health needs.
Official website:<a href="https://www.nutritionsee.com/Urocbangs">www.Urocin.com</a>
<p><a href="https://www.nutritionsee.com/Urocbangs"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/Urocin-Bangladesh.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/Urocbangs">Why wait!! Click the link below for more information and get 50% off now... Hurry up</a>
Official website:<a href="https://www.nutritionsee.com/Urocbangs">www.Urocin.com</a> | {"license": "apache-2.0"} | UrocinBangladesh/UrocinBangladesh | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T12:00:24+00:00 | []
| []
| TAGS
#license-apache-2.0 #region-us
| What is a Urocin capsule?
Urocin tablets are a special dietary supplement carefully formulated to support men's prostate health. Made with a blend of natural ingredients known for their effectiveness in promoting prostate function and urinary tract health, Urocin reviews aims to address common concerns related to prostate problems. Whether it is frequent urination, discomfort, or other symptoms affecting quality of life, Urocin price provides a solution tailored to men's health needs.
Official website:<a href="URL
<p><a href="URL <img src="URL alt="enter image description here"> </a></p>
<a href="URL>Why wait!! Click the link below for more information and get 50% off now... Hurry up</a>
Official website:<a href="URL | []
| [
"TAGS\n#license-apache-2.0 #region-us \n"
]
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
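Until the authors fill this section in, the sketch below shows one plausible way to load the checkpoint. The repository id comes from this card's metadata; JetMoE support requires a recent 🤗 Transformers release, and the chat formatting, dtype and generation settings are assumptions.

```python
# Illustrative sketch only; not an official example from the model authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "idoru/jetmoe-8b-ultrainteract-sft-v4"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumes the tokenizer ships a chat template; otherwise format the prompt manually.
messages = [{"role": "user", "content": "Explain mixture-of-experts models in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```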
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | idoru/jetmoe-8b-ultrainteract-sft-v4 | null | [
"transformers",
"safetensors",
"jetmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:01:36+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #jetmoe #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #jetmoe #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/maldv/hyperdrive-7b-alpha
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
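For a quick local test, one option is the `llama-cpp-python` bindings. The sketch below is illustrative only: it assumes you have downloaded one of the files from the table below (the Q4_K_M quant is used as the example) and that a plain completion-style prompt is acceptable for this model.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="hyperdrive-7b-alpha.Q4_K_M.gguf",  # any quant from the table below
    n_ctx=4096,        # context window; lower this if you run out of memory
    n_gpu_layers=-1,   # offload all layers to GPU if one is available (0 = CPU only)
)

out = llm("Write the opening paragraph of a space-opera chapter.", max_tokens=128)
print(out["choices"][0]["text"])
```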
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF/resolve/main/hyperdrive-7b-alpha.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["unsloth", "book"], "base_model": "maldv/hyperdrive-7b-alpha", "quantized_by": "mradermacher"} | mradermacher/hyperdrive-7b-alpha-GGUF | null | [
"transformers",
"gguf",
"unsloth",
"book",
"en",
"base_model:maldv/hyperdrive-7b-alpha",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:04:05+00:00 | []
| [
"en"
]
| TAGS
#transformers #gguf #unsloth #book #en #base_model-maldv/hyperdrive-7b-alpha #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| []
| [
"TAGS\n#transformers #gguf #unsloth #book #en #base_model-maldv/hyperdrive-7b-alpha #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n"
]
|
text-generation | transformers |

# 🌟 Checkout [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟
# Model Card for Taiwan LLM 8x7B-DPO
Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
## Model description
- **Model type:** An 8x7B parameter Mixtral MoE model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw)
- **Finetuned from model:** [yentinglin/Taiwan-LLM-MoE-alpha](https://huggingface.co/yentinglin/Taiwan-LLM-MoE-alpha)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/MiuLab/Taiwan-LLaMa
- **Demo:** https://twllm.com/
## Performance
Checkout leaderboard in [Tw Chatbot Arena](https://arena.twllm.com/)
TMMLUS+ score:
- yentinglin/Taiwan-LLM-MoE-alpha: 43.93
- yentinglin/Taiwan-LLM-8x7B-DPO: TBD
## Intended uses
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers>=4.34
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="yentinglin/Taiwan-LLM-8x7B-DPO", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "你是一個人工智慧助理",
},
{"role": "user", "content": "東北季風如何影響台灣氣候?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Citation
If you find Taiwan LLM useful in your work, please cite it with:
```
@misc{lin2023taiwan,
title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model},
author={Yen-Ting Lin and Yun-Nung Chen},
year={2023},
eprint={2311.17487},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": ["zh"], "license": "apache-2.0", "library_name": "transformers", "widget": [{"text": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: \u4f60\u597d\uff0c\u8acb\u554f\u4f60\u53ef\u4ee5\u5e6b\u6211\u5beb\u4e00\u5c01\u63a8\u85a6\u4fe1\u55ce\uff1f ASSISTANT:"}], "pipeline_tag": "text-generation"} | ZoneTwelve/Taiwan-LLM-8x7B-DPO-GGUF | null | [
"transformers",
"gguf",
"text-generation",
"zh",
"arxiv:2311.17487",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:04:49+00:00 | [
"2311.17487"
]
| [
"zh"
]
| TAGS
#transformers #gguf #text-generation #zh #arxiv-2311.17487 #license-apache-2.0 #endpoints_compatible #region-us
|
!image/png
# Checkout Taiwan-LLM Demo Chat-UI
# Model Card for Taiwan LLM 8x7B-DPO
Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
## Model description
- Model type: An 8x7B parameter Mixtral MoE model fine-tuned on a mix of publicly available, synthetic datasets.
- Language(s) (NLP): Primarily Traditional Chinese (zh-tw)
- Finetuned from model: yentinglin/Taiwan-LLM-MoE-alpha
### Model Sources
- Repository: URL
- Demo: URL
## Performance
Checkout leaderboard in Tw Chatbot Arena
TMMLUS+ score:
- yentinglin/Taiwan-LLM-MoE-alpha: 43.93
- yentinglin/Taiwan-LLM-8x7B-DPO: TBD
## Intended uses
Here's how you can run the model using the 'pipeline()' function from Transformers:
If you find Taiwan LLM useful in your work, please cite it with:
| [
"# Checkout Taiwan-LLM Demo Chat-UI",
"# Model Card for Taiwan LLM 8x7B-DPO\n\nTaiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.",
"## Model description\n\n- Model type: A 8x7B parameter Mixtral MoE model fine-tuned on a mix of publicly available, synthetic datasets.\n- Language(s) (NLP): Primarily Traditional Chinese (zh-tw)\n- Finetuned from model: yentinglin/Taiwan-LLM-MoE-alpha",
"### Model Sources\n\n\n\n- Repository: URL\n- Demo: URL",
"## Performance\n\nCheckout leaderboard in Tw Chatbot Arena\n\nTMMLUS+ score: \n- yentinglin/Taiwan-LLM-MoE-alpha: 43.93\n- yentinglin/Taiwan-LLM-8x7B-DPO: TBD",
"## Intended uses\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\n\nIf you find Taiwan LLM useful in your work, please cite it with:"
]
| [
"TAGS\n#transformers #gguf #text-generation #zh #arxiv-2311.17487 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Checkout Taiwan-LLM Demo Chat-UI",
"# Model Card for Taiwan LLM 8x7B-DPO\n\nTaiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.",
"## Model description\n\n- Model type: A 8x7B parameter Mixtral MoE model fine-tuned on a mix of publicly available, synthetic datasets.\n- Language(s) (NLP): Primarily Traditional Chinese (zh-tw)\n- Finetuned from model: yentinglin/Taiwan-LLM-MoE-alpha",
"### Model Sources\n\n\n\n- Repository: URL\n- Demo: URL",
"## Performance\n\nCheckout leaderboard in Tw Chatbot Arena\n\nTMMLUS+ score: \n- yentinglin/Taiwan-LLM-MoE-alpha: 43.93\n- yentinglin/Taiwan-LLM-8x7B-DPO: TBD",
"## Intended uses\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\n\nIf you find Taiwan LLM useful in your work, please cite it with:"
]
|
null | adapter-transformers |
# Adapter `jgrc3/unipelt_adapter_classification_trained` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("jgrc3/unipelt_adapter_classification_trained", source="hf", set_active=True)
```
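Once the adapter and its classification head are active, inference is a plain forward pass. The sketch below is a rough example: the review text is made up, and it assumes the head output exposes `.logits` as in the standard `adapters` classification heads (see the dataset card for the label mapping).

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This product worked exactly as described.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # model with the loaded adapter + head active

predicted_class = outputs.logits.argmax(dim=-1).item()
print(predicted_class)
```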
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_helpfulness"]} | jgrc3/unipelt_adapter_classification_trained | null | [
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_helpfulness",
"region:us"
]
| null | 2024-04-16T12:05:49+00:00 | []
| []
| TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
|
# Adapter 'jgrc3/unipelt_adapter_classification_trained' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
| [
"# Adapter 'jgrc3/unipelt_adapter_classification_trained' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
]
| [
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n",
"# Adapter 'jgrc3/unipelt_adapter_classification_trained' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# working
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 15
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 7
- mixed_precision_training: Native AMP
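
For readers who want to reproduce this setup, a rough `transformers.TrainingArguments` equivalent is sketched below. The `output_dir` is an assumption (it simply reuses the model name), mapping "Native AMP" to `fp16=True` is an interpretation, and the optimizer/scheduler entries above correspond to the Trainer defaults, so they are only spelled out as comments.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="working",              # assumption: matches the model name above
    learning_rate=5e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    gradient_accumulation_steps=5,     # effective batch size 15
    num_train_epochs=7,
    lr_scheduler_type="linear",
    warmup_steps=20,
    seed=42,
    fp16=True,                         # "Native AMP" mixed precision
    # optimizer: AdamW with betas=(0.9, 0.999), eps=1e-8 — the Trainer defaults
)
```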
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0255 | 1.0 | 63 | 0.5661 |
| 0.3616 | 2.0 | 126 | 0.3047 |
| 0.1979 | 3.0 | 189 | 0.2129 |
| 0.1565 | 4.0 | 252 | 0.1817 |
| 0.1409 | 5.0 | 315 | 0.1644 |
| 0.1319 | 6.0 | 378 | 0.1561 |
| 0.1277 | 7.0 | 441 | 0.1536 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.2 | {"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "codellama/CodeLlama-7b-hf", "model-index": [{"name": "working", "results": []}]} | Surabhi-K/code_llama_library2 | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
]
| null | 2024-04-16T12:05:54+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us
| working
=======
This model is a fine-tuned version of codellama/CodeLlama-7b-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1536
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 3
* eval\_batch\_size: 3
* seed: 42
* gradient\_accumulation\_steps: 5
* total\_train\_batch\_size: 15
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 20
* num\_epochs: 7
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.36.2
* Pytorch 2.1.2
* Datasets 2.15.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* gradient\\_accumulation\\_steps: 5\n* total\\_train\\_batch\\_size: 15\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 20\n* num\\_epochs: 7\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.15.0\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* gradient\\_accumulation\\_steps: 5\n* total\\_train\\_batch\\_size: 15\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 20\n* num\\_epochs: 7\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.15.0\n* Tokenizers 0.15.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_modified
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unspecified dataset.
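
Since this repository appears to hold a PEFT adapter (the library tag is `peft`) rather than full model weights, a minimal loading sketch could look like the following. This is an assumption-laden example: it presumes the adapter loads with the standard `peft` API on top of the base model and that the repo id below is the adapter path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

# Attach the fine-tuned adapter weights (assumed to live in this repo)
model = PeftModel.from_pretrained(base, "VikrantRamesh/results_modified")

inputs = tokenizer("Question: What does this adapter do?\nAnswer:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```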
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.38.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "tiiuae/falcon-7b", "model-index": [{"name": "results_modified", "results": []}]} | VikrantRamesh/results_modified | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T12:06:23+00:00 | []
| []
| TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-tiiuae/falcon-7b #license-apache-2.0 #region-us
|
# results_modified
This model is a fine-tuned version of tiiuae/falcon-7b on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.38.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2 | [
"# results_modified\n\nThis model is a fine-tuned version of tiiuae/falcon-7b on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 100",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.38.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-tiiuae/falcon-7b #license-apache-2.0 #region-us \n",
"# results_modified\n\nThis model is a fine-tuned version of tiiuae/falcon-7b on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 100",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.38.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2"
]
|
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training (see the code sketch after this list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
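
For convenience, the same settings can be expressed as a `transformers.BitsAndBytesConfig`; the sketch below is a one-to-one transcription of the list above, and the commented-out loading call is purely illustrative (the base model id is not stated in this card).

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# e.g. AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config)
```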
### Framework versions
- PEFT 0.4.0
| {"library_name": "peft"} | coralexbadea/llama-2-7b-sql | null | [
"peft",
"pytorch",
"llama",
"region:us"
]
| null | 2024-04-16T12:06:36+00:00 | []
| []
| TAGS
#peft #pytorch #llama #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n\n- PEFT 0.4.0"
]
| [
"TAGS\n#peft #pytorch #llama #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n\n- PEFT 0.4.0"
]
|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# idefics2-8b-docvqa-finetuned-tutorial-adding-fake-img-token
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
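
A rough inference sketch is shown below. It assumes the checkpoint contains full model weights loadable with the standard Idefics2 classes and that the base processor is compatible (the repo name suggests an extra image token was added, so adjust accordingly); the document image and question are placeholders.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")  # assumption: base processor is compatible
model = AutoModelForVision2Seq.from_pretrained(
    "nkasmanoff/idefics2-8b-docvqa-finetuned-tutorial-adding-fake-img-token",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "What is the total amount on this document?"}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
image = Image.open("example_document.png")  # placeholder path

inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```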
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceM4/idefics2-8b", "model-index": [{"name": "idefics2-8b-docvqa-finetuned-tutorial-adding-fake-img-token", "results": []}]} | nkasmanoff/idefics2-8b-docvqa-finetuned-tutorial-adding-fake-img-token | null | [
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T12:10:55+00:00 | []
| []
| TAGS
#safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us
|
# idefics2-8b-docvqa-finetuned-tutorial-adding-fake-img-token
This model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# idefics2-8b-docvqa-finetuned-tutorial-adding-fake-img-token\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
| [
"TAGS\n#safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us \n",
"# idefics2-8b-docvqa-finetuned-tutorial-adding-fake-img-token\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_16384_512_34M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6883
- F1 Score: 0.8321
- Accuracy: 0.8323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3191 | 100.0 | 200 | 0.9408 | 0.7544 | 0.7561 |
| 0.0532 | 200.0 | 400 | 1.3625 | 0.7397 | 0.7409 |
| 0.0236 | 300.0 | 600 | 1.5797 | 0.7743 | 0.7744 |
| 0.0142 | 400.0 | 800 | 1.6838 | 0.7835 | 0.7835 |
| 0.0092 | 500.0 | 1000 | 1.9073 | 0.7707 | 0.7713 |
| 0.0069 | 600.0 | 1200 | 1.8538 | 0.7713 | 0.7713 |
| 0.0057 | 700.0 | 1400 | 1.9216 | 0.7589 | 0.7591 |
| 0.0043 | 800.0 | 1600 | 1.8786 | 0.7896 | 0.7896 |
| 0.0032 | 900.0 | 1800 | 1.9808 | 0.7713 | 0.7713 |
| 0.0029 | 1000.0 | 2000 | 2.1482 | 0.7955 | 0.7957 |
| 0.0023 | 1100.0 | 2200 | 2.1657 | 0.7805 | 0.7805 |
| 0.0022 | 1200.0 | 2400 | 2.2364 | 0.7462 | 0.7470 |
| 0.0017 | 1300.0 | 2600 | 2.4525 | 0.7462 | 0.7470 |
| 0.0018 | 1400.0 | 2800 | 2.2569 | 0.7436 | 0.7439 |
| 0.0017 | 1500.0 | 3000 | 2.2833 | 0.7802 | 0.7805 |
| 0.0015 | 1600.0 | 3200 | 2.1435 | 0.7680 | 0.7683 |
| 0.0014 | 1700.0 | 3400 | 2.2114 | 0.7617 | 0.7622 |
| 0.0012 | 1800.0 | 3600 | 2.2726 | 0.7744 | 0.7744 |
| 0.001 | 1900.0 | 3800 | 2.4597 | 0.7681 | 0.7683 |
| 0.0011 | 2000.0 | 4000 | 2.2313 | 0.7591 | 0.7591 |
| 0.001 | 2100.0 | 4200 | 2.4415 | 0.7713 | 0.7713 |
| 0.0009 | 2200.0 | 4400 | 2.3375 | 0.7618 | 0.7622 |
| 0.001 | 2300.0 | 4600 | 2.4935 | 0.7584 | 0.7591 |
| 0.0008 | 2400.0 | 4800 | 2.5299 | 0.7525 | 0.7530 |
| 0.0008 | 2500.0 | 5000 | 2.3778 | 0.7835 | 0.7835 |
| 0.0008 | 2600.0 | 5200 | 2.4905 | 0.7896 | 0.7896 |
| 0.0007 | 2700.0 | 5400 | 2.3043 | 0.7866 | 0.7866 |
| 0.0008 | 2800.0 | 5600 | 2.2932 | 0.8016 | 0.8018 |
| 0.0007 | 2900.0 | 5800 | 2.2786 | 0.7835 | 0.7835 |
| 0.0005 | 3000.0 | 6000 | 2.4815 | 0.7774 | 0.7774 |
| 0.0005 | 3100.0 | 6200 | 2.4806 | 0.7896 | 0.7896 |
| 0.0005 | 3200.0 | 6400 | 2.4541 | 0.7650 | 0.7652 |
| 0.0004 | 3300.0 | 6600 | 2.6904 | 0.7741 | 0.7744 |
| 0.0005 | 3400.0 | 6800 | 2.4829 | 0.7743 | 0.7744 |
| 0.0004 | 3500.0 | 7000 | 2.6379 | 0.7743 | 0.7744 |
| 0.0004 | 3600.0 | 7200 | 2.6570 | 0.7560 | 0.7561 |
| 0.0004 | 3700.0 | 7400 | 2.6384 | 0.7773 | 0.7774 |
| 0.0006 | 3800.0 | 7600 | 2.3317 | 0.7497 | 0.75 |
| 0.0004 | 3900.0 | 7800 | 2.4712 | 0.7560 | 0.7561 |
| 0.0003 | 4000.0 | 8000 | 2.6606 | 0.7774 | 0.7774 |
| 0.0002 | 4100.0 | 8200 | 2.9574 | 0.7591 | 0.7591 |
| 0.0003 | 4200.0 | 8400 | 2.7157 | 0.7591 | 0.7591 |
| 0.0002 | 4300.0 | 8600 | 2.8213 | 0.7621 | 0.7622 |
| 0.0003 | 4400.0 | 8800 | 2.8124 | 0.7621 | 0.7622 |
| 0.0003 | 4500.0 | 9000 | 2.7275 | 0.7744 | 0.7744 |
| 0.0003 | 4600.0 | 9200 | 2.6961 | 0.7622 | 0.7622 |
| 0.0002 | 4700.0 | 9400 | 2.6964 | 0.7622 | 0.7622 |
| 0.0002 | 4800.0 | 9600 | 2.7804 | 0.7621 | 0.7622 |
| 0.0002 | 4900.0 | 9800 | 2.7767 | 0.7683 | 0.7683 |
| 0.0002 | 5000.0 | 10000 | 2.7617 | 0.7713 | 0.7713 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_mouse_2-seqsight_16384_512_34M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_16384_512_34M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
]
| null | 2024-04-16T12:17:16+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_mouse\_2-seqsight\_16384\_512\_34M-L32\_all
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6883
* F1 Score: 0.8321
* Accuracy: 0.8323
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_16384_512_34M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9933
- F1 Score: 0.6697
- Accuracy: 0.6754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.9633 | 11.11 | 200 | 0.8848 | 0.5525 | 0.6127 |
| 0.83 | 22.22 | 400 | 0.8292 | 0.6114 | 0.6357 |
| 0.746 | 33.33 | 600 | 0.8145 | 0.6225 | 0.6317 |
| 0.688 | 44.44 | 800 | 0.8202 | 0.6294 | 0.6368 |
| 0.6412 | 55.56 | 1000 | 0.8141 | 0.6409 | 0.6451 |
| 0.6039 | 66.67 | 1200 | 0.8174 | 0.6430 | 0.6506 |
| 0.5698 | 77.78 | 1400 | 0.8441 | 0.6500 | 0.6583 |
| 0.5396 | 88.89 | 1600 | 0.8406 | 0.6550 | 0.6572 |
| 0.5132 | 100.0 | 1800 | 0.8789 | 0.6589 | 0.6624 |
| 0.4914 | 111.11 | 2000 | 0.8996 | 0.6607 | 0.6646 |
| 0.4708 | 122.22 | 2200 | 0.9214 | 0.6586 | 0.6600 |
| 0.4551 | 133.33 | 2400 | 0.9033 | 0.6558 | 0.6576 |
| 0.4397 | 144.44 | 2600 | 0.9310 | 0.6599 | 0.6683 |
| 0.4293 | 155.56 | 2800 | 0.9630 | 0.6603 | 0.6648 |
| 0.4172 | 166.67 | 3000 | 0.9411 | 0.6601 | 0.6642 |
| 0.4054 | 177.78 | 3200 | 0.9609 | 0.6572 | 0.6624 |
| 0.397 | 188.89 | 3400 | 0.9626 | 0.6606 | 0.6613 |
| 0.3862 | 200.0 | 3600 | 0.9832 | 0.6614 | 0.6701 |
| 0.3793 | 211.11 | 3800 | 1.0125 | 0.6604 | 0.6686 |
| 0.3724 | 222.22 | 4000 | 0.9947 | 0.6620 | 0.6668 |
| 0.3626 | 233.33 | 4200 | 1.0136 | 0.6590 | 0.6697 |
| 0.3565 | 244.44 | 4400 | 1.0174 | 0.6556 | 0.6635 |
| 0.3497 | 255.56 | 4600 | 1.0378 | 0.6592 | 0.6651 |
| 0.3428 | 266.67 | 4800 | 1.0349 | 0.6663 | 0.6723 |
| 0.3365 | 277.78 | 5000 | 1.0635 | 0.6591 | 0.6668 |
| 0.3313 | 288.89 | 5200 | 1.0348 | 0.6641 | 0.6710 |
| 0.3249 | 300.0 | 5400 | 1.0750 | 0.6628 | 0.6692 |
| 0.321 | 311.11 | 5600 | 1.0751 | 0.6614 | 0.6694 |
| 0.3165 | 322.22 | 5800 | 1.0725 | 0.6611 | 0.6712 |
| 0.3083 | 333.33 | 6000 | 1.0499 | 0.6621 | 0.6679 |
| 0.3053 | 344.44 | 6200 | 1.0859 | 0.6637 | 0.6703 |
| 0.3023 | 355.56 | 6400 | 1.0693 | 0.6607 | 0.6642 |
| 0.2976 | 366.67 | 6600 | 1.1020 | 0.6612 | 0.6701 |
| 0.295 | 377.78 | 6800 | 1.1052 | 0.6623 | 0.6681 |
| 0.291 | 388.89 | 7000 | 1.0961 | 0.6628 | 0.6721 |
| 0.2881 | 400.0 | 7200 | 1.0630 | 0.6658 | 0.6723 |
| 0.2839 | 411.11 | 7400 | 1.0982 | 0.6647 | 0.6714 |
| 0.2805 | 422.22 | 7600 | 1.1069 | 0.6656 | 0.6716 |
| 0.2785 | 433.33 | 7800 | 1.1141 | 0.6627 | 0.6692 |
| 0.277 | 444.44 | 8000 | 1.1178 | 0.6598 | 0.6646 |
| 0.2744 | 455.56 | 8200 | 1.1204 | 0.6649 | 0.6690 |
| 0.2718 | 466.67 | 8400 | 1.1457 | 0.6625 | 0.6699 |
| 0.2705 | 477.78 | 8600 | 1.1266 | 0.6634 | 0.6701 |
| 0.2685 | 488.89 | 8800 | 1.1351 | 0.6614 | 0.6681 |
| 0.2659 | 500.0 | 9000 | 1.1381 | 0.6629 | 0.6688 |
| 0.2634 | 511.11 | 9200 | 1.1448 | 0.6646 | 0.6712 |
| 0.2635 | 522.22 | 9400 | 1.1374 | 0.6655 | 0.6723 |
| 0.2624 | 533.33 | 9600 | 1.1468 | 0.6655 | 0.6730 |
| 0.2613 | 544.44 | 9800 | 1.1308 | 0.6635 | 0.6697 |
| 0.2607 | 555.56 | 10000 | 1.1358 | 0.6650 | 0.6712 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_16384_512_34M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_16384_512_34M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
]
| null | 2024-04-16T12:17:22+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_splice\_reconstructed-seqsight\_16384\_512\_34M-L32\_all
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9933
* F1 Score: 0.6697
* Accuracy: 0.6754
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_16384_512_34M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8841
- F1 Score: 0.7045
- Accuracy: 0.705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6351 | 12.5 | 200 | 0.6045 | 0.6676 | 0.669 |
| 0.5536 | 25.0 | 400 | 0.5857 | 0.7020 | 0.702 |
| 0.5042 | 37.5 | 600 | 0.5943 | 0.6977 | 0.698 |
| 0.4669 | 50.0 | 800 | 0.6088 | 0.7049 | 0.705 |
| 0.4345 | 62.5 | 1000 | 0.6375 | 0.7041 | 0.704 |
| 0.4122 | 75.0 | 1200 | 0.6370 | 0.7128 | 0.713 |
| 0.3928 | 87.5 | 1400 | 0.6897 | 0.6989 | 0.699 |
| 0.3791 | 100.0 | 1600 | 0.6551 | 0.7111 | 0.711 |
| 0.367 | 112.5 | 1800 | 0.6790 | 0.718 | 0.718 |
| 0.3557 | 125.0 | 2000 | 0.6734 | 0.7140 | 0.714 |
| 0.3463 | 137.5 | 2200 | 0.6849 | 0.7160 | 0.716 |
| 0.3376 | 150.0 | 2400 | 0.7136 | 0.7031 | 0.703 |
| 0.329 | 162.5 | 2600 | 0.6945 | 0.7100 | 0.71 |
| 0.3203 | 175.0 | 2800 | 0.7089 | 0.7071 | 0.707 |
| 0.3108 | 187.5 | 3000 | 0.7328 | 0.7080 | 0.708 |
| 0.3022 | 200.0 | 3200 | 0.7508 | 0.7129 | 0.713 |
| 0.2952 | 212.5 | 3400 | 0.7469 | 0.7220 | 0.722 |
| 0.2875 | 225.0 | 3600 | 0.7303 | 0.7191 | 0.719 |
| 0.2798 | 237.5 | 3800 | 0.7748 | 0.7171 | 0.717 |
| 0.272 | 250.0 | 4000 | 0.7447 | 0.7221 | 0.722 |
| 0.2633 | 262.5 | 4200 | 0.7928 | 0.7198 | 0.72 |
| 0.2586 | 275.0 | 4400 | 0.7671 | 0.7171 | 0.717 |
| 0.2525 | 287.5 | 4600 | 0.7622 | 0.7178 | 0.718 |
| 0.2466 | 300.0 | 4800 | 0.7990 | 0.7191 | 0.719 |
| 0.2412 | 312.5 | 5000 | 0.7839 | 0.7190 | 0.719 |
| 0.2341 | 325.0 | 5200 | 0.8102 | 0.7210 | 0.721 |
| 0.2298 | 337.5 | 5400 | 0.8271 | 0.7241 | 0.724 |
| 0.2247 | 350.0 | 5600 | 0.7923 | 0.7141 | 0.714 |
| 0.2196 | 362.5 | 5800 | 0.8095 | 0.7190 | 0.719 |
| 0.2166 | 375.0 | 6000 | 0.8149 | 0.7119 | 0.712 |
| 0.2125 | 387.5 | 6200 | 0.8627 | 0.7171 | 0.717 |
| 0.2089 | 400.0 | 6400 | 0.8287 | 0.7200 | 0.72 |
| 0.2041 | 412.5 | 6600 | 0.8557 | 0.7151 | 0.715 |
| 0.2001 | 425.0 | 6800 | 0.8594 | 0.7171 | 0.717 |
| 0.1963 | 437.5 | 7000 | 0.8711 | 0.7271 | 0.727 |
| 0.194 | 450.0 | 7200 | 0.8716 | 0.7178 | 0.718 |
| 0.1909 | 462.5 | 7400 | 0.8670 | 0.72 | 0.72 |
| 0.1889 | 475.0 | 7600 | 0.8724 | 0.7121 | 0.712 |
| 0.1862 | 487.5 | 7800 | 0.8612 | 0.7121 | 0.712 |
| 0.1844 | 500.0 | 8000 | 0.8704 | 0.7190 | 0.719 |
| 0.1816 | 512.5 | 8200 | 0.8886 | 0.7231 | 0.723 |
| 0.1803 | 525.0 | 8400 | 0.8909 | 0.7200 | 0.72 |
| 0.178 | 537.5 | 8600 | 0.9070 | 0.7241 | 0.724 |
| 0.1761 | 550.0 | 8800 | 0.9036 | 0.7129 | 0.713 |
| 0.1755 | 562.5 | 9000 | 0.8886 | 0.7201 | 0.72 |
| 0.1739 | 575.0 | 9200 | 0.9016 | 0.7160 | 0.716 |
| 0.1724 | 587.5 | 9400 | 0.8964 | 0.7170 | 0.717 |
| 0.1715 | 600.0 | 9600 | 0.9074 | 0.7201 | 0.72 |
| 0.1719 | 612.5 | 9800 | 0.9134 | 0.7191 | 0.719 |
| 0.1701 | 625.0 | 10000 | 0.9091 | 0.7200 | 0.72 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_0-seqsight_16384_512_34M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_16384_512_34M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
]
| null | 2024-04-16T12:18:48+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_tf\_0-seqsight\_16384\_512\_34M-L32\_all
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8841
* F1 Score: 0.7045
* Accuracy: 0.705
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/dumbo-krillin20 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-16T12:19:10+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
null | transformers |
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
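The card ships no loading snippet, so here is a minimal, hedged sketch using Unsloth's `FastLanguageModel` (the repo id comes from this card's metadata; `max_seq_length` and the prompt are illustrative assumptions):

```python
# Minimal sketch (assumptions: unsloth is installed, a CUDA GPU is available,
# and the upload loads directly with FastLanguageModel).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="codesagar/prompt-guard-classification-v4",  # repo id from metadata
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference kernels

inputs = tokenizer("Classify this prompt:", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```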
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | codesagar/prompt-guard-classification-v4 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:19:59+00:00 | []
| [
"en"
]
| TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
| [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
|
null | transformers |
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
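Since this card's text is identical to the classification variant, a hedged alternative sketch with plain 🤗 Transformers (assumption: the upload is loadable as a causal LM, with 4-bit loading mirroring the `unsloth/mistral-7b-bnb-4bit` base):

```python
# Minimal sketch (assumptions: the repo contains full causal-LM weights;
# the quantization settings are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "codesagar/prompt-guard-reasoning-v4"  # repo id from metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
```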
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | codesagar/prompt-guard-reasoning-v4 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:20:31+00:00 | []
| [
"en"
]
| TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
| [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
|
null | null | # Arabic NLP
HuggingFace: https://huggingface.co/rakib72642/Arabic_NLP
Install the system dependencies and ngrok, then expose the service:

```bash
sudo apt install iproute2 && sudo apt install wget && sudo apt install unzip && sudo apt install nvtop && sudo apt-get install git-lfs && sudo apt-get update && sudo apt-get install libgl1 && curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | sudo tee /etc/apt/sources.list.d/ngrok.list && sudo apt update && sudo apt install ngrok && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && sudo apt update && sudo apt upgrade && ngrok http --domain=hawkeyes.ngrok.app 8000
```

Clone the repo and start the API:

```bash
# First run: clone, update, and launch
git clone https://huggingface.co/rakib72642/Arabic_NLP && cd Arabic_NLP && sudo apt update && sudo apt upgrade && python updated_api.py

# Subsequent runs from an existing checkout
cd Arabic_NLP && python updated_api.py

# Or serve the app with hypercorn (4 workers)
hypercorn updated_api:app --bind 127.0.0.1:8020 --workers 4
```

Configure the ngrok auth token and expose port 1111:

```bash
ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS
ngrok http --domain=batnlp.ngrok.app 1111
```

---

# Old App

Configure the ngrok auth token and expose port 8020:

```bash
ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS
ngrok http --domain=hawkeyes.ngrok.app 8020
```
| {} | rakib72642/Arabic_NLP | null | [
"region:us"
]
| null | 2024-04-16T12:20:38+00:00 | []
| []
| TAGS
#region-us
| # Arabic NLP
HuggingFace: URL
sudo apt install iproute2 && sudo apt install wget && sudo apt install unzip && sudo apt install nvtop && sudo apt-get install git-lfs && sudo apt-get update && sudo apt-get install libgl1 && curl -s URL | sudo tee /etc/apt/URL.d/URL >/dev/null && echo "deb URL buster main" | sudo tee /etc/apt/URL.d/URL && sudo apt update && sudo apt install ngrok && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && sudo apt update && sudo apt upgrade && ngrok http --domain=URL 8000
git clone URL && cd Arabic_NLP && sudo apt update && sudo apt upgrade && python updated_api.py
cd Arabic_NLP && python updated_api.py
hypercorn updated_api:app --bind 127.0.0.1:8020 --workers 4
config the ngrok auth: ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS
ngrok http --domain=URL 1111
--------------------------------------------------------------------------------------------------------------------------------
# Old App
config the ngrok auth: ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS
ngrok http --domain=URL 8020
| [
"# Arabic NLP \nHuggingFace: URL\n\nsudo apt install iproute2 && sudo apt install wget && sudo apt install unzip && sudo apt install nvtop && sudo apt-get install git-lfs && sudo apt-get update && sudo apt-get install libgl1 && curl -s URL | sudo tee /etc/apt/URL.d/URL >/dev/null && echo \"deb URL buster main\" | sudo tee /etc/apt/URL.d/URL && sudo apt update && sudo apt install ngrok && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && sudo apt update && sudo apt upgrade && ngrok http --domain=URL 8000\n\ngit clone URL && cd Arabic_NLP && sudo apt update && sudo apt upgrade && python updated_api.py\n\ncd Arabic_NLP && python updated_api.py\n\nhypercorn updated_api:app --bind 127.0.0.1:8020 --workers 4\n\n\nconfig the ngrok auth: ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS\n\nngrok http --domain=URL 1111\n\n--------------------------------------------------------------------------------------------------------------------------------",
"# Old App\nconfig the ngrok auth: ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS\n\nngrok http --domain=URL 8020"
]
| [
"TAGS\n#region-us \n",
"# Arabic NLP \nHuggingFace: URL\n\nsudo apt install iproute2 && sudo apt install wget && sudo apt install unzip && sudo apt install nvtop && sudo apt-get install git-lfs && sudo apt-get update && sudo apt-get install libgl1 && curl -s URL | sudo tee /etc/apt/URL.d/URL >/dev/null && echo \"deb URL buster main\" | sudo tee /etc/apt/URL.d/URL && sudo apt update && sudo apt install ngrok && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && sudo apt update && sudo apt upgrade && ngrok http --domain=URL 8000\n\ngit clone URL && cd Arabic_NLP && sudo apt update && sudo apt upgrade && python updated_api.py\n\ncd Arabic_NLP && python updated_api.py\n\nhypercorn updated_api:app --bind 127.0.0.1:8020 --workers 4\n\n\nconfig the ngrok auth: ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS\n\nngrok http --domain=URL 1111\n\n--------------------------------------------------------------------------------------------------------------------------------",
"# Old App\nconfig the ngrok auth: ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS\n\nngrok http --domain=URL 8020"
]
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
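A minimal, hedged sketch while this section is unfilled, assuming this StableLM-architecture checkpoint works with the generic text-generation pipeline (the repo id comes from this card's metadata; sampling values are illustrative):

```python
# Minimal sketch: generic text-generation pipeline over this checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="tom-brady/sn6_214", device_map="auto")
out = generator("The quick brown fox", max_new_tokens=48, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```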
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tom-brady/sn6_214 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:20:51+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
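A minimal, hedged sketch while this section is unfilled, assuming standard causal-LM loading (the repo id comes from this card's metadata; the dtype and prompt are illustrative):

```python
# Minimal sketch (assumptions: standard Llama-style causal-LM loading;
# float16 and the prompt are illustrative, not from the card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jjmstd/ko_arc"  # repo id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Question: what is a model card?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```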
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jjmstd/ko_arc | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-16T12:20:53+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
null | null | HuggingFace: https://huggingface.co/rakib72642/Arabic_OCR
Install the system dependencies and ngrok, then expose the service:

```bash
sudo apt install iproute2 && sudo apt install wget && sudo apt install unzip && sudo apt install nvtop && sudo apt-get install git-lfs && sudo apt-get update && sudo apt-get install libgl1 && curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | sudo tee /etc/apt/sources.list.d/ngrok.list && sudo apt update && sudo apt install ngrok && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && sudo apt update && sudo apt upgrade && ngrok http --domain=hawkeyes.ngrok.app 8000
```

Clone the repo, install requirements, and start the OCR API:

```bash
# First run: clone, install dependencies, and launch
git clone https://huggingface.co/rakib72642/Arabic_OCR && cd Arabic_OCR && pip install -r requirements.txt && sudo apt update && sudo apt upgrade && python ocr_api.py

# Subsequent runs from an existing checkout
cd Arabic_OCR && python ocr_api.py

# Or serve the app with hypercorn (4 workers)
hypercorn ocr_api:app --bind 127.0.0.1:8000 --workers 4
```

Old OCR:

```bash
ngrok config add-authtoken 2Q8xOjna6gvwQRiMTZayN1uEgWy_6uRD8M1b6rZtYMz4yLzAw
ngrok http --domain=dominant-eagerly-deer.ngrok-free.app 8000
```
| {} | rakib72642/Arabic_OCR | null | [
"region:us"
]
| null | 2024-04-16T12:21:05+00:00 | []
| []
| TAGS
#region-us
| HuggingFace: URL
sudo apt install iproute2 && sudo apt install wget && sudo apt install unzip && sudo apt install nvtop && sudo apt-get install git-lfs && sudo apt-get update && sudo apt-get install libgl1 && curl -s URL | sudo tee /etc/apt/URL.d/URL >/dev/null && echo "deb URL buster main" | sudo tee /etc/apt/URL.d/URL && sudo apt update && sudo apt install ngrok && ngrok config add-authtoken 2Qm8hS1zPhVXiLjEdlI4738tLzF_2QJwGJMK5oTbQD33QSVXS && sudo apt update && sudo apt upgrade && ngrok http --domain=URL 8000
git clone URL && cd Arabic_OCR && pip install -r URL && sudo apt update && sudo apt upgrade && python ocr_api.py
cd Arabic_OCR && python ocr_api.py
hypercorn ocr_api:app --bind 127.0.0.1:8000 --workers 4
OLD OCR :
#
ngrok config add-authtoken 2Q8xOjna6gvwQRiMTZayN1uEgWy_6uRD8M1b6rZtYMz4yLzAw
ngrok http --domain=URL 8000 | [
"# \nngrok config add-authtoken 2Q8xOjna6gvwQRiMTZayN1uEgWy_6uRD8M1b6rZtYMz4yLzAw\n\nngrok http --domain=URL 8000"
]
| [
"TAGS\n#region-us \n",
"# \nngrok config add-authtoken 2Q8xOjna6gvwQRiMTZayN1uEgWy_6uRD8M1b6rZtYMz4yLzAw\n\nngrok http --domain=URL 8000"
]
|
text-generation | null |
# NikolayKozloff/tweety-7b-italian-Q8_0-GGUF
This model was converted to GGUF format from [`DTAI-KULeuven/tweety-7b-italian`](https://huggingface.co/DTAI-KULeuven/tweety-7b-italian) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DTAI-KULeuven/tweety-7b-italian) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/tweety-7b-italian-Q8_0-GGUF --model tweety-7b-italian.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/tweety-7b-italian-Q8_0-GGUF --model tweety-7b-italian.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tweety-7b-italian.Q8_0.gguf -n 128
```
| {"language": ["it"], "license": "apache-2.0", "tags": ["pretrained", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "inference": {"parameters": {"temperature": 0.7}}} | NikolayKozloff/tweety-7b-italian-GGUF | null | [
"gguf",
"pretrained",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"it",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T12:22:02+00:00 | []
| [
"it"
]
| TAGS
#gguf #pretrained #llama-cpp #gguf-my-repo #text-generation #it #license-apache-2.0 #region-us
|
# NikolayKozloff/tweety-7b-italian-Q8_0-GGUF
This model was converted to GGUF format from 'DTAI-KULeuven/tweety-7b-italian' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# NikolayKozloff/tweety-7b-italian-Q8_0-GGUF\nThis model was converted to GGUF format from 'DTAI-KULeuven/tweety-7b-italian' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
]
| [
"TAGS\n#gguf #pretrained #llama-cpp #gguf-my-repo #text-generation #it #license-apache-2.0 #region-us \n",
"# NikolayKozloff/tweety-7b-italian-Q8_0-GGUF\nThis model was converted to GGUF format from 'DTAI-KULeuven/tweety-7b-italian' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
]
|
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Undi95/Dawn-v2-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Dawn-v2-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
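The split parts in the table below are plain byte-level splits, so they can be rejoined by simple concatenation before loading. A minimal sketch (filenames taken from the Q6_K row below; any byte-concatenation tool works equally well):

```python
# Minimal sketch: join the split Q6_K parts into one GGUF file by streaming
# byte concatenation (assumes both parts are in the current directory).
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("Dawn-v2-70B.i1-Q6_K.gguf.part*of*"))
with open("Dawn-v2-70B.i1-Q6_K.gguf", "wb") as dst:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, dst)
```

On Unix the one-liner `cat Dawn-v2-70B.i1-Q6_K.gguf.part*of* > Dawn-v2-70B.i1-Q6_K.gguf` does the same thing.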
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Dawn-v2-70B-i1-GGUF/resolve/main/Dawn-v2-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["not-for-all-audiences", "nsfw"], "base_model": "Undi95/Dawn-v2-70B", "quantized_by": "mradermacher"} | mradermacher/Dawn-v2-70B-i1-GGUF | null | [
"transformers",
"gguf",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:Undi95/Dawn-v2-70B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:22:37+00:00 | []
| [
"en"
]
| TAGS
#transformers #gguf #not-for-all-audiences #nsfw #en #base_model-Undi95/Dawn-v2-70B #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| []
| [
"TAGS\n#transformers #gguf #not-for-all-audiences #nsfw #en #base_model-Undi95/Dawn-v2-70B #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n"
]
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
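A minimal, hedged sketch while this section is unfilled, assuming standard causal-LM loading (the repo id comes from this card's metadata; streaming is just one illustrative way to inspect generations):

```python
# Minimal sketch: load the checkpoint and stream generated tokens to stdout.
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "tom-brady/sn6_215"  # repo id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
model.generate(**inputs, max_new_tokens=64, streamer=TextStreamer(tokenizer, skip_prompt=True))
```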
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tom-brady/sn6_215 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:23:45+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - linoyts/B-LoRA_Ukiyo-e
<Gallery />
## Model description
These are linoyts/B-LoRA_Ukiyo-e LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use a [v] to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](linoyts/B-LoRA_Ukiyo-e/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
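Until the TODO above is filled in, here is a minimal, hedged sketch of the usual diffusers LoRA workflow (the fp16 dtype, CUDA device, and example prompt are assumptions; any B-LoRA-specific block filtering is not shown):

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model this LoRA was trained against
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repository
pipeline.load_lora_weights("linoyts/B-LoRA_Ukiyo-e")

# "a [v]" is the documented trigger phrase
image = pipeline("a [v] of a cat").images[0]
image.save("ukiyoe_cat.png")
```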
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a [v]", "widget": []} | linoyts/B-LoRA_Ukiyo-e | null | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| null | 2024-04-16T12:23:55+00:00 | []
| []
| TAGS
#diffusers #tensorboard #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# SDXL LoRA DreamBooth - linoyts/B-LoRA_Ukiyo-e
<Gallery />
## Model description
These are linoyts/B-LoRA_Ukiyo-e LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use a [v] to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# SDXL LoRA DreamBooth - linoyts/B-LoRA_Ukiyo-e\n\n<Gallery />",
"## Model description\n\nThese are linoyts/B-LoRA_Ukiyo-e LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use a [v] to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
]
| [
"TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# SDXL LoRA DreamBooth - linoyts/B-LoRA_Ukiyo-e\n\n<Gallery />",
"## Model description\n\nThese are linoyts/B-LoRA_Ukiyo-e LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use a [v] to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
]
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mixtral-8x7B-orpo-en-de
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the maxidl/distilabel-capybara-dpo-7k-binarized_en_de dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 32
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
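Given the ORPO naming and the alignment-handbook tags, these settings plausibly map onto trl's ORPO trainer; the sketch below shows only that mapping (the output directory is a placeholder, and the model/dataset wiring is left as a comment because it is not documented here):

```python
from trl import ORPOConfig, ORPOTrainer

args = ORPOConfig(
    output_dir="Mixtral-8x7B-orpo-en-de",  # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    lr_scheduler_type="inverse_sqrt",
    warmup_steps=100,
    num_train_epochs=3,
    seed=42,
)
# trainer = ORPOTrainer(model=model, args=args, train_dataset=train_ds, tokenizer=tokenizer)
# trainer.train()
```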
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["alignment-handbook", "generated_from_trainer"], "datasets": ["maxidl/distilabel-capybara-dpo-7k-binarized_en_de"], "base_model": "mistralai/Mixtral-8x7B-v0.1", "model-index": [{"name": "Mixtral-8x7B-orpo-en-de", "results": []}]} | maxidl/Mixtral-8x7B-v0.1-capybara-orpo-en-de | null | [
"transformers",
"tensorboard",
"safetensors",
"mixtral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"conversational",
"dataset:maxidl/distilabel-capybara-dpo-7k-binarized_en_de",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-16T12:25:41+00:00 | []
| []
| TAGS
#transformers #tensorboard #safetensors #mixtral #text-generation #alignment-handbook #generated_from_trainer #conversational #dataset-maxidl/distilabel-capybara-dpo-7k-binarized_en_de #base_model-mistralai/Mixtral-8x7B-v0.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mixtral-8x7B-orpo-en-de
This model is a fine-tuned version of mistralai/Mixtral-8x7B-v0.1 on the maxidl/distilabel-capybara-dpo-7k-binarized_en_de dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 32
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# Mixtral-8x7B-orpo-en-de\n\nThis model is a fine-tuned version of mistralai/Mixtral-8x7B-v0.1 on the maxidl/distilabel-capybara-dpo-7k-binarized_en_de dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 32\n- total_train_batch_size: 32\n- total_eval_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: inverse_sqrt\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
| [
"TAGS\n#transformers #tensorboard #safetensors #mixtral #text-generation #alignment-handbook #generated_from_trainer #conversational #dataset-maxidl/distilabel-capybara-dpo-7k-binarized_en_de #base_model-mistralai/Mixtral-8x7B-v0.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mixtral-8x7B-orpo-en-de\n\nThis model is a fine-tuned version of mistralai/Mixtral-8x7B-v0.1 on the maxidl/distilabel-capybara-dpo-7k-binarized_en_de dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 32\n- total_train_batch_size: 32\n- total_eval_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: inverse_sqrt\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_16384_512_34M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6060
- F1 Score: 0.7550
- Accuracy: 0.756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
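For readers reproducing the run, the list above corresponds roughly to the following transformers `TrainingArguments` (the output directory is a placeholder; the Adam betas and epsilon shown above match the library defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./results",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10000,
)
```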
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6369 | 13.33 | 200 | 0.6299 | 0.6363 | 0.638 |
| 0.5542 | 26.67 | 400 | 0.6137 | 0.6777 | 0.678 |
| 0.5062 | 40.0 | 600 | 0.6270 | 0.6759 | 0.676 |
| 0.4681 | 53.33 | 800 | 0.6577 | 0.6738 | 0.678 |
| 0.4404 | 66.67 | 1000 | 0.6902 | 0.6766 | 0.677 |
| 0.4191 | 80.0 | 1200 | 0.6859 | 0.6750 | 0.675 |
| 0.4006 | 93.33 | 1400 | 0.7224 | 0.6760 | 0.676 |
| 0.3854 | 106.67 | 1600 | 0.7392 | 0.6750 | 0.675 |
| 0.3729 | 120.0 | 1800 | 0.7254 | 0.678 | 0.678 |
| 0.3614 | 133.33 | 2000 | 0.7658 | 0.6678 | 0.668 |
| 0.3516 | 146.67 | 2200 | 0.7913 | 0.6644 | 0.666 |
| 0.3416 | 160.0 | 2400 | 0.7570 | 0.6686 | 0.669 |
| 0.332 | 173.33 | 2600 | 0.7899 | 0.6665 | 0.667 |
| 0.3241 | 186.67 | 2800 | 0.7745 | 0.6718 | 0.672 |
| 0.3137 | 200.0 | 3000 | 0.7895 | 0.6740 | 0.674 |
| 0.3062 | 213.33 | 3200 | 0.8419 | 0.6554 | 0.657 |
| 0.2962 | 226.67 | 3400 | 0.8059 | 0.6647 | 0.665 |
| 0.2907 | 240.0 | 3600 | 0.8259 | 0.6637 | 0.664 |
| 0.2802 | 253.33 | 3800 | 0.8400 | 0.6799 | 0.68 |
| 0.274 | 266.67 | 4000 | 0.8515 | 0.6719 | 0.672 |
| 0.2663 | 280.0 | 4200 | 0.8330 | 0.6710 | 0.671 |
| 0.2592 | 293.33 | 4400 | 0.9000 | 0.6519 | 0.654 |
| 0.2531 | 306.67 | 4600 | 0.8947 | 0.6710 | 0.671 |
| 0.2469 | 320.0 | 4800 | 0.8786 | 0.6730 | 0.673 |
| 0.2411 | 333.33 | 5000 | 0.8729 | 0.6699 | 0.67 |
| 0.2342 | 346.67 | 5200 | 0.9593 | 0.6615 | 0.662 |
| 0.2291 | 360.0 | 5400 | 0.9318 | 0.6749 | 0.675 |
| 0.2239 | 373.33 | 5600 | 0.9188 | 0.6650 | 0.665 |
| 0.2175 | 386.67 | 5800 | 0.9814 | 0.674 | 0.674 |
| 0.2144 | 400.0 | 6000 | 0.9633 | 0.6644 | 0.665 |
| 0.2097 | 413.33 | 6200 | 0.9543 | 0.6677 | 0.668 |
| 0.2057 | 426.67 | 6400 | 0.9512 | 0.6663 | 0.667 |
| 0.2015 | 440.0 | 6600 | 1.0061 | 0.6689 | 0.669 |
| 0.1985 | 453.33 | 6800 | 0.9815 | 0.6710 | 0.671 |
| 0.1956 | 466.67 | 7000 | 0.9962 | 0.6640 | 0.664 |
| 0.1919 | 480.0 | 7200 | 0.9751 | 0.6619 | 0.662 |
| 0.1884 | 493.33 | 7400 | 1.0484 | 0.6690 | 0.669 |
| 0.1846 | 506.67 | 7600 | 1.0257 | 0.6649 | 0.665 |
| 0.1829 | 520.0 | 7800 | 1.0497 | 0.6665 | 0.667 |
| 0.181 | 533.33 | 8000 | 1.0462 | 0.6644 | 0.665 |
| 0.1774 | 546.67 | 8200 | 1.0495 | 0.6566 | 0.657 |
| 0.1762 | 560.0 | 8400 | 1.0406 | 0.6680 | 0.668 |
| 0.1745 | 573.33 | 8600 | 1.0368 | 0.6659 | 0.666 |
| 0.1725 | 586.67 | 8800 | 1.0453 | 0.6709 | 0.671 |
| 0.1709 | 600.0 | 9000 | 1.0549 | 0.6657 | 0.666 |
| 0.1696 | 613.33 | 9200 | 1.0363 | 0.6700 | 0.67 |
| 0.169 | 626.67 | 9400 | 1.0481 | 0.6698 | 0.67 |
| 0.168 | 640.0 | 9600 | 1.0633 | 0.6648 | 0.665 |
| 0.1689 | 653.33 | 9800 | 1.0543 | 0.6658 | 0.666 |
| 0.1666 | 666.67 | 10000 | 1.0535 | 0.6678 | 0.668 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_1-seqsight_16384_512_34M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_16384_512_34M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
]
| null | 2024-04-16T12:27:14+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_tf\_1-seqsight\_16384\_512\_34M-L32\_all
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6060
* F1 Score: 0.7550
* Accuracy: 0.756
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
null | speechbrain |
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Conformer for KsponSpeech (with Transformer LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on KsponSpeech (Kr) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | eval clean CER | eval other CER | GPUs |
| :------: | :------------: | :------------: | :---------: |
| 04-16-24 | 8.20% | 8.99% | 2xA100 40GB |
## Pipeline description
This ASR system is composed of 3 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions of KsponSpeech.
- Neural language model (Transformer LM) trained on the train transcriptions of KsponSpeech
- Acoustic model made of a conformer encoder and a joint decoder with CTC +
transformer. Hence, the decoding also incorporates the CTC probabilities.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
!pip install git+https://github.com/speechbrain/speechbrain.git
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Korean)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="ddwkim/asr-conformer-small-transformerlm-ksponspeech", savedir="pretrained_models/asr-conformer-small-transformerlm-ksponspeech", run_opts={"device":"cuda"})
asr_model.transcribe_file("ddwkim/asr-conformer-small-transformerlm-ksponspeech/record_0_16k.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/1finp9pfmGRzWHCAPNkqAH2yGH6k_BbPA?usp=sharing) on using the pretrained model.
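If you prefer not to open the notebook, a minimal batch-transcription sketch is shown below (the file name is a placeholder; `transcribe_batch` expects padded waveforms plus relative lengths in [0, 1]):

```python
import torch
from speechbrain.pretrained import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(
    source="ddwkim/asr-conformer-small-transformerlm-ksponspeech",
    savedir="pretrained_models/asr-conformer-small-transformerlm-ksponspeech",
)
wav = asr_model.load_audio("example_16k.wav")  # 1-D waveform tensor
wavs = wav.unsqueeze(0)                        # batch of one
wav_lens = torch.tensor([1.0])                 # relative lengths
words, tokens = asr_model.transcribe_batch(wavs, wav_lens)
print(words[0])
```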
### Training
The model was trained with SpeechBrain (Commit hash: '4b3bf60').
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install .
```
3. Run Training:
```bash
cd recipes/KsponSpeech/ASR/transformer
python train.py hparams/conformer_small.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) at the subdirectories.
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
# Citing the model
```bibtex
@misc{,
title = {Conformer small TransformerLM KsponSpeech Korean ASR model},
author = {Dong Won Kim},
year = {2024},
howpublished = {\url{https://huggingface.co/ddwkim/asr-conformer-small-transformerlm-ksponspeech}},
}
```
# Citing KsponSpeech dataset
```bibtex
@Article{app10196936,
AUTHOR = {Bang, Jeong-Uk and Yun, Seung and Kim, Seung-Hi and Choi, Mu-Yeol and Lee, Min-Kyu and Kim, Yeo-Jeong and Kim, Dong-Hyun and Park, Jun and Lee, Young-Jik and Kim, Sang-Hun},
TITLE = {KsponSpeech: Korean Spontaneous Speech Corpus for Automatic Speech Recognition},
JOURNAL = {Applied Sciences},
VOLUME = {10},
YEAR = {2020},
NUMBER = {19},
ARTICLE-NUMBER = {6936},
URL = {https://www.mdpi.com/2076-3417/10/19/6936},
ISSN = {2076-3417},
DOI = {10.3390/app10196936}
}
```
| {"language": "kr", "license": "apache-2.0", "tags": ["ASR", "CTC", "Attention", "Conformer", "pytorch", "speechbrain"], "datasets": ["ksponspeech"], "metrics": ["wer", "cer"]} | ddwkim/asr-conformer-small-transformerlm-ksponspeech | null | [
"speechbrain",
"ASR",
"CTC",
"Attention",
"Conformer",
"pytorch",
"kr",
"dataset:ksponspeech",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T12:27:42+00:00 | [
"2106.04624"
]
| [
"kr"
]
| TAGS
#speechbrain #ASR #CTC #Attention #Conformer #pytorch #kr #dataset-ksponspeech #arxiv-2106.04624 #license-apache-2.0 #region-us
|
Conformer for KsponSpeech (with Transformer LM)
===============================================
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on KsponSpeech (Kr) within
SpeechBrain. For a better experience, we encourage you to learn more about
SpeechBrain.
The performance of the model is the following:
Pipeline description
--------------------
This ASR system is composed of 3 different but linked blocks:
* Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions of KsponSpeech.
* Neural language model (Transformer LM) trained on the train transcriptions of KsponSpeech
* Acoustic model made of a conformer encoder and a joint decoder with CTC +
transformer. Hence, the decoding also incorporates the CTC probabilities.
Install SpeechBrain
-------------------
First of all, please install SpeechBrain with the following command:
Please notice that we encourage you to read our tutorials and learn more about
SpeechBrain.
### Transcribing your own audio files (in Korean)
### Inference on GPU
To perform inference on the GPU, add 'run\_opts={"device":"cuda"}' when calling the 'from\_hparams' method.
Parallel Inference on a Batch
-----------------------------
Please, see this Colab notebook on using the pretrained model
### Training
The model was trained with SpeechBrain (Commit hash: '4b3bf60').
To train it from scratch follow these steps:
1. Clone SpeechBrain:
2. Install it:
3. Run Training:
You can find our training results (models, logs, etc) at the subdirectories.
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
About SpeechBrain
=================
* Website: URL
* Code: URL
* HuggingFace: URL
Citing SpeechBrain
==================
Please, cite SpeechBrain if you use it for your research or business.
Citing the model
================
Citing KsponSpeech dataset
==========================
| [
"### Transcribing your own audio files (in Korean)",
"### Inference on GPU\n\n\nTo perform inference on the GPU, add 'run\\_opts={\"device\":\"cuda\"}' when calling the 'from\\_hparams' method.\n\n\nParallel Inference on a Batch\n-----------------------------\n\n\nPlease, see this Colab notebook on using the pretrained model",
"### Training\n\n\nThe model was trained with SpeechBrain (Commit hash: '4b3bf60').\nTo train it from scratch follow these steps:\n\n\n1. Clone SpeechBrain:\n2. Install it:\n3. Run Training:\n\n\nYou can find our training results (models, logs, etc) at the subdirectories.",
"### Limitations\n\n\nThe SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.\n\n\nAbout SpeechBrain\n=================\n\n\n* Website: URL\n* Code: URL\n* HuggingFace: URL\n\n\nCiting SpeechBrain\n==================\n\n\nPlease, cite SpeechBrain if you use it for your research or business.\n\n\nCiting the model\n================\n\n\nCiting KsponSpeech dataset\n=========================="
]
| [
"TAGS\n#speechbrain #ASR #CTC #Attention #Conformer #pytorch #kr #dataset-ksponspeech #arxiv-2106.04624 #license-apache-2.0 #region-us \n",
"### Transcribing your own audio files (in Korean)",
"### Inference on GPU\n\n\nTo perform inference on the GPU, add 'run\\_opts={\"device\":\"cuda\"}' when calling the 'from\\_hparams' method.\n\n\nParallel Inference on a Batch\n-----------------------------\n\n\nPlease, see this Colab notebook on using the pretrained model",
"### Training\n\n\nThe model was trained with SpeechBrain (Commit hash: '4b3bf60').\nTo train it from scratch follow these steps:\n\n\n1. Clone SpeechBrain:\n2. Install it:\n3. Run Training:\n\n\nYou can find our training results (models, logs, etc) at the subdirectories.",
"### Limitations\n\n\nThe SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.\n\n\nAbout SpeechBrain\n=================\n\n\n* Website: URL\n* Code: URL\n* HuggingFace: URL\n\n\nCiting SpeechBrain\n==================\n\n\nPlease, cite SpeechBrain if you use it for your research or business.\n\n\nCiting the model\n================\n\n\nCiting KsponSpeech dataset\n=========================="
]
|
null | transformers |
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
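A minimal loading sketch using Unsloth's `FastLanguageModel` API is given below; `max_seq_length` and the example prompt are assumptions, since the card does not document a prompt format:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="codesagar/prompt-guard-reasoning-v5",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable inference mode

inputs = tokenizer("Analyze this prompt: ...", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```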
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | codesagar/prompt-guard-reasoning-v5 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:28:54+00:00 | []
| [
"en"
]
| TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
| [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
|
null | transformers |
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | codesagar/prompt-guard-classification-v5 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:29:10+00:00 | []
| [
"en"
]
| TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
| [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
|
text-generation | transformers |
First version of the fine-tuned Llama 2 13B.
Trained on Sinergia's workstation (equipped with an Nvidia RTX 4080 Super).
Train configuration:
```yaml
model_name: "NousResearch/Llama-2-13b-chat-hf"
dataset_name: "sinergiaepc/Insta360_pro2_2024_04_05"
new_model: "sinergiaepc/llama2-7b_2024-04-16"

qlora_parameters:
  r: 64
  lora_alpha: 16
  lora_dropout: 0.1
  bias: "none"
  task_type: "CAUSAL_LM"

bitsandbytes_parameters:
  load_in_4bit: true
  bnb_4bit_compute_dtype: "float16"
  bnb_4bit_quant_type: "nf4"
  bnb_4bit_use_double_quant: false

training_arguments:
  output_dir: "./results"
  num_train_epochs: 20
  fp16: false
  bf16: false
  per_device_train_batch_size: 1
  # per_device_eval_batch_size: 1
  gradient_accumulation_steps: 16
  # gradient_checkpointing: true
  max_grad_norm: 0.3
  learning_rate: 0.0002
  weight_decay: 0.001
  optim: "paged_adamw_32bit"
  lr_scheduler_type: "cosine"
  max_steps: -1
  warmup_ratio: 0.03
  group_by_length: true
  save_steps: 0
  logging_steps: 25
  # report_to: "tensorboard"

sft_parameters:
  max_seq_length: null
  packing: false
```
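For reference, a minimal sketch of how the quantization and adapter settings above map onto the usual transformers/peft QLoRA stack (only the configs are shown; the trainer wiring is not documented here):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# bitsandbytes_parameters
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
)

# qlora_parameters
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)
```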
| {"datasets": ["sinergiaepc/Insta360_pro2_2024_04_05"]} | sinergiaepc/llama2-13b_2024-04-16 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:sinergiaepc/Insta360_pro2_2024_04_05",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-16T12:32:16+00:00 | []
| []
| TAGS
#transformers #pytorch #llama #text-generation #dataset-sinergiaepc/Insta360_pro2_2024_04_05 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
First version of the fine-tuned Llama 2 13B.
Trained on Sinergia's workstation (equipped with an Nvidia RTX 4080 Super).
Train configuration:
-----
model_name: "NousResearch/Llama-2-13b-chat-hf"
dataset_name: "sinergiaepc/Insta360_pro2_2024_04_05"
new_model: "sinergiaepc/llama2-7b_2024-04-16"
qlora_parameters:
r: 64
lora_alpha: 16
lora_dropout: 0.1
bias: "none"
task_type: "CAUSAL_LM"
bitsandbytes_parameters:
load_in_4bit: true
bnb_4bit_compute_dtype: "float16"
bnb_4bit_quant_type: "nf4"
bnb_4bit_use_double_quant: false
training_arguments:
output_dir: "./results"
num_train_epochs: 20
fp16: false
bf16: false
per_device_train_batch_size: 1
# per_device_eval_batch_size: 1
gradient_accumulation_steps: 16
# gradient_checkpointing: true
max_grad_norm: 0.3
learning_rate: 0.0002
weight_decay: 0.001
optim: "paged_adamw_32bit"
lr_scheduler_type: "cosine"
max_steps: -1
warmup_ratio: 0.03
group_by_length: true
save_steps: 0
logging_steps: 25
# report_to: "tensorboard"
sft_parameters:
max_seq_length: null
packing: false
| [
"# per_device_eval_batch_size: 1\n gradient_accumulation_steps: 16\n # gradient_checkpointing: true\n max_grad_norm: 0.3\n learning_rate: 0.0002\n weight_decay: 0.001\n optim: \"paged_adamw_32bit\"\n lr_scheduler_type: \"cosine\"\n max_steps: -1\n warmup_ratio: 0.03\n group_by_length: true\n save_steps: 0\n logging_steps: 25\n # report_to: \"tensorboard\"\n sft_parameters:\n max_seq_length: null\n packing: false"
]
| [
"TAGS\n#transformers #pytorch #llama #text-generation #dataset-sinergiaepc/Insta360_pro2_2024_04_05 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# per_device_eval_batch_size: 1\n gradient_accumulation_steps: 16\n # gradient_checkpointing: true\n max_grad_norm: 0.3\n learning_rate: 0.0002\n weight_decay: 0.001\n optim: \"paged_adamw_32bit\"\n lr_scheduler_type: \"cosine\"\n max_steps: -1\n warmup_ratio: 0.03\n group_by_length: true\n save_steps: 0\n logging_steps: 25\n # report_to: \"tensorboard\"\n sft_parameters:\n max_seq_length: null\n packing: false"
]
|
null | null | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info, check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
    - **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/codegemma-7b-it-GGUF-smashed-smashed and below it, a specific filename to download, such as: codegemma-7b-it.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/codegemma-7b-it-GGUF-smashed-smashed codegemma-7b-it.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/codegemma-7b-it-GGUF-smashed-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/codegemma-7b-it-GGUF-smashed-smashed codegemma-7b-it.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m codegemma-7b-it.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./codegemma-7b-it.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./codegemma-7b-it.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
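For instance, a minimal sketch with the `langchain_community` LlamaCpp wrapper (parameter values mirror the llama-cpp-python example above; the prompt is illustrative):

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./codegemma-7b-it.IQ3_M.gguf",
    n_ctx=32768,
    n_gpu_layers=35,  # set to 0 without GPU acceleration
)
print(llm.invoke("<s>[INST] Write a function that reverses a string. [/INST]"))
```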
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
| {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/codegemma-7b-it-GGUF-smashed | null | [
"gguf",
"pruna-ai",
"region:us"
]
| null | 2024-04-16T12:32:26+00:00 | []
| []
| TAGS
#gguf #pruna-ai #region-us
|
[](URL target=)
:
* Step 1: We recommend using the 'huggingface-hub' Python library:
* Step 2: Then you can download any individual model file to the current directory, at high speed, with a command like this:
More advanced huggingface-cli download usage (click to read)
Alternatively, you can also download multiple files at once with a pattern:
For more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.
To accelerate downloads on fast connections (1Gbit/s or higher), install 'hf\_transfer':
And set environment variable 'HF\_HUB\_ENABLE\_HF\_TRANSFER' to '1':
Windows Command Line users: You can set the environment variable by running 'set HF\_HUB\_ENABLE\_HF\_TRANSFER=1' before the download command.
How to run model in GGUF format?
--------------------------------
* Option A - Introductory example with 'URL' command
Make sure you are using 'URL' from commit d0cee0d or later.
Change '-ngl 35' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change '-c 32768' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the '-p ' argument with '-i -ins'
For other parameters and how to use them, please refer to the URL documentation
* Option B - Running in 'text-generation-webui'
Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.
* Option C - Running from Python code
You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
```
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: llama-cpp-python docs.
#### First install the package
Run one of the following commands, according to your system:
#### Simple llama-cpp-python example code
```
* Option D - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* LangChain + llama-cpp-python
* LangChain + ctransformers
Configurations
--------------
The configuration info is in 'smash\_config.json'.
Credits & License
-----------------
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.
Want to compress other models?
------------------------------
* Contact us and tell us which model to compress next here.
* Request access to easily compress your own AI models here.
| [
"### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.",
"#### First install the package\n\nRun one of the following commands, according to your system:",
"#### Simple llama-cpp-python example code\n\n```\n\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here."
]
| [
"TAGS\n#gguf #pruna-ai #region-us \n",
"### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.",
"#### First install the package\n\nRun one of the following commands, according to your system:",
"#### Simple llama-cpp-python example code\n\n```\n\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here."
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_16384_512_34M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0156
- F1 Score: 0.7200
- Accuracy: 0.723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.62 | 20.0 | 200 | 0.5978 | 0.6991 | 0.7 |
| 0.4708 | 40.0 | 400 | 0.6380 | 0.6949 | 0.699 |
| 0.3853 | 60.0 | 600 | 0.5452 | 0.7745 | 0.775 |
| 0.3257 | 80.0 | 800 | 0.5779 | 0.7765 | 0.778 |
| 0.2883 | 100.0 | 1000 | 0.5605 | 0.7937 | 0.794 |
| 0.2589 | 120.0 | 1200 | 0.5576 | 0.7954 | 0.796 |
| 0.2352 | 140.0 | 1400 | 0.5389 | 0.8079 | 0.808 |
| 0.219 | 160.0 | 1600 | 0.5669 | 0.8167 | 0.817 |
| 0.2 | 180.0 | 1800 | 0.6209 | 0.8105 | 0.811 |
| 0.1885 | 200.0 | 2000 | 0.6233 | 0.8340 | 0.834 |
| 0.1784 | 220.0 | 2200 | 0.6276 | 0.8235 | 0.824 |
| 0.1666 | 240.0 | 2400 | 0.6487 | 0.8175 | 0.818 |
| 0.1577 | 260.0 | 2600 | 0.6007 | 0.8227 | 0.823 |
| 0.152 | 280.0 | 2800 | 0.6748 | 0.8078 | 0.809 |
| 0.1437 | 300.0 | 3000 | 0.6554 | 0.8239 | 0.824 |
| 0.1389 | 320.0 | 3200 | 0.6556 | 0.8238 | 0.824 |
| 0.1346 | 340.0 | 3400 | 0.6645 | 0.8186 | 0.819 |
| 0.127 | 360.0 | 3600 | 0.6732 | 0.8175 | 0.818 |
| 0.1237 | 380.0 | 3800 | 0.6742 | 0.8145 | 0.815 |
| 0.1176 | 400.0 | 4000 | 0.7171 | 0.8237 | 0.824 |
| 0.1139 | 420.0 | 4200 | 0.7174 | 0.8172 | 0.818 |
| 0.1086 | 440.0 | 4400 | 0.6853 | 0.8208 | 0.821 |
| 0.1055 | 460.0 | 4600 | 0.7398 | 0.8136 | 0.814 |
| 0.1029 | 480.0 | 4800 | 0.7304 | 0.8218 | 0.822 |
| 0.0997 | 500.0 | 5000 | 0.7621 | 0.8126 | 0.813 |
| 0.0954 | 520.0 | 5200 | 0.7104 | 0.8197 | 0.82 |
| 0.0929 | 540.0 | 5400 | 0.7762 | 0.8207 | 0.821 |
| 0.0907 | 560.0 | 5600 | 0.7735 | 0.8177 | 0.818 |
| 0.0878 | 580.0 | 5800 | 0.7543 | 0.8197 | 0.82 |
| 0.0856 | 600.0 | 6000 | 0.7789 | 0.8178 | 0.818 |
| 0.0826 | 620.0 | 6200 | 0.8159 | 0.8216 | 0.822 |
| 0.0821 | 640.0 | 6400 | 0.7643 | 0.8289 | 0.829 |
| 0.0792 | 660.0 | 6600 | 0.7688 | 0.8178 | 0.818 |
| 0.0766 | 680.0 | 6800 | 0.7795 | 0.8278 | 0.828 |
| 0.0753 | 700.0 | 7000 | 0.8025 | 0.8217 | 0.822 |
| 0.0742 | 720.0 | 7200 | 0.8226 | 0.8155 | 0.816 |
| 0.0722 | 740.0 | 7400 | 0.8351 | 0.8207 | 0.821 |
| 0.0716 | 760.0 | 7600 | 0.8350 | 0.8176 | 0.818 |
| 0.0699 | 780.0 | 7800 | 0.8172 | 0.8229 | 0.823 |
| 0.0688 | 800.0 | 8000 | 0.8325 | 0.8227 | 0.823 |
| 0.0678 | 820.0 | 8200 | 0.8156 | 0.8278 | 0.828 |
| 0.0665 | 840.0 | 8400 | 0.8068 | 0.8238 | 0.824 |
| 0.0645 | 860.0 | 8600 | 0.8468 | 0.8207 | 0.821 |
| 0.0649 | 880.0 | 8800 | 0.8415 | 0.8197 | 0.82 |
| 0.0641 | 900.0 | 9000 | 0.8521 | 0.8176 | 0.818 |
| 0.063 | 920.0 | 9200 | 0.8559 | 0.8188 | 0.819 |
| 0.0623 | 940.0 | 9400 | 0.8541 | 0.8136 | 0.814 |
| 0.062 | 960.0 | 9600 | 0.8392 | 0.8177 | 0.818 |
| 0.061 | 980.0 | 9800 | 0.8554 | 0.8166 | 0.817 |
| 0.0605 | 1000.0 | 10000 | 0.8533 | 0.8167 | 0.817 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_4-seqsight_16384_512_34M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_16384_512_34M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
]
| null | 2024-04-16T12:32:42+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_tf\_4-seqsight\_16384\_512\_34M-L32\_all
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0156
* F1 Score: 0.7200
* Accuracy: 0.723
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | arianhosseini/sample_gen | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:33:18+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
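
A minimal sketch, assuming a standard Transformers checkpoint; the architecture and task head are not documented, so the Auto classes below are placeholders.

```python
# Minimal sketch, assuming a standard Transformers checkpoint.
# The task-specific head (if any) is undocumented, so AutoModel is a placeholder.
from transformers import AutoModel, AutoTokenizer

repo_id = "arianhosseini/sample_gen"  # this repo; architecture/task not documented
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)  # inspect `outputs` to see what the checkpoint returns
```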
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | arianhosseini/sample_ver | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:33:39+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-car0003-test0.2
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0052
- Accuracy: 1.0
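
A minimal inference sketch, assuming the standard image-classification pipeline; the class labels come from the fine-tuning imagefolder dataset and are not listed here.

```python
# Hedged sketch: image-classification inference with the fine-tuned Swin checkpoint.
# The label set comes from the (undocumented) imagefolder dataset used for fine-tuning.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="tsware/swin-tiny-patch4-window7-224-finetuned-car0003-test0.2",
)
print(classifier("example_car.jpg"))  # hypothetical local image path
```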
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0692 | 1.0 | 56 | 0.0052 | 1.0 |
| 0.0179 | 1.99 | 112 | 0.0015 | 1.0 |
| 0.0163 | 2.99 | 168 | 0.0001 | 1.0 |
| 0.0175 | 4.0 | 225 | 0.0000 | 1.0 |
| 0.0139 | 4.98 | 280 | 0.0001 | 1.0 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "swin-tiny-patch4-window7-224-finetuned-car0003-test0.2", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]} | tsware/swin-tiny-patch4-window7-224-finetuned-car0003-test0.2 | null | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:34:17+00:00 | []
| []
| TAGS
#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| swin-tiny-patch4-window7-224-finetuned-car0003-test0.2
======================================================
This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0052
* Accuracy: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kalai_fine_tuned_model_3
This model is a fine-tuned version of [albert/albert-base-v2](https://huggingface.co/albert/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2730
- Accuracy: 0.9369
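
A minimal inference sketch, assuming the standard text-classification pipeline; the label names are not documented, so check `model.config.id2label` for the mapping.

```python
# Hedged sketch: text-classification inference with the fine-tuned ALBERT checkpoint.
# Label names are not documented in this card; inspect model.config.id2label.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="KalaiselvanD/kalai_fine_tuned_model_3",
)
print(classifier("Example input sentence."))
```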
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 28 | 0.4571 | 0.8739 |
| No log | 2.0 | 56 | 0.2730 | 0.9369 |
| No log | 3.0 | 84 | 0.3916 | 0.9189 |
| No log | 4.0 | 112 | 0.3873 | 0.9189 |
| No log | 5.0 | 140 | 0.4011 | 0.9279 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "albert/albert-base-v2", "model-index": [{"name": "kalai_fine_tuned_model_3", "results": []}]} | KalaiselvanD/kalai_fine_tuned_model_3 | null | [
"transformers",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:34:39+00:00 | []
| []
| TAGS
#transformers #safetensors #albert #text-classification #generated_from_trainer #base_model-albert/albert-base-v2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| kalai\_fine\_tuned\_model\_3
============================
This model is a fine-tuned version of albert/albert-base-v2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2730
* Accuracy: 0.9369
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#transformers #safetensors #albert #text-classification #generated_from_trainer #base_model-albert/albert-base-v2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
|
text-generation | transformers |
# mamba-2.8b-GGUF
Quantized mamba-2.8b models using recent versions of llama.cpp.
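
A minimal sketch with `llama-cpp-python`, assuming a llama.cpp build recent enough to include Mamba support; the GGUF filename is a placeholder for whichever quantization you download from this repo.

```python
# Hedged sketch: loading a quantized mamba-2.8b GGUF with llama-cpp-python.
# Requires a llama.cpp / llama-cpp-python build that includes Mamba support;
# the filename below is a placeholder for the actual file in this repo.
from llama_cpp import Llama

llm = Llama(model_path="mamba-2.8b.Q4_K_M.gguf", n_ctx=2048)
out = llm("The meaning of life is", max_tokens=64)
print(out["choices"][0]["text"])
```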
| {"library_name": "transformers", "model_name": "mamba-2.8b", "base_model": "state-spaces/mamba-2.8b-hf", "pipeline_tag": "text-generation", "model_creator": "state-spaces", "model_type": "MambaForCausalLM", "inference": false} | jpodivin/mamba-2.8b-hf-GGUF | null | [
"transformers",
"gguf",
"text-generation",
"base_model:state-spaces/mamba-2.8b-hf",
"region:us"
]
| null | 2024-04-16T12:35:29+00:00 | []
| []
| TAGS
#transformers #gguf #text-generation #base_model-state-spaces/mamba-2.8b-hf #region-us
|
# mamba-2.8b-GGUF
Quantized mamba-2.8b models using recent versions of URL.
| [
"# mamba-2.8b-GGUF\n\nQuantized mamba-2.8b models using recent versions of URL."
]
| [
"TAGS\n#transformers #gguf #text-generation #base_model-state-spaces/mamba-2.8b-hf #region-us \n",
"# mamba-2.8b-GGUF\n\nQuantized mamba-2.8b models using recent versions of URL."
]
|
text-generation | transformers |
# twewy DialogGPT Model | {"language": ["en"], "license": "mit", "tags": ["conversational", "text-generation-inference", "gpt"], "pipeline_tag": "text-generation"} | dorothylilian/Joshua-Twewy-DialogGPT | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"text-generation-inference",
"gpt",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:36:45+00:00 | []
| [
"en"
]
| TAGS
#transformers #safetensors #gpt2 #text-generation #conversational #text-generation-inference #gpt #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# twewy DialogGPT Model | [
"# twewy DialogGPT Model"
]
| [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #conversational #text-generation-inference #gpt #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# twewy DialogGPT Model"
]
|
null | transformers |
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
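
A minimal loading sketch with Unsloth, assuming the repo holds LoRA weights on top of the base model named above rather than merged weights.

```python
# Hedged sketch: loading with Unsloth, assuming LoRA weights on top of
# unsloth/mistral-7b-bnb-4bit (the base model stated above).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="codesagar/prompt-guard-reasoning-v6",
    max_seq_length=2048,   # assumed; not documented in the card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable faster inference mode
```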
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | codesagar/prompt-guard-reasoning-v6 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:36:51+00:00 | []
| [
"en"
]
| TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
| [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
|
null | transformers |
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | codesagar/prompt-guard-classification-v6 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T12:37:05+00:00 | []
| [
"en"
]
| TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
| [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_16384_512_34M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7878
- F1 Score: 0.6185
- Accuracy: 0.62
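
A minimal loading sketch, assuming the repo holds a PEFT adapter and that a two-label sequence-classification head matches the fine-tuning task.

```python
# Hedged sketch: attaching the PEFT adapter to its base model.
# The exact task head is not documented; a two-label sequence-classification
# head is assumed, and trust_remote_code may be needed for the base model.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_34M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_3-seqsight_16384_512_34M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
```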
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6671 | 14.29 | 200 | 0.6441 | 0.6256 | 0.632 |
| 0.6064 | 28.57 | 400 | 0.6588 | 0.6189 | 0.621 |
| 0.5606 | 42.86 | 600 | 0.6881 | 0.6240 | 0.624 |
| 0.5202 | 57.14 | 800 | 0.6994 | 0.6257 | 0.626 |
| 0.4906 | 71.43 | 1000 | 0.7170 | 0.6204 | 0.621 |
| 0.4659 | 85.71 | 1200 | 0.7241 | 0.6342 | 0.635 |
| 0.4489 | 100.0 | 1400 | 0.7409 | 0.6196 | 0.62 |
| 0.4304 | 114.29 | 1600 | 0.7892 | 0.6311 | 0.631 |
| 0.4164 | 128.57 | 1800 | 0.7860 | 0.6271 | 0.627 |
| 0.4043 | 142.86 | 2000 | 0.8401 | 0.6059 | 0.607 |
| 0.3911 | 157.14 | 2200 | 0.7920 | 0.6148 | 0.615 |
| 0.381 | 171.43 | 2400 | 0.8595 | 0.6208 | 0.621 |
| 0.3707 | 185.71 | 2600 | 0.8268 | 0.6201 | 0.62 |
| 0.3593 | 200.0 | 2800 | 0.8275 | 0.6171 | 0.622 |
| 0.3491 | 214.29 | 3000 | 0.8034 | 0.6300 | 0.631 |
| 0.3392 | 228.57 | 3200 | 0.8760 | 0.6289 | 0.63 |
| 0.3315 | 242.86 | 3400 | 0.8548 | 0.6211 | 0.621 |
| 0.3219 | 257.14 | 3600 | 0.8892 | 0.6326 | 0.633 |
| 0.3126 | 271.43 | 3800 | 0.9051 | 0.6281 | 0.628 |
| 0.3044 | 285.71 | 4000 | 0.8859 | 0.6190 | 0.619 |
| 0.2984 | 300.0 | 4200 | 0.9240 | 0.6125 | 0.614 |
| 0.2896 | 314.29 | 4400 | 0.9343 | 0.6138 | 0.614 |
| 0.2829 | 328.57 | 4600 | 0.9409 | 0.6192 | 0.62 |
| 0.2754 | 342.86 | 4800 | 0.9154 | 0.6231 | 0.623 |
| 0.2715 | 357.14 | 5000 | 0.9546 | 0.6266 | 0.627 |
| 0.2641 | 371.43 | 5200 | 0.9488 | 0.6141 | 0.614 |
| 0.2593 | 385.71 | 5400 | 0.9697 | 0.6101 | 0.61 |
| 0.2519 | 400.0 | 5600 | 0.9611 | 0.6210 | 0.621 |
| 0.2468 | 414.29 | 5800 | 1.0315 | 0.6161 | 0.616 |
| 0.2407 | 428.57 | 6000 | 1.0107 | 0.6167 | 0.617 |
| 0.2381 | 442.86 | 6200 | 0.9996 | 0.6159 | 0.616 |
| 0.2321 | 457.14 | 6400 | 1.0260 | 0.6221 | 0.622 |
| 0.2269 | 471.43 | 6600 | 1.0366 | 0.6231 | 0.623 |
| 0.2246 | 485.71 | 6800 | 1.0313 | 0.6208 | 0.621 |
| 0.2222 | 500.0 | 7000 | 1.0584 | 0.6251 | 0.625 |
| 0.2172 | 514.29 | 7200 | 1.0710 | 0.6220 | 0.622 |
| 0.2142 | 528.57 | 7400 | 1.0519 | 0.6159 | 0.616 |
| 0.2111 | 542.86 | 7600 | 1.0756 | 0.6131 | 0.613 |
| 0.208 | 557.14 | 7800 | 1.0759 | 0.6228 | 0.623 |
| 0.2043 | 571.43 | 8000 | 1.0728 | 0.6151 | 0.615 |
| 0.2032 | 585.71 | 8200 | 1.0778 | 0.6201 | 0.62 |
| 0.2005 | 600.0 | 8400 | 1.0698 | 0.6140 | 0.614 |
| 0.1984 | 614.29 | 8600 | 1.0773 | 0.6201 | 0.62 |
| 0.1977 | 628.57 | 8800 | 1.0857 | 0.6181 | 0.618 |
| 0.1957 | 642.86 | 9000 | 1.0819 | 0.6161 | 0.616 |
| 0.196 | 657.14 | 9200 | 1.0890 | 0.6151 | 0.615 |
| 0.1929 | 671.43 | 9400 | 1.1098 | 0.6101 | 0.61 |
| 0.1919 | 685.71 | 9600 | 1.1006 | 0.6101 | 0.61 |
| 0.1929 | 700.0 | 9800 | 1.0929 | 0.6121 | 0.612 |
| 0.1918 | 714.29 | 10000 | 1.0952 | 0.6111 | 0.611 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_3-seqsight_16384_512_34M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_16384_512_34M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
]
| null | 2024-04-16T12:38:28+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_tf\_3-seqsight\_16384\_512\_34M-L32\_all
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7878
* F1 Score: 0.6185
* Accuracy: 0.62
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
text-generation | transformers |
# Uploaded model
- **Developed by:** khursani8
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
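
A minimal loading sketch with plain Transformers, assuming the repo stores full Gemma weights (as the repo tags suggest) rather than only an adapter.

```python
# Hedged sketch: loading the fine-tuned Gemma checkpoint with plain Transformers.
# Assumes full model weights are stored in this repo (per the repo tags).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "khursani8/model"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain gradient checkpointing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```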
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl", "sft"], "base_model": "unsloth/gemma-2b-bnb-4bit"} | khursani8/model | null | [
"transformers",
"pytorch",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
]
| null | 2024-04-16T12:38:57+00:00 | []
| [
"en"
]
| TAGS
#transformers #pytorch #safetensors #gemma #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #8-bit #region-us
|
# Uploaded model
- Developed by: khursani8
- License: apache-2.0
- Finetuned from model : unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: khursani8\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
| [
"TAGS\n#transformers #pytorch #safetensors #gemma #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #8-bit #region-us \n",
"# Uploaded model\n\n- Developed by: khursani8\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
]
|
null | null |
# cybercheems2077/mistral-maths7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`feeltheAGI/mistral-maths7B`](https://huggingface.co/feeltheAGI/mistral-maths7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/feeltheAGI/mistral-maths7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo cybercheems2077/mistral-maths7B-Q4_K_M-GGUF --model mistral-maths7b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo cybercheems2077/mistral-maths7B-Q4_K_M-GGUF --model mistral-maths7b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-maths7b.Q4_K_M.gguf -n 128
```
| {"license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["microsoft/orca-math-word-problems-200k"]} | cybercheems2077/mistral-maths7B-Q4_K_M-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:microsoft/orca-math-word-problems-200k",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T12:42:41+00:00 | []
| []
| TAGS
#gguf #llama-cpp #gguf-my-repo #dataset-microsoft/orca-math-word-problems-200k #license-apache-2.0 #region-us
|
# cybercheems2077/mistral-maths7B-Q4_K_M-GGUF
This model was converted to GGUF format from 'feeltheAGI/mistral-maths7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# cybercheems2077/mistral-maths7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'feeltheAGI/mistral-maths7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
]
| [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #dataset-microsoft/orca-math-word-problems-200k #license-apache-2.0 #region-us \n",
"# cybercheems2077/mistral-maths7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'feeltheAGI/mistral-maths7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_hh_shp4_dpo9
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3544
- Rewards/chosen: -5.3458
- Rewards/rejected: -4.3841
- Rewards/accuracies: 0.4600
- Rewards/margins: -0.9617
- Logps/rejected: -254.8011
- Logps/chosen: -257.0686
- Logits/rejected: -0.5056
- Logits/chosen: -0.4831
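
A minimal loading sketch, assuming the repo holds a PEFT adapter for the gated meta-llama/Llama-2-7b-chat-hf base model named above.

```python
# Hedged sketch: attaching the DPO-trained PEFT adapter to Llama-2-7b-chat-hf.
# Access to the gated meta-llama base weights is required.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "guoyu-zhang/model_hh_shp4_dpo9"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```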
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0182 | 2.67 | 100 | 1.9417 | 0.0715 | 0.3404 | 0.4200 | -0.2689 | -249.5517 | -251.0494 | -0.5768 | -0.5403 |
| 0.0554 | 5.33 | 200 | 3.7620 | -6.0611 | -5.3910 | 0.4500 | -0.6701 | -255.9199 | -257.8634 | -0.5245 | -0.4755 |
| 0.0475 | 8.0 | 300 | 5.9980 | -19.5304 | -17.4313 | 0.4600 | -2.0991 | -269.2980 | -272.8293 | -0.7793 | -0.7367 |
| 0.0009 | 10.67 | 400 | 4.6897 | 0.9315 | 2.1688 | 0.4300 | -1.2373 | -247.5201 | -250.0939 | -0.5994 | -0.5636 |
| 0.0 | 13.33 | 500 | 4.3775 | -5.3412 | -4.3620 | 0.4600 | -0.9792 | -254.7766 | -257.0635 | -0.5053 | -0.4827 |
| 0.0 | 16.0 | 600 | 4.3248 | -5.2899 | -4.3683 | 0.4600 | -0.9216 | -254.7836 | -257.0065 | -0.5057 | -0.4828 |
| 0.0 | 18.67 | 700 | 4.3972 | -5.3148 | -4.3036 | 0.4600 | -1.0112 | -254.7117 | -257.0341 | -0.5053 | -0.4828 |
| 0.0 | 21.33 | 800 | 4.3504 | -5.3214 | -4.3905 | 0.4500 | -0.9309 | -254.8082 | -257.0416 | -0.5056 | -0.4828 |
| 0.0 | 24.0 | 900 | 4.3824 | -5.3434 | -4.3767 | 0.4600 | -0.9667 | -254.7929 | -257.0659 | -0.5055 | -0.4829 |
| 0.0 | 26.67 | 1000 | 4.3544 | -5.3458 | -4.3841 | 0.4600 | -0.9617 | -254.8011 | -257.0686 | -0.5056 | -0.4831 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_shp4_dpo9", "results": []}]} | guoyu-zhang/model_hh_shp4_dpo9 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
]
| null | 2024-04-16T12:43:14+00:00 | []
| []
| TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
| model\_hh\_shp4\_dpo9
=====================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 4.3544
* Rewards/chosen: -5.3458
* Rewards/rejected: -4.3841
* Rewards/accuracies: 0.4600
* Rewards/margins: -0.9617
* Logps/rejected: -254.8011
* Logps/chosen: -257.0686
* Logits/rejected: -0.5056
* Logits/chosen: -0.4831
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.1
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [unsloth/gemma-7b-bnb-4bit](https://huggingface.co/unsloth/gemma-7b-bnb-4bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 100
- mixed_precision_training: Native AMP
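
As a rough illustration (not the actual training script), the unsloth + TRL SFT setup implied by the tags and the hyperparameters above could be written along the following lines. The LoRA settings, sequence length, and toy dataset are assumptions, and argument names may differ across `unsloth`/`trl` versions.

```python
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Toy one-example dataset so the sketch is self-contained; the real training data is not documented.
train_dataset = Dataset.from_dict({"text": ["### Instruction: Say hi.\n### Response: Hi!"]})

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-7b-bnb-4bit",
    max_seq_length=1024,      # assumption, not stated in the card
    load_in_4bit=True,
)

# LoRA settings below are typical unsloth defaults, not values taken from this card.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=1024,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,   # effective batch size 8
        learning_rate=2e-4,
        lr_scheduler_type="linear",
        warmup_steps=5,
        max_steps=100,
        seed=3407,
        fp16=True,                       # "Native AMP" mixed precision; use bf16 on GPUs that support it
    ),
)
trainer.train()
```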
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "unsloth", "generated_from_trainer"], "base_model": "unsloth/gemma-7b-bnb-4bit", "model-index": [{"name": "outputs", "results": []}]} | deepakdevfocaloid/outputs | null | [
"peft",
"tensorboard",
"safetensors",
"gemma",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"4-bit",
"region:us"
]
| null | 2024-04-16T12:46:51+00:00 | []
| []
| TAGS
#peft #tensorboard #safetensors #gemma #trl #sft #unsloth #generated_from_trainer #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #4-bit #region-us
|
# outputs
This model is a fine-tuned version of unsloth/gemma-7b-bnb-4bit on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# outputs\n\nThis model is a fine-tuned version of unsloth/gemma-7b-bnb-4bit on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 3407\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 5\n- training_steps: 100\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #tensorboard #safetensors #gemma #trl #sft #unsloth #generated_from_trainer #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #4-bit #region-us \n",
"# outputs\n\nThis model is a fine-tuned version of unsloth/gemma-7b-bnb-4bit on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 3407\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 5\n- training_steps: 100\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
|
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
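
While the official snippet is still marked as missing, a generic PEFT-style loading sketch based on the base model and repository IDs from this card's metadata would look roughly like the following; the prompt and generation settings are purely illustrative.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and adapter repository taken from this card's metadata.
base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
model = PeftModel.from_pretrained(base, "PrahmodhRaj/Falcon_CN_Finetuned")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```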
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 | {"library_name": "peft", "base_model": "tiiuae/falcon-7b"} | PrahmodhRaj/Falcon_CN_Finetuned | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b",
"region:us"
]
| null | 2024-04-16T12:49:21+00:00 | [
"1910.09700"
]
| []
| TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-tiiuae/falcon-7b #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
]
| [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-tiiuae/falcon-7b #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
]
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pijarcandra22/NMTIndoBaliT5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0490
- Validation Loss: 2.6202
- Epoch: 498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
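
For reference, the optimizer configuration above corresponds to the `AdamWeightDecay` class shipped with Transformers for Keras/TensorFlow training. A minimal sketch is shown below; the tokenized `tf.data` pipelines are assumed to exist and are not shown.

```python
from transformers import AdamWeightDecay, TFAutoModelForSeq2SeqLM

# Mirrors the optimizer settings listed above.
optimizer = AdamWeightDecay(
    learning_rate=1e-4,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)

model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")
model.compile(optimizer=optimizer)  # the model computes its own loss when labels are provided
# model.fit(train_dataset, validation_data=val_dataset, epochs=...)  # tf.data pipelines assumed
```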
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2881 | 2.6852 | 0 |
| 2.7514 | 2.4004 | 1 |
| 2.5012 | 2.2171 | 2 |
| 2.3252 | 2.0959 | 3 |
| 2.1930 | 1.9901 | 4 |
| 2.0837 | 1.9130 | 5 |
| 1.9912 | 1.8452 | 6 |
| 1.9107 | 1.7974 | 7 |
| 1.8459 | 1.7521 | 8 |
| 1.7902 | 1.7165 | 9 |
| 1.7321 | 1.6842 | 10 |
| 1.6811 | 1.6400 | 11 |
| 1.6374 | 1.6230 | 12 |
| 1.5973 | 1.5960 | 13 |
| 1.5588 | 1.5765 | 14 |
| 1.5244 | 1.5589 | 15 |
| 1.4933 | 1.5370 | 16 |
| 1.4588 | 1.5300 | 17 |
| 1.4325 | 1.5107 | 18 |
| 1.4054 | 1.4970 | 19 |
| 1.3730 | 1.4839 | 20 |
| 1.3475 | 1.4789 | 21 |
| 1.3231 | 1.4616 | 22 |
| 1.3035 | 1.4568 | 23 |
| 1.2768 | 1.4489 | 24 |
| 1.2587 | 1.4396 | 25 |
| 1.2380 | 1.4364 | 26 |
| 1.2208 | 1.4273 | 27 |
| 1.2026 | 1.4228 | 28 |
| 1.1755 | 1.4141 | 29 |
| 1.1614 | 1.4062 | 30 |
| 1.1460 | 1.4060 | 31 |
| 1.1289 | 1.3934 | 32 |
| 1.1134 | 1.4007 | 33 |
| 1.0965 | 1.3927 | 34 |
| 1.0818 | 1.3874 | 35 |
| 1.0661 | 1.3921 | 36 |
| 1.0482 | 1.3795 | 37 |
| 1.0345 | 1.3853 | 38 |
| 1.0195 | 1.3835 | 39 |
| 1.0074 | 1.3772 | 40 |
| 0.9890 | 1.3851 | 41 |
| 0.9833 | 1.3724 | 42 |
| 0.9667 | 1.3740 | 43 |
| 0.9561 | 1.3752 | 44 |
| 0.9429 | 1.3673 | 45 |
| 0.9301 | 1.3828 | 46 |
| 0.9141 | 1.3806 | 47 |
| 0.9050 | 1.3772 | 48 |
| 0.8952 | 1.3812 | 49 |
| 0.8809 | 1.3718 | 50 |
| 0.8725 | 1.3825 | 51 |
| 0.8601 | 1.3842 | 52 |
| 0.8488 | 1.3827 | 53 |
| 0.8375 | 1.3920 | 54 |
| 0.8257 | 1.3936 | 55 |
| 0.8184 | 1.3842 | 56 |
| 0.8081 | 1.3846 | 57 |
| 0.7986 | 1.3860 | 58 |
| 0.7883 | 1.3943 | 59 |
| 0.7787 | 1.4004 | 60 |
| 0.7666 | 1.4071 | 61 |
| 0.7554 | 1.4079 | 62 |
| 0.7470 | 1.4038 | 63 |
| 0.7366 | 1.4141 | 64 |
| 0.7279 | 1.4135 | 65 |
| 0.7250 | 1.4111 | 66 |
| 0.7128 | 1.4196 | 67 |
| 0.7042 | 1.4182 | 68 |
| 0.6946 | 1.4378 | 69 |
| 0.6851 | 1.4350 | 70 |
| 0.6764 | 1.4403 | 71 |
| 0.6695 | 1.4474 | 72 |
| 0.6606 | 1.4454 | 73 |
| 0.6565 | 1.4516 | 74 |
| 0.6450 | 1.4595 | 75 |
| 0.6347 | 1.4700 | 76 |
| 0.6287 | 1.4746 | 77 |
| 0.6183 | 1.4813 | 78 |
| 0.6143 | 1.4785 | 79 |
| 0.6053 | 1.4848 | 80 |
| 0.5994 | 1.4777 | 81 |
| 0.5903 | 1.4962 | 82 |
| 0.5828 | 1.5102 | 83 |
| 0.5760 | 1.4957 | 84 |
| 0.5696 | 1.5121 | 85 |
| 0.5637 | 1.5168 | 86 |
| 0.5578 | 1.5183 | 87 |
| 0.5499 | 1.5184 | 88 |
| 0.5396 | 1.5433 | 89 |
| 0.5345 | 1.5411 | 90 |
| 0.5268 | 1.5338 | 91 |
| 0.5220 | 1.5556 | 92 |
| 0.5184 | 1.5489 | 93 |
| 0.5122 | 1.5635 | 94 |
| 0.5014 | 1.5674 | 95 |
| 0.4921 | 1.5773 | 96 |
| 0.4925 | 1.5773 | 97 |
| 0.4821 | 1.5938 | 98 |
| 0.4769 | 1.6013 | 99 |
| 0.4723 | 1.5979 | 100 |
| 0.4692 | 1.6131 | 101 |
| 0.4603 | 1.6247 | 102 |
| 0.4553 | 1.6276 | 103 |
| 0.4476 | 1.6376 | 104 |
| 0.4401 | 1.6390 | 105 |
| 0.4384 | 1.6442 | 106 |
| 0.4305 | 1.6548 | 107 |
| 0.4263 | 1.6617 | 108 |
| 0.4232 | 1.6523 | 109 |
| 0.4185 | 1.6561 | 110 |
| 0.4129 | 1.6779 | 111 |
| 0.4036 | 1.6897 | 112 |
| 0.4005 | 1.6873 | 113 |
| 0.3948 | 1.6987 | 114 |
| 0.3892 | 1.7120 | 115 |
| 0.3859 | 1.7049 | 116 |
| 0.3795 | 1.7241 | 117 |
| 0.3802 | 1.7273 | 118 |
| 0.3731 | 1.7387 | 119 |
| 0.3672 | 1.7447 | 120 |
| 0.3629 | 1.7513 | 121 |
| 0.3607 | 1.7515 | 122 |
| 0.3543 | 1.7585 | 123 |
| 0.3504 | 1.7601 | 124 |
| 0.3477 | 1.7657 | 125 |
| 0.3453 | 1.7733 | 126 |
| 0.3448 | 1.7718 | 127 |
| 0.3390 | 1.7971 | 128 |
| 0.3352 | 1.7929 | 129 |
| 0.3273 | 1.7988 | 130 |
| 0.3250 | 1.8192 | 131 |
| 0.3222 | 1.8220 | 132 |
| 0.3173 | 1.8289 | 133 |
| 0.3171 | 1.8261 | 134 |
| 0.3124 | 1.8415 | 135 |
| 0.3040 | 1.8379 | 136 |
| 0.3040 | 1.8533 | 137 |
| 0.3030 | 1.8511 | 138 |
| 0.2970 | 1.8537 | 139 |
| 0.2938 | 1.8697 | 140 |
| 0.2929 | 1.8730 | 141 |
| 0.2892 | 1.8632 | 142 |
| 0.2816 | 1.8796 | 143 |
| 0.2812 | 1.8870 | 144 |
| 0.2761 | 1.8891 | 145 |
| 0.2731 | 1.9134 | 146 |
| 0.2698 | 1.9100 | 147 |
| 0.2671 | 1.9207 | 148 |
| 0.2639 | 1.9196 | 149 |
| 0.2621 | 1.9130 | 150 |
| 0.2589 | 1.9273 | 151 |
| 0.2558 | 1.9336 | 152 |
| 0.2545 | 1.9355 | 153 |
| 0.2487 | 1.9551 | 154 |
| 0.2493 | 1.9573 | 155 |
| 0.2449 | 1.9552 | 156 |
| 0.2421 | 1.9591 | 157 |
| 0.2405 | 1.9556 | 158 |
| 0.2367 | 1.9807 | 159 |
| 0.2342 | 1.9859 | 160 |
| 0.2316 | 1.9803 | 161 |
| 0.2281 | 1.9853 | 162 |
| 0.2269 | 1.9970 | 163 |
| 0.2250 | 2.0120 | 164 |
| 0.2236 | 2.0107 | 165 |
| 0.2194 | 2.0208 | 166 |
| 0.2183 | 2.0198 | 167 |
| 0.2168 | 2.0265 | 168 |
| 0.2172 | 2.0278 | 169 |
| 0.2117 | 2.0380 | 170 |
| 0.2078 | 2.0448 | 171 |
| 0.2091 | 2.0415 | 172 |
| 0.2065 | 2.0459 | 173 |
| 0.2027 | 2.0597 | 174 |
| 0.1995 | 2.0659 | 175 |
| 0.1980 | 2.0811 | 176 |
| 0.1971 | 2.0704 | 177 |
| 0.1932 | 2.0785 | 178 |
| 0.1892 | 2.0783 | 179 |
| 0.1924 | 2.0742 | 180 |
| 0.1872 | 2.0979 | 181 |
| 0.1858 | 2.0958 | 182 |
| 0.1853 | 2.1005 | 183 |
| 0.1834 | 2.1166 | 184 |
| 0.1810 | 2.1027 | 185 |
| 0.1789 | 2.1151 | 186 |
| 0.1768 | 2.1302 | 187 |
| 0.1768 | 2.1200 | 188 |
| 0.1766 | 2.1399 | 189 |
| 0.1732 | 2.1196 | 190 |
| 0.1719 | 2.1362 | 191 |
| 0.1697 | 2.1447 | 192 |
| 0.1684 | 2.1464 | 193 |
| 0.1699 | 2.1442 | 194 |
| 0.1657 | 2.1492 | 195 |
| 0.1607 | 2.1644 | 196 |
| 0.1603 | 2.1667 | 197 |
| 0.1580 | 2.1715 | 198 |
| 0.1588 | 2.1818 | 199 |
| 0.1551 | 2.1825 | 200 |
| 0.1572 | 2.1779 | 201 |
| 0.1552 | 2.1842 | 202 |
| 0.1528 | 2.2038 | 203 |
| 0.1530 | 2.1941 | 204 |
| 0.1501 | 2.1903 | 205 |
| 0.1492 | 2.2089 | 206 |
| 0.1498 | 2.1871 | 207 |
| 0.1481 | 2.1888 | 208 |
| 0.1486 | 2.2130 | 209 |
| 0.1434 | 2.2259 | 210 |
| 0.1432 | 2.2159 | 211 |
| 0.1436 | 2.2151 | 212 |
| 0.1411 | 2.2221 | 213 |
| 0.1414 | 2.2294 | 214 |
| 0.1381 | 2.2310 | 215 |
| 0.1360 | 2.2444 | 216 |
| 0.1353 | 2.2427 | 217 |
| 0.1372 | 2.2461 | 218 |
| 0.1350 | 2.2455 | 219 |
| 0.1319 | 2.2616 | 220 |
| 0.1345 | 2.2556 | 221 |
| 0.1319 | 2.2567 | 222 |
| 0.1301 | 2.2589 | 223 |
| 0.1273 | 2.2709 | 224 |
| 0.1266 | 2.2737 | 225 |
| 0.1251 | 2.2794 | 226 |
| 0.1255 | 2.2707 | 227 |
| 0.1264 | 2.2903 | 228 |
| 0.1252 | 2.2681 | 229 |
| 0.1229 | 2.2939 | 230 |
| 0.1217 | 2.2889 | 231 |
| 0.1214 | 2.2855 | 232 |
| 0.1195 | 2.3005 | 233 |
| 0.1196 | 2.3030 | 234 |
| 0.1200 | 2.3065 | 235 |
| 0.1176 | 2.2957 | 236 |
| 0.1183 | 2.2850 | 237 |
| 0.1173 | 2.3067 | 238 |
| 0.1158 | 2.3098 | 239 |
| 0.1175 | 2.3070 | 240 |
| 0.1144 | 2.3091 | 241 |
| 0.1113 | 2.3286 | 242 |
| 0.1112 | 2.3344 | 243 |
| 0.1122 | 2.3201 | 244 |
| 0.1112 | 2.3277 | 245 |
| 0.1103 | 2.3282 | 246 |
| 0.1074 | 2.3500 | 247 |
| 0.1098 | 2.3347 | 248 |
| 0.1096 | 2.3363 | 249 |
| 0.1063 | 2.3397 | 250 |
| 0.1053 | 2.3460 | 251 |
| 0.1077 | 2.3321 | 252 |
| 0.1055 | 2.3546 | 253 |
| 0.1053 | 2.3340 | 254 |
| 0.1041 | 2.3378 | 255 |
| 0.1027 | 2.3657 | 256 |
| 0.1030 | 2.3373 | 257 |
| 0.1018 | 2.3576 | 258 |
| 0.1040 | 2.3498 | 259 |
| 0.1010 | 2.3487 | 260 |
| 0.1011 | 2.3558 | 261 |
| 0.0999 | 2.3610 | 262 |
| 0.0996 | 2.3547 | 263 |
| 0.0989 | 2.3651 | 264 |
| 0.0987 | 2.3588 | 265 |
| 0.1003 | 2.3488 | 266 |
| 0.0966 | 2.3740 | 267 |
| 0.0973 | 2.3670 | 268 |
| 0.0980 | 2.3540 | 269 |
| 0.0977 | 2.3531 | 270 |
| 0.0956 | 2.3516 | 271 |
| 0.0940 | 2.3640 | 272 |
| 0.0941 | 2.3609 | 273 |
| 0.0933 | 2.3583 | 274 |
| 0.0954 | 2.3766 | 275 |
| 0.0905 | 2.3796 | 276 |
| 0.0931 | 2.3734 | 277 |
| 0.0924 | 2.3788 | 278 |
| 0.0897 | 2.3839 | 279 |
| 0.0900 | 2.3819 | 280 |
| 0.0900 | 2.3771 | 281 |
| 0.0913 | 2.3619 | 282 |
| 0.0888 | 2.3731 | 283 |
| 0.0901 | 2.3813 | 284 |
| 0.0877 | 2.3956 | 285 |
| 0.0882 | 2.3754 | 286 |
| 0.0874 | 2.3767 | 287 |
| 0.0862 | 2.3913 | 288 |
| 0.0877 | 2.3835 | 289 |
| 0.0864 | 2.4017 | 290 |
| 0.0858 | 2.4085 | 291 |
| 0.0863 | 2.4105 | 292 |
| 0.0858 | 2.4059 | 293 |
| 0.0865 | 2.3823 | 294 |
| 0.0843 | 2.4068 | 295 |
| 0.0849 | 2.4148 | 296 |
| 0.0838 | 2.4138 | 297 |
| 0.0837 | 2.4177 | 298 |
| 0.0824 | 2.4125 | 299 |
| 0.0830 | 2.3931 | 300 |
| 0.0827 | 2.4092 | 301 |
| 0.0840 | 2.4185 | 302 |
| 0.0835 | 2.4079 | 303 |
| 0.0814 | 2.4121 | 304 |
| 0.0820 | 2.4149 | 305 |
| 0.0811 | 2.3981 | 306 |
| 0.0815 | 2.4207 | 307 |
| 0.0795 | 2.4305 | 308 |
| 0.0816 | 2.4200 | 309 |
| 0.0792 | 2.4255 | 310 |
| 0.0803 | 2.4238 | 311 |
| 0.0781 | 2.4316 | 312 |
| 0.0773 | 2.4552 | 313 |
| 0.0777 | 2.4426 | 314 |
| 0.0767 | 2.4411 | 315 |
| 0.0775 | 2.4338 | 316 |
| 0.0774 | 2.4471 | 317 |
| 0.0775 | 2.4411 | 318 |
| 0.0772 | 2.4345 | 319 |
| 0.0767 | 2.4524 | 320 |
| 0.0773 | 2.4268 | 321 |
| 0.0764 | 2.4423 | 322 |
| 0.0763 | 2.4347 | 323 |
| 0.0757 | 2.4518 | 324 |
| 0.0761 | 2.4477 | 325 |
| 0.0742 | 2.4567 | 326 |
| 0.0763 | 2.4599 | 327 |
| 0.0745 | 2.4768 | 328 |
| 0.0751 | 2.4397 | 329 |
| 0.0744 | 2.4510 | 330 |
| 0.0737 | 2.4455 | 331 |
| 0.0747 | 2.4608 | 332 |
| 0.0724 | 2.4727 | 333 |
| 0.0740 | 2.4467 | 334 |
| 0.0739 | 2.4447 | 335 |
| 0.0716 | 2.4674 | 336 |
| 0.0723 | 2.4512 | 337 |
| 0.0726 | 2.4452 | 338 |
| 0.0709 | 2.4469 | 339 |
| 0.0721 | 2.4593 | 340 |
| 0.0719 | 2.4458 | 341 |
| 0.0704 | 2.4783 | 342 |
| 0.0702 | 2.4690 | 343 |
| 0.0720 | 2.4510 | 344 |
| 0.0700 | 2.4665 | 345 |
| 0.0713 | 2.4748 | 346 |
| 0.0693 | 2.4626 | 347 |
| 0.0687 | 2.4665 | 348 |
| 0.0685 | 2.4568 | 349 |
| 0.0692 | 2.4718 | 350 |
| 0.0694 | 2.4751 | 351 |
| 0.0691 | 2.4684 | 352 |
| 0.0684 | 2.4866 | 353 |
| 0.0674 | 2.4946 | 354 |
| 0.0671 | 2.4772 | 355 |
| 0.0674 | 2.4763 | 356 |
| 0.0672 | 2.5013 | 357 |
| 0.0683 | 2.4805 | 358 |
| 0.0675 | 2.4810 | 359 |
| 0.0660 | 2.4837 | 360 |
| 0.0663 | 2.4880 | 361 |
| 0.0659 | 2.4878 | 362 |
| 0.0670 | 2.4878 | 363 |
| 0.0663 | 2.4880 | 364 |
| 0.0649 | 2.4862 | 365 |
| 0.0661 | 2.4902 | 366 |
| 0.0655 | 2.5094 | 367 |
| 0.0645 | 2.5056 | 368 |
| 0.0643 | 2.5108 | 369 |
| 0.0651 | 2.5107 | 370 |
| 0.0645 | 2.5097 | 371 |
| 0.0649 | 2.5055 | 372 |
| 0.0641 | 2.5140 | 373 |
| 0.0648 | 2.5048 | 374 |
| 0.0638 | 2.5043 | 375 |
| 0.0641 | 2.5189 | 376 |
| 0.0648 | 2.5121 | 377 |
| 0.0633 | 2.5016 | 378 |
| 0.0635 | 2.5086 | 379 |
| 0.0630 | 2.5201 | 380 |
| 0.0624 | 2.5168 | 381 |
| 0.0628 | 2.5057 | 382 |
| 0.0625 | 2.5213 | 383 |
| 0.0638 | 2.5116 | 384 |
| 0.0633 | 2.5119 | 385 |
| 0.0629 | 2.5153 | 386 |
| 0.0631 | 2.5124 | 387 |
| 0.0618 | 2.5068 | 388 |
| 0.0618 | 2.5147 | 389 |
| 0.0616 | 2.5187 | 390 |
| 0.0607 | 2.5190 | 391 |
| 0.0609 | 2.5095 | 392 |
| 0.0624 | 2.5009 | 393 |
| 0.0605 | 2.5058 | 394 |
| 0.0623 | 2.5067 | 395 |
| 0.0616 | 2.4963 | 396 |
| 0.0609 | 2.5164 | 397 |
| 0.0600 | 2.5098 | 398 |
| 0.0598 | 2.5210 | 399 |
| 0.0600 | 2.5219 | 400 |
| 0.0601 | 2.5294 | 401 |
| 0.0597 | 2.5104 | 402 |
| 0.0592 | 2.5396 | 403 |
| 0.0593 | 2.5355 | 404 |
| 0.0599 | 2.5125 | 405 |
| 0.0592 | 2.5513 | 406 |
| 0.0595 | 2.5446 | 407 |
| 0.0581 | 2.5417 | 408 |
| 0.0593 | 2.5255 | 409 |
| 0.0597 | 2.5447 | 410 |
| 0.0588 | 2.5475 | 411 |
| 0.0584 | 2.5529 | 412 |
| 0.0576 | 2.5431 | 413 |
| 0.0573 | 2.5441 | 414 |
| 0.0585 | 2.5366 | 415 |
| 0.0571 | 2.5554 | 416 |
| 0.0580 | 2.5337 | 417 |
| 0.0589 | 2.5227 | 418 |
| 0.0582 | 2.5328 | 419 |
| 0.0575 | 2.5512 | 420 |
| 0.0573 | 2.5600 | 421 |
| 0.0578 | 2.5597 | 422 |
| 0.0578 | 2.5589 | 423 |
| 0.0567 | 2.5518 | 424 |
| 0.0574 | 2.5650 | 425 |
| 0.0580 | 2.5462 | 426 |
| 0.0560 | 2.5490 | 427 |
| 0.0558 | 2.5566 | 428 |
| 0.0565 | 2.5489 | 429 |
| 0.0569 | 2.5492 | 430 |
| 0.0564 | 2.5509 | 431 |
| 0.0555 | 2.5484 | 432 |
| 0.0556 | 2.5403 | 433 |
| 0.0549 | 2.5533 | 434 |
| 0.0546 | 2.5606 | 435 |
| 0.0556 | 2.5657 | 436 |
| 0.0554 | 2.5543 | 437 |
| 0.0554 | 2.5780 | 438 |
| 0.0554 | 2.5815 | 439 |
| 0.0546 | 2.5734 | 440 |
| 0.0540 | 2.5661 | 441 |
| 0.0541 | 2.5809 | 442 |
| 0.0537 | 2.5701 | 443 |
| 0.0548 | 2.5641 | 444 |
| 0.0551 | 2.5584 | 445 |
| 0.0544 | 2.5504 | 446 |
| 0.0538 | 2.5745 | 447 |
| 0.0544 | 2.5595 | 448 |
| 0.0550 | 2.5685 | 449 |
| 0.0529 | 2.5680 | 450 |
| 0.0530 | 2.5781 | 451 |
| 0.0530 | 2.5722 | 452 |
| 0.0524 | 2.5818 | 453 |
| 0.0523 | 2.5727 | 454 |
| 0.0530 | 2.5708 | 455 |
| 0.0541 | 2.5882 | 456 |
| 0.0531 | 2.5703 | 457 |
| 0.0531 | 2.5910 | 458 |
| 0.0520 | 2.5712 | 459 |
| 0.0535 | 2.5703 | 460 |
| 0.0523 | 2.5671 | 461 |
| 0.0526 | 2.5926 | 462 |
| 0.0524 | 2.5740 | 463 |
| 0.0525 | 2.5580 | 464 |
| 0.0518 | 2.5777 | 465 |
| 0.0515 | 2.5942 | 466 |
| 0.0521 | 2.5632 | 467 |
| 0.0523 | 2.5658 | 468 |
| 0.0517 | 2.5798 | 469 |
| 0.0521 | 2.5898 | 470 |
| 0.0519 | 2.5733 | 471 |
| 0.0512 | 2.6010 | 472 |
| 0.0518 | 2.5822 | 473 |
| 0.0519 | 2.5942 | 474 |
| 0.0514 | 2.5968 | 475 |
| 0.0511 | 2.5963 | 476 |
| 0.0514 | 2.5924 | 477 |
| 0.0501 | 2.5994 | 478 |
| 0.0510 | 2.5948 | 479 |
| 0.0507 | 2.6069 | 480 |
| 0.0516 | 2.6118 | 481 |
| 0.0506 | 2.6180 | 482 |
| 0.0504 | 2.6209 | 483 |
| 0.0515 | 2.6133 | 484 |
| 0.0503 | 2.6106 | 485 |
| 0.0511 | 2.6082 | 486 |
| 0.0516 | 2.5892 | 487 |
| 0.0508 | 2.5803 | 488 |
| 0.0502 | 2.5887 | 489 |
| 0.0501 | 2.5958 | 490 |
| 0.0500 | 2.6165 | 491 |
| 0.0496 | 2.6172 | 492 |
| 0.0508 | 2.6027 | 493 |
| 0.0502 | 2.6052 | 494 |
| 0.0505 | 2.6160 | 495 |
| 0.0503 | 2.6068 | 496 |
| 0.0502 | 2.6031 | 497 |
| 0.0490 | 2.6202 | 498 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "t5-small", "model-index": [{"name": "pijarcandra22/NMTIndoBaliT5", "results": []}]} | pijarcandra22/NMTIndoBaliT5 | null | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-16T12:52:08+00:00 | []
| []
| TAGS
#transformers #tf #t5 #text2text-generation #generated_from_keras_callback #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| pijarcandra22/NMTIndoBaliT5
===========================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.0490
* Validation Loss: 2.6202
* Epoch: 498
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': 1e-04, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\_decay\_rate': 0.01}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.38.2
* TensorFlow 2.15.0
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 1e-04, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#transformers #tf #t5 #text2text-generation #generated_from_keras_callback #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 1e-04, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | m7mdal7aj/fine_tunned_llama_2_13b_chat_OKVQA | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-16T12:53:44+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
- load_in_4bit: True
- load_in_8bit: False
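
For reference, the quantization settings above map onto a `BitsAndBytesConfig` roughly as in the sketch below when loading the base model (an illustration, not the exact training code); the hyperparameters listed in the next section would then be passed through `TrainingArguments` in the usual way.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Values mirror the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)

model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
```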
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 7
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0", "model-index": [{"name": "results", "results": []}]} | amanayush/results | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T12:54:41+00:00 | []
| []
| TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us
|
# results
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
- load_in_4bit: True
- load_in_8bit: False
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 7
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# results\n\nThis model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n- load_in_4bit: True\n- load_in_8bit: False",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 7",
"### Training results",
"### Framework versions\n\n- PEFT 0.4.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us \n",
"# results\n\nThis model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n- load_in_4bit: True\n- load_in_8bit: False",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 7",
"### Training results",
"### Framework versions\n\n- PEFT 0.4.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
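
Pending the official snippet, a minimal, unverified loading sketch based on this card's metadata (the repository ID and the 4-bit tag) might look like the following; the prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "hananrh/gemma-2b-quotes"  # repository ID from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(repo)
# The "4-bit" tag suggests the weights are stored bitsandbytes-quantized,
# so no extra quantization config should be needed (bitsandbytes must be installed).
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Give me a quote about patience:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```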
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | hananrh/gemma-2b-quotes | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-16T12:55:17+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_16384_512_34M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1053
- F1 Score: 0.6779
- Accuracy: 0.678
## Model description
More information needed
## Intended uses & limitations
More information needed
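Although the card does not include a usage example, the sketch below shows one *possible* way to load this adapter for inference. It is not taken from the training code: it assumes the base `seqsight` checkpoint can be loaded through `AutoModelForSequenceClassification` (custom code may be required), that the task is binary classification, and that the tokenizer accepts raw nucleotide strings.

```python
# Hedged sketch (not from the model author): attach this PEFT adapter to the base
# seqsight checkpoint and run a single classification forward pass.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_34M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_2-seqsight_16384_512_34M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # binary task assumed
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Hypothetical input sequence; the real evaluation data comes from the GUE_tf_2 dataset.
inputs = tokenizer("ACGTACGTACGTACGTACGT", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```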
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6344 | 15.38 | 200 | 0.6441 | 0.6264 | 0.63 |
| 0.5404 | 30.77 | 400 | 0.7001 | 0.6330 | 0.633 |
| 0.473 | 46.15 | 600 | 0.7367 | 0.6369 | 0.641 |
| 0.4201 | 61.54 | 800 | 0.7615 | 0.6444 | 0.646 |
| 0.3795 | 76.92 | 1000 | 0.8477 | 0.6325 | 0.634 |
| 0.3503 | 92.31 | 1200 | 0.8048 | 0.6380 | 0.64 |
| 0.3249 | 107.69 | 1400 | 0.8627 | 0.6410 | 0.641 |
| 0.3086 | 123.08 | 1600 | 0.9055 | 0.6409 | 0.641 |
| 0.2924 | 138.46 | 1800 | 0.9183 | 0.6430 | 0.643 |
| 0.2822 | 153.85 | 2000 | 0.9585 | 0.6430 | 0.643 |
| 0.27 | 169.23 | 2200 | 0.9781 | 0.6265 | 0.628 |
| 0.2596 | 184.62 | 2400 | 0.9521 | 0.6402 | 0.641 |
| 0.2486 | 200.0 | 2600 | 0.9891 | 0.6414 | 0.642 |
| 0.2397 | 215.38 | 2800 | 0.9351 | 0.6389 | 0.639 |
| 0.2306 | 230.77 | 3000 | 0.9703 | 0.6340 | 0.634 |
| 0.2225 | 246.15 | 3200 | 1.0117 | 0.6322 | 0.633 |
| 0.2154 | 261.54 | 3400 | 0.9916 | 0.6368 | 0.637 |
| 0.2088 | 276.92 | 3600 | 1.0980 | 0.6320 | 0.632 |
| 0.1994 | 292.31 | 3800 | 1.0146 | 0.6470 | 0.647 |
| 0.195 | 307.69 | 4000 | 1.0667 | 0.6429 | 0.643 |
| 0.1881 | 323.08 | 4200 | 1.0832 | 0.6359 | 0.636 |
| 0.1845 | 338.46 | 4400 | 1.0945 | 0.6446 | 0.645 |
| 0.1781 | 353.85 | 4600 | 1.1064 | 0.6517 | 0.652 |
| 0.1744 | 369.23 | 4800 | 1.0515 | 0.6446 | 0.645 |
| 0.1686 | 384.62 | 5000 | 1.1528 | 0.6410 | 0.641 |
| 0.1629 | 400.0 | 5200 | 1.1276 | 0.6497 | 0.65 |
| 0.1603 | 415.38 | 5400 | 1.1454 | 0.6440 | 0.644 |
| 0.1545 | 430.77 | 5600 | 1.1696 | 0.6439 | 0.644 |
| 0.1517 | 446.15 | 5800 | 1.1707 | 0.6580 | 0.658 |
| 0.1487 | 461.54 | 6000 | 1.1869 | 0.6520 | 0.652 |
| 0.1442 | 476.92 | 6200 | 1.1954 | 0.6467 | 0.647 |
| 0.1395 | 492.31 | 6400 | 1.1764 | 0.6510 | 0.651 |
| 0.138 | 507.69 | 6600 | 1.2327 | 0.6529 | 0.653 |
| 0.1341 | 523.08 | 6800 | 1.2311 | 0.6540 | 0.654 |
| 0.1334 | 538.46 | 7000 | 1.2414 | 0.6509 | 0.651 |
| 0.1306 | 553.85 | 7200 | 1.2352 | 0.6510 | 0.651 |
| 0.1278 | 569.23 | 7400 | 1.2354 | 0.6480 | 0.648 |
| 0.1266 | 584.62 | 7600 | 1.2391 | 0.6500 | 0.65 |
| 0.1242 | 600.0 | 7800 | 1.2287 | 0.6490 | 0.649 |
| 0.1203 | 615.38 | 8000 | 1.3224 | 0.652 | 0.652 |
| 0.1191 | 630.77 | 8200 | 1.2870 | 0.6499 | 0.65 |
| 0.1171 | 646.15 | 8400 | 1.2742 | 0.6500 | 0.65 |
| 0.1156 | 661.54 | 8600 | 1.2775 | 0.6489 | 0.649 |
| 0.1149 | 676.92 | 8800 | 1.2976 | 0.6460 | 0.646 |
| 0.1137 | 692.31 | 9000 | 1.3042 | 0.6489 | 0.649 |
| 0.1136 | 707.69 | 9200 | 1.2813 | 0.6510 | 0.651 |
| 0.1111 | 723.08 | 9400 | 1.3022 | 0.6480 | 0.648 |
| 0.113 | 738.46 | 9600 | 1.2807 | 0.6479 | 0.648 |
| 0.1094 | 753.85 | 9800 | 1.3084 | 0.6490 | 0.649 |
| 0.1097 | 769.23 | 10000 | 1.3013 | 0.6510 | 0.651 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_tf_2-seqsight_16384_512_34M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_16384_512_34M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_34M",
"region:us"
]
| null | 2024-04-16T12:55:34+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
| GUE\_tf\_2-seqsight\_16384\_512\_34M-L32\_all
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1053
* F1 Score: 0.6779
* Accuracy: 0.678
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1464
- F1 Score: 0.6080
- Accuracy: 0.6085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
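For readers who prefer code to a bullet list, the settings above map roughly onto the following `transformers.TrainingArguments`. This is an illustration only: the actual training script is not part of this card, and `output_dir` is a placeholder.

```python
# Illustrative only: the hyperparameters listed above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./gue_prom_prom_300_tata",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,            # "training_steps: 10000"
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=200,              # matches the 200-step interval in the results table
    logging_steps=200,
)
```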
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5294 | 66.67 | 200 | 0.9298 | 0.6348 | 0.6378 |
| 0.2241 | 133.33 | 400 | 1.3528 | 0.6335 | 0.6362 |
| 0.1256 | 200.0 | 600 | 1.5321 | 0.6348 | 0.6346 |
| 0.0885 | 266.67 | 800 | 1.7238 | 0.6342 | 0.6395 |
| 0.0633 | 333.33 | 1000 | 1.8965 | 0.6506 | 0.6509 |
| 0.0507 | 400.0 | 1200 | 2.1681 | 0.6392 | 0.6395 |
| 0.0424 | 466.67 | 1400 | 2.0712 | 0.6397 | 0.6395 |
| 0.0355 | 533.33 | 1600 | 2.4078 | 0.6348 | 0.6362 |
| 0.0316 | 600.0 | 1800 | 2.2623 | 0.6396 | 0.6395 |
| 0.0274 | 666.67 | 2000 | 2.5492 | 0.6277 | 0.6281 |
| 0.0252 | 733.33 | 2200 | 2.6147 | 0.6360 | 0.6378 |
| 0.0245 | 800.0 | 2400 | 2.6290 | 0.6393 | 0.6411 |
| 0.0229 | 866.67 | 2600 | 2.3514 | 0.6391 | 0.6411 |
| 0.0214 | 933.33 | 2800 | 2.3575 | 0.6465 | 0.6476 |
| 0.0189 | 1000.0 | 3000 | 2.5195 | 0.6314 | 0.6313 |
| 0.0181 | 1066.67 | 3200 | 2.5493 | 0.6329 | 0.6330 |
| 0.0176 | 1133.33 | 3400 | 2.7595 | 0.6458 | 0.6476 |
| 0.0174 | 1200.0 | 3600 | 2.5493 | 0.6538 | 0.6542 |
| 0.0153 | 1266.67 | 3800 | 2.6045 | 0.6413 | 0.6411 |
| 0.0144 | 1333.33 | 4000 | 2.7520 | 0.6487 | 0.6493 |
| 0.0153 | 1400.0 | 4200 | 2.7680 | 0.6413 | 0.6444 |
| 0.0142 | 1466.67 | 4400 | 2.6089 | 0.6522 | 0.6525 |
| 0.0138 | 1533.33 | 4600 | 2.7563 | 0.6575 | 0.6574 |
| 0.013 | 1600.0 | 4800 | 2.7806 | 0.6604 | 0.6607 |
| 0.0122 | 1666.67 | 5000 | 2.8860 | 0.6493 | 0.6493 |
| 0.0125 | 1733.33 | 5200 | 2.7067 | 0.6454 | 0.6460 |
| 0.0121 | 1800.0 | 5400 | 2.7190 | 0.6422 | 0.6427 |
| 0.0117 | 1866.67 | 5600 | 3.1279 | 0.6449 | 0.6476 |
| 0.011 | 1933.33 | 5800 | 2.9516 | 0.6545 | 0.6558 |
| 0.0109 | 2000.0 | 6000 | 2.8848 | 0.6298 | 0.6297 |
| 0.01 | 2066.67 | 6200 | 2.9372 | 0.6434 | 0.6444 |
| 0.0101 | 2133.33 | 6400 | 2.8952 | 0.6363 | 0.6395 |
| 0.0105 | 2200.0 | 6600 | 2.9215 | 0.6379 | 0.6378 |
| 0.0095 | 2266.67 | 6800 | 3.0353 | 0.6345 | 0.6362 |
| 0.0098 | 2333.33 | 7000 | 2.8501 | 0.6405 | 0.6411 |
| 0.0095 | 2400.0 | 7200 | 2.7763 | 0.6476 | 0.6476 |
| 0.009 | 2466.67 | 7400 | 2.8183 | 0.6319 | 0.6346 |
| 0.0092 | 2533.33 | 7600 | 2.7930 | 0.6421 | 0.6427 |
| 0.0086 | 2600.0 | 7800 | 2.6217 | 0.6331 | 0.6330 |
| 0.0083 | 2666.67 | 8000 | 2.8680 | 0.6483 | 0.6493 |
| 0.008 | 2733.33 | 8200 | 3.0055 | 0.6399 | 0.6411 |
| 0.0079 | 2800.0 | 8400 | 2.8192 | 0.6474 | 0.6476 |
| 0.0078 | 2866.67 | 8600 | 2.9098 | 0.6452 | 0.6460 |
| 0.008 | 2933.33 | 8800 | 2.9461 | 0.6466 | 0.6493 |
| 0.0076 | 3000.0 | 9000 | 2.9566 | 0.6461 | 0.6460 |
| 0.0078 | 3066.67 | 9200 | 2.8579 | 0.6377 | 0.6378 |
| 0.0074 | 3133.33 | 9400 | 2.9272 | 0.6423 | 0.6427 |
| 0.0073 | 3200.0 | 9600 | 2.9180 | 0.6452 | 0.6460 |
| 0.007 | 3266.67 | 9800 | 3.0092 | 0.6423 | 0.6427 |
| 0.007 | 3333.33 | 10000 | 3.0250 | 0.6402 | 0.6411 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
]
| null | 2024-04-16T12:58:44+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_300\_tata-seqsight\_16384\_512\_56M-L32\_all
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 3.1464
* F1 Score: 0.6080
* Accuracy: 0.6085
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
reinforcement-learning | ml-agents |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: QuantumInstability/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
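Alternatively, if you want the trained checkpoint files (the exported `.onnx` policy and its configuration) on your own machine, one way is a plain Hugging Face Hub download — this is generic Hub usage, not an ML-Agents-specific command:

```python
# Hedged sketch: fetch the trained SnowballTarget files locally with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="QuantumInstability/ppo-SnowballTarget",
    local_dir="./downloads/ppo-SnowballTarget",  # placeholder path
)
print("Model files downloaded to:", local_dir)
```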
| {"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]} | QuantumInstability/ppo-SnowballTarget | null | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| null | 2024-04-16T12:59:10+00:00 | []
| []
| TAGS
#ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us
|
# ppo Agent playing SnowballTarget
This is a trained model of a ppo agent playing SnowballTarget
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: QuantumInstability/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: QuantumInstability/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
]
| [
"TAGS\n#ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us \n",
"# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: QuantumInstability/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
]
|
sentence-similarity | sentence-transformers |
# mncai/EZCT_BGEM3_EMBEDDING
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mncai/EZCT_BGEM3_EMBEDDING')
embeddings = model.encode(sentences)
print(embeddings)
```
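As a follow-up to the snippet above, embeddings from this model can be compared with cosine similarity for semantic search; the sentences below are illustrative only:

```python
# Follow-up sketch: score a query against a small corpus with cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('mncai/EZCT_BGEM3_EMBEDDING')
query_embedding = model.encode("How do I reset my password?", convert_to_tensor=True)
corpus_embeddings = model.encode(
    ["Password reset instructions", "Office opening hours"], convert_to_tensor=True
)

scores = util.cos_sim(query_embedding, corpus_embeddings)
print(scores)  # higher score = more similar
```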
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mncai/EZCT_BGEM3_EMBEDDING)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | mncai/EZCT_BGEM3_EMBEDDING | null | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T13:02:21+00:00 | []
| []
| TAGS
#sentence-transformers #safetensors #xlm-roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Full Model Architecture
## Citing & Authors
| [
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
]
| [
"TAGS\n#sentence-transformers #safetensors #xlm-roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
]
|
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [TheDrummer/Moistral-11B-v2](https://huggingface.co/TheDrummer/Moistral-11B-v2)
* [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: TheDrummer/Moistral-11B-v2
layer_range: [0, 48]
- model: Sao10K/Fimbulvetr-11B-v2
layer_range: [0, 48]
merge_method: slerp
base_model: Sao10K/Fimbulvetr-11B-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
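The card stops at the merge configuration. For completeness, a hedged sketch of loading the merged checkpoint with plain `transformers` is shown below; nothing here is specific to this merge, `accelerate` is assumed for `device_map="auto"`, and the prompt format of the underlying models is not documented on this card.

```python
# Hedged sketch: load the merged model and generate text with plain transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Tokerss/testmoisfimu"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"  # requires accelerate
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```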
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["TheDrummer/Moistral-11B-v2", "Sao10K/Fimbulvetr-11B-v2"]} | Tokerss/testmoisfimu | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:TheDrummer/Moistral-11B-v2",
"base_model:Sao10K/Fimbulvetr-11B-v2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-16T13:02:25+00:00 | []
| []
| TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-TheDrummer/Moistral-11B-v2 #base_model-Sao10K/Fimbulvetr-11B-v2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* TheDrummer/Moistral-11B-v2
* Sao10K/Fimbulvetr-11B-v2
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* TheDrummer/Moistral-11B-v2\n* Sao10K/Fimbulvetr-11B-v2",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
]
| [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-TheDrummer/Moistral-11B-v2 #base_model-Sao10K/Fimbulvetr-11B-v2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* TheDrummer/Moistral-11B-v2\n* Sao10K/Fimbulvetr-11B-v2",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
]
|
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
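Since this section is left blank, the snippet below is only a guess at the intended usage, based on the `base_model: Qwen/Qwen-VL-Chat` entry in the card metadata; Qwen-VL-Chat ships custom modelling code, so `trust_remote_code=True` is needed.

```python
# Hedged sketch (not from the model author): attach this PEFT adapter to Qwen-VL-Chat.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen-VL-Chat"
adapter_id = "trinhxuankhai/origin_o_pedes_appearance"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```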
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_o_pedes_appearance | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
]
| null | 2024-04-16T13:03:43+00:00 | [
"1910.09700"
]
| []
| TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
]
| [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-hf-platypus-lamini-vxxiii-chat-real
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
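No usage notes are given, but one common thing to do with a LoRA checkpoint like this is to fold the adapter weights back into the base model so it can be served as a standalone checkpoint. The sketch below is a generic PEFT recipe, not something documented by the author, and assumes the adapter targets `mistralai/Mistral-7B-v0.1` as stated above.

```python
# Hedged sketch: merge this LoRA adapter into the Mistral-7B base and save a standalone copy.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "NassimB/mistral-7b-hf-platypus-lamini-vxxiii-chat-real"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
merged.save_pretrained("./mistral-7b-platypus-merged")  # placeholder output path
tokenizer.save_pretrained("./mistral-7b-platypus-merged")
```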
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.1
- Pytorch 2.2.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistral-7b-hf-platypus-lamini-vxxiii-chat-real", "results": []}]} | NassimB/mistral-7b-hf-platypus-lamini-vxxiii-chat-real | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T13:03:59+00:00 | []
| []
| TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
|
# mistral-7b-hf-platypus-lamini-vxxiii-chat-real
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.1
- Pytorch 2.2.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1 | [
"# mistral-7b-hf-platypus-lamini-vxxiii-chat-real\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.1\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1"
]
| [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n",
"# mistral-7b-hf-platypus-lamini-vxxiii-chat-real\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.1\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-tmo
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
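Given the `trl`/`sft` tags on this card, the run was presumably driven by `trl`'s `SFTTrainer`. The snippet below is only a rough reconstruction of such a setup with the hyperparameters above, using a `trl` version contemporary with this card; the dataset, LoRA configuration, text column and sequence length are placeholders, since none of them are documented here.

```python
# Rough, hedged reconstruction of an SFT run with the hyperparameters listed above.
from datasets import Dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

train_dataset = Dataset.from_dict({"text": ["### Question: ...\n### Answer: ..."]})  # placeholder

peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # placeholder values

training_args = TrainingArguments(
    output_dir="./phi-2-tmo",       # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    num_train_epochs=1,
    seed=42,
    lr_scheduler_type="linear",
)

trainer = SFTTrainer(
    model="microsoft/phi-2",        # SFTTrainer also accepts a model id string
    args=training_args,
    train_dataset=train_dataset,
    peft_config=peft_config,
    dataset_text_field="text",      # depends on the (undocumented) dataset schema
    max_seq_length=512,             # placeholder
)
# trainer.train()
```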
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi-2-tmo", "results": []}]} | rebeccaD/phi-2-tmo | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
]
| null | 2024-04-16T13:04:52+00:00 | []
| []
| TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
|
# phi-2-tmo
This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# phi-2-tmo\n\nThis model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
]
| [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n",
"# phi-2-tmo\n\nThis model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5456
- F1 Score: 0.8528
- Accuracy: 0.8528
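The evaluation script itself is not shown on this card; for reference, metrics like the ones above can be computed from predictions in a few lines. Macro-averaged F1 is assumed here, which the near-identical F1 and accuracy values are consistent with.

```python
# Hedged sketch: recomputing accuracy and (macro) F1 from predictions with scikit-learn.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1, 0, 0, 1]  # placeholder labels
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]  # placeholder predictions

print("accuracy  :", accuracy_score(y_true, y_pred))
print("f1 (macro):", f1_score(y_true, y_pred, average="macro"))
```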
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5588 | 9.52 | 200 | 0.4555 | 0.7855 | 0.7858 |
| 0.4225 | 19.05 | 400 | 0.4199 | 0.8008 | 0.8018 |
| 0.3683 | 28.57 | 600 | 0.3997 | 0.8248 | 0.8249 |
| 0.3104 | 38.1 | 800 | 0.3929 | 0.8342 | 0.8342 |
| 0.2687 | 47.62 | 1000 | 0.4070 | 0.8367 | 0.8368 |
| 0.2403 | 57.14 | 1200 | 0.3964 | 0.8451 | 0.8451 |
| 0.2173 | 66.67 | 1400 | 0.4090 | 0.8468 | 0.8468 |
| 0.2004 | 76.19 | 1600 | 0.4122 | 0.8460 | 0.8461 |
| 0.1846 | 85.71 | 1800 | 0.4488 | 0.8460 | 0.8461 |
| 0.174 | 95.24 | 2000 | 0.4238 | 0.8536 | 0.8536 |
| 0.1623 | 104.76 | 2200 | 0.4487 | 0.8511 | 0.8511 |
| 0.1531 | 114.29 | 2400 | 0.4603 | 0.8508 | 0.8508 |
| 0.1466 | 123.81 | 2600 | 0.4721 | 0.8503 | 0.8504 |
| 0.1381 | 133.33 | 2800 | 0.4789 | 0.8467 | 0.8468 |
| 0.1329 | 142.86 | 3000 | 0.5252 | 0.8393 | 0.8398 |
| 0.1285 | 152.38 | 3200 | 0.4892 | 0.8528 | 0.8528 |
| 0.123 | 161.9 | 3400 | 0.4957 | 0.8465 | 0.8466 |
| 0.1176 | 171.43 | 3600 | 0.5105 | 0.8485 | 0.8487 |
| 0.1137 | 180.95 | 3800 | 0.5028 | 0.8528 | 0.8528 |
| 0.1095 | 190.48 | 4000 | 0.5208 | 0.8497 | 0.8498 |
| 0.1053 | 200.0 | 4200 | 0.5189 | 0.8492 | 0.8493 |
| 0.1033 | 209.52 | 4400 | 0.5436 | 0.8479 | 0.8481 |
| 0.1006 | 219.05 | 4600 | 0.5169 | 0.8486 | 0.8489 |
| 0.0962 | 228.57 | 4800 | 0.5129 | 0.8494 | 0.8494 |
| 0.0941 | 238.1 | 5000 | 0.5197 | 0.8554 | 0.8555 |
| 0.0917 | 247.62 | 5200 | 0.5394 | 0.8489 | 0.8491 |
| 0.0892 | 257.14 | 5400 | 0.5360 | 0.8497 | 0.8498 |
| 0.0872 | 266.67 | 5600 | 0.5590 | 0.8484 | 0.8485 |
| 0.0849 | 276.19 | 5800 | 0.5273 | 0.8569 | 0.8570 |
| 0.0823 | 285.71 | 6000 | 0.5461 | 0.8533 | 0.8534 |
| 0.0815 | 295.24 | 6200 | 0.5807 | 0.8521 | 0.8523 |
| 0.0797 | 304.76 | 6400 | 0.5832 | 0.8492 | 0.8494 |
| 0.0781 | 314.29 | 6600 | 0.5580 | 0.8541 | 0.8542 |
| 0.0768 | 323.81 | 6800 | 0.6014 | 0.8458 | 0.8462 |
| 0.0743 | 333.33 | 7000 | 0.5600 | 0.8533 | 0.8534 |
| 0.0741 | 342.86 | 7200 | 0.5807 | 0.8526 | 0.8528 |
| 0.0719 | 352.38 | 7400 | 0.5821 | 0.8502 | 0.8504 |
| 0.0709 | 361.9 | 7600 | 0.5848 | 0.8514 | 0.8515 |
| 0.0693 | 371.43 | 7800 | 0.5822 | 0.8546 | 0.8547 |
| 0.0695 | 380.95 | 8000 | 0.5606 | 0.8567 | 0.8568 |
| 0.0672 | 390.48 | 8200 | 0.5978 | 0.8508 | 0.8510 |
| 0.067 | 400.0 | 8400 | 0.5924 | 0.8504 | 0.8506 |
| 0.0658 | 409.52 | 8600 | 0.5799 | 0.8554 | 0.8555 |
| 0.0656 | 419.05 | 8800 | 0.5933 | 0.8523 | 0.8525 |
| 0.0646 | 428.57 | 9000 | 0.5842 | 0.8527 | 0.8528 |
| 0.0634 | 438.1 | 9200 | 0.6048 | 0.8497 | 0.8498 |
| 0.0632 | 447.62 | 9400 | 0.6036 | 0.8496 | 0.8498 |
| 0.0631 | 457.14 | 9600 | 0.5932 | 0.8520 | 0.8521 |
| 0.0625 | 466.67 | 9800 | 0.5977 | 0.8504 | 0.8506 |
| 0.0621 | 476.19 | 10000 | 0.5965 | 0.8525 | 0.8526 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
]
| null | 2024-04-16T13:05:35+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_16384\_512\_56M-L32\_all
===============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5456
* F1 Score: 0.8528
* Accuracy: 0.8528
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
text-generation | transformers |
# stablelm-2-1_6b-dare1
stablelm-2-1_6b-dare1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [stabilityai/stablelm-2-zephyr-1_6b](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b)
* [stabilityai/stablelm-2-1_6b-chat](https://huggingface.co/stabilityai/stablelm-2-1_6b-chat)
## 🧩 Configuration
```yaml
slices:
- sources:
- layer_range: [0, 24]
model: stabilityai/stablelm-2-zephyr-1_6b
parameters:
density: [1, 0.7, 0.1]
weight: 1.0
- layer_range: [0, 24]
model: stabilityai/stablelm-2-1_6b-chat
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: dare_ties
base_model: stabilityai/stablelm-2-zephyr-1_6b
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aipib/rinna-3.6b-dare1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "stabilityai/stablelm-2-zephyr-1_6b", "stabilityai/stablelm-2-1_6b-chat"], "base_model": ["stabilityai/stablelm-2-zephyr-1_6b", "stabilityai/stablelm-2-1_6b-chat"]} | aipib/stablelm-2-1_6b-dare1 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"stabilityai/stablelm-2-zephyr-1_6b",
"stabilityai/stablelm-2-1_6b-chat",
"conversational",
"base_model:stabilityai/stablelm-2-zephyr-1_6b",
"base_model:stabilityai/stablelm-2-1_6b-chat",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T13:06:19+00:00 | []
| []
| TAGS
#transformers #safetensors #stablelm #text-generation #merge #mergekit #lazymergekit #stabilityai/stablelm-2-zephyr-1_6b #stabilityai/stablelm-2-1_6b-chat #conversational #base_model-stabilityai/stablelm-2-zephyr-1_6b #base_model-stabilityai/stablelm-2-1_6b-chat #autotrain_compatible #endpoints_compatible #region-us
|
# stablelm-2-1_6b-dare1
stablelm-2-1_6b-dare1 is a merge of the following models using LazyMergekit:
* stabilityai/stablelm-2-zephyr-1_6b
* stabilityai/stablelm-2-1_6b-chat
## Configuration
## Usage
| [
"# stablelm-2-1_6b-dare1\n\nrinna-3.6b-dare1 is a merge of the following models using LazyMergekit:\n* stabilityai/stablelm-2-zephyr-1_6b\n* stabilityai/stablelm-2-1_6b-chat",
"## Configuration",
"## Usage"
]
| [
"TAGS\n#transformers #safetensors #stablelm #text-generation #merge #mergekit #lazymergekit #stabilityai/stablelm-2-zephyr-1_6b #stabilityai/stablelm-2-1_6b-chat #conversational #base_model-stabilityai/stablelm-2-zephyr-1_6b #base_model-stabilityai/stablelm-2-1_6b-chat #autotrain_compatible #endpoints_compatible #region-us \n",
"# stablelm-2-1_6b-dare1\n\nrinna-3.6b-dare1 is a merge of the following models using LazyMergekit:\n* stabilityai/stablelm-2-zephyr-1_6b\n* stabilityai/stablelm-2-1_6b-chat",
"## Configuration",
"## Usage"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6038
- F1 Score: 0.7052
- Accuracy: 0.7052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
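
For reference, a minimal sketch of how the hyperparameters above could be expressed with 🤗 Transformers' `TrainingArguments`. The actual training script is not part of this card, so this mapping is an assumption:

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above (not the original script)
training_args = TrainingArguments(
    output_dir="out",
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10000,
)
```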
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.649 | 8.33 | 200 | 0.6020 | 0.6768 | 0.6779 |
| 0.576 | 16.67 | 400 | 0.5865 | 0.6922 | 0.6922 |
| 0.5485 | 25.0 | 600 | 0.5837 | 0.7023 | 0.7024 |
| 0.5226 | 33.33 | 800 | 0.5926 | 0.7026 | 0.7029 |
| 0.5005 | 41.67 | 1000 | 0.6006 | 0.7031 | 0.7032 |
| 0.4811 | 50.0 | 1200 | 0.6294 | 0.6964 | 0.6981 |
| 0.4647 | 58.33 | 1400 | 0.6325 | 0.7008 | 0.7008 |
| 0.4496 | 66.67 | 1600 | 0.6143 | 0.6989 | 0.6995 |
| 0.4366 | 75.0 | 1800 | 0.6629 | 0.6853 | 0.6882 |
| 0.4249 | 83.33 | 2000 | 0.6709 | 0.7016 | 0.7017 |
| 0.4104 | 91.67 | 2200 | 0.6665 | 0.6998 | 0.7010 |
| 0.4001 | 100.0 | 2400 | 0.6915 | 0.6934 | 0.6951 |
| 0.3861 | 108.33 | 2600 | 0.6566 | 0.7008 | 0.7010 |
| 0.3734 | 116.67 | 2800 | 0.7044 | 0.6911 | 0.6921 |
| 0.3622 | 125.0 | 3000 | 0.7293 | 0.6989 | 0.6995 |
| 0.3507 | 133.33 | 3200 | 0.7270 | 0.6942 | 0.6953 |
| 0.3405 | 141.67 | 3400 | 0.7240 | 0.6920 | 0.6936 |
| 0.3282 | 150.0 | 3600 | 0.7479 | 0.6978 | 0.6981 |
| 0.32 | 158.33 | 3800 | 0.7638 | 0.6863 | 0.6883 |
| 0.3116 | 166.67 | 4000 | 0.7609 | 0.6885 | 0.6904 |
| 0.302 | 175.0 | 4200 | 0.8020 | 0.6789 | 0.6814 |
| 0.2936 | 183.33 | 4400 | 0.8012 | 0.6841 | 0.6856 |
| 0.2846 | 191.67 | 4600 | 0.8016 | 0.6911 | 0.6926 |
| 0.2789 | 200.0 | 4800 | 0.7724 | 0.6875 | 0.6882 |
| 0.2706 | 208.33 | 5000 | 0.8217 | 0.6927 | 0.6931 |
| 0.2649 | 216.67 | 5200 | 0.8195 | 0.6841 | 0.6858 |
| 0.2578 | 225.0 | 5400 | 0.8125 | 0.6809 | 0.6828 |
| 0.2507 | 233.33 | 5600 | 0.8336 | 0.6865 | 0.6875 |
| 0.2469 | 241.67 | 5800 | 0.8609 | 0.6848 | 0.6858 |
| 0.2414 | 250.0 | 6000 | 0.8436 | 0.6887 | 0.6894 |
| 0.237 | 258.33 | 6200 | 0.8663 | 0.6856 | 0.6870 |
| 0.2313 | 266.67 | 6400 | 0.8878 | 0.6865 | 0.6880 |
| 0.2249 | 275.0 | 6600 | 0.8736 | 0.6844 | 0.6856 |
| 0.2245 | 283.33 | 6800 | 0.9022 | 0.6800 | 0.6821 |
| 0.2189 | 291.67 | 7000 | 0.9023 | 0.6855 | 0.6875 |
| 0.216 | 300.0 | 7200 | 0.8939 | 0.6812 | 0.6828 |
| 0.2134 | 308.33 | 7400 | 0.9122 | 0.6806 | 0.6829 |
| 0.2093 | 316.67 | 7600 | 0.9027 | 0.6845 | 0.6858 |
| 0.2062 | 325.0 | 7800 | 0.9113 | 0.6830 | 0.6843 |
| 0.2048 | 333.33 | 8000 | 0.9314 | 0.6784 | 0.6807 |
| 0.2024 | 341.67 | 8200 | 0.9084 | 0.6860 | 0.6867 |
| 0.1994 | 350.0 | 8400 | 0.9240 | 0.6810 | 0.6828 |
| 0.1977 | 358.33 | 8600 | 0.9252 | 0.6843 | 0.6855 |
| 0.1964 | 366.67 | 8800 | 0.9129 | 0.6851 | 0.6863 |
| 0.1933 | 375.0 | 9000 | 0.9265 | 0.6881 | 0.6892 |
| 0.1927 | 383.33 | 9200 | 0.9311 | 0.6839 | 0.6853 |
| 0.1901 | 391.67 | 9400 | 0.9473 | 0.6799 | 0.6821 |
| 0.1899 | 400.0 | 9600 | 0.9397 | 0.6846 | 0.6863 |
| 0.1885 | 408.33 | 9800 | 0.9474 | 0.6838 | 0.6855 |
| 0.1897 | 416.67 | 10000 | 0.9424 | 0.6852 | 0.6867 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
]
| null | 2024-04-16T13:09:46+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_16384\_512\_56M-L32\_all
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6038
* F1 Score: 0.7052
* Accuracy: 0.7052
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | houssemmoslah/mistral_sql_gen6000_merged | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-16T13:11:13+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
null | null |
# Percival_01Shadowm7exp-7B
Percival_01Shadowm7exp-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: AurelPx/Percival_01-7b-slerp
- model: mahiatlinux/ShadowM7EXP-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Percival_01Shadowm7exp-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/Percival_01Shadowm7exp-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T13:11:52+00:00 | []
| []
| TAGS
#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
|
# Percival_01Shadowm7exp-7B
Percival_01Shadowm7exp-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
| [
"# Percival_01Shadowm7exp-7B\n\nPercival_01Shadowm7exp-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
]
| [
"TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n",
"# Percival_01Shadowm7exp-7B\n\nPercival_01Shadowm7exp-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
]
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cackerman/rewrites_gemma7_4bit_ft_full_good | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T13:13:06+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
text-to-image | diffusers | # ai video
<Gallery />
## Download model
[Download](/iorvy2013/videopro/tree/main) them in the Files & versions tab.
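
## Use with diffusers

For programmatic use, a minimal sketch with 🤗 Diffusers follows. Assumptions: the base checkpoint loads through the generic `DiffusionPipeline` class and the LoRA weights are in the standard Diffusers format; neither is confirmed by this card, so treat this as a starting point rather than verified usage.

```python
import torch
from diffusers import DiffusionPipeline

# Base model listed in the card metadata (assumed to load via DiffusionPipeline)
pipe = DiffusionPipeline.from_pretrained(
    "feizhengcong/video-stable-diffusion", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repository
pipe.load_lora_weights("iorvy2013/videopro")

image = pipe("dart monkey").images[0]
image.save("dart_monkey.png")
```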
| {"license": "cc0-1.0", "tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "wrench icon", "parameters": {"negative_prompt": "stick"}, "output": {"url": "images/wrench_icon.png"}}, {"text": "game icon", "parameters": {"negative_prompt": "twisted stick"}, "output": {"url": "images/controller_icon.png"}}, {"text": "balloon", "parameters": {"negative_prompt": "among us"}, "output": {"url": "images/balloon.png"}}, {"text": "ice monkey", "output": {"url": "images/freeze_tower.png"}}, {"text": "ray of doom", "output": {"url": "images/laser_tower.png"}}, {"text": "quantum blaster", "output": {"url": "images/quantum_blaster.png"}}, {"text": "meadow", "output": {"url": "images/super meadow.jpg"}}, {"text": "island", "output": {"url": "images/background.png"}}, {"text": "stupid monkey", "output": {"url": "images/monkee.png"}}, {"text": "boomerang", "parameters": {"negative_prompt": "boomer aang"}, "output": {"url": "images/boomerang_monkey.png"}}, {"text": "tack shooter", "output": {"url": "images/tack_shooter.png"}}, {"text": "dart monkey ", "output": {"url": "images/dart_monkey.png"}}, {"text": "monke", "output": {"url": "images/monke.png"}}, {"text": "monkey king", "output": {"url": "images/Screenshot_2024-04-12_181108-removebg-preview.png"}}, {"text": "monkey in a tank", "output": {"url": "images/churchill.png"}}, {"text": "nature monkey", "output": {"url": "images/obyn.png"}}, {"text": "monkey with a bazooka", "output": {"url": "images/striker_jones.png"}}, {"text": "fire monkey", "output": {"url": "images/gwendolin.png"}}, {"text": "banana", "output": {"url": "images/banana.png"}}, {"text": "hacker", "output": {"url": "images/benjamin.png"}}, {"text": "arrows", "output": {"url": "images/arrow.png"}}, {"text": "super monk", "output": {"url": "images/supermonkey.png"}}, {"text": "dart monkey", "output": {"url": "images/monkey.png"}}, {"text": "dart", "output": {"url": "images/dart.png"}}, {"text": "car", "output": {"url": "images/car.png"}}], "base_model": "feizhengcong/video-stable-diffusion"} | iorvy2013/videopro | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:feizhengcong/video-stable-diffusion",
"license:cc0-1.0",
"region:us"
]
| null | 2024-04-16T13:17:10+00:00 | []
| []
| TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-feizhengcong/video-stable-diffusion #license-cc0-1.0 #region-us
| # ai video
<Gallery />
## Download model
Download them in the Files & versions tab.
| [
"# ai video\n\n<Gallery />",
"## Download model\n\n\nDownload them in the Files & versions tab."
]
| [
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-feizhengcong/video-stable-diffusion #license-cc0-1.0 #region-us \n",
"# ai video\n\n<Gallery />",
"## Download model\n\n\nDownload them in the Files & versions tab."
]
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | NouRed/BioMed-Tuned-Gemma-7b | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T13:17:39+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
null | null |
# WizardLM-2-7B based on reupload at lucyknada/microsoft_WizardLM-2-7B
## GGUFs created with an importance matrix (details below)
This is based on a reupload from an alternate source, as Microsoft deleted the model shortly after release. I will validate checksums once it is re-released to see whether Microsoft made any changes.
Source Model: [lucyknada/microsoft_WizardLM-2-7B](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B)
Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [5dc9dd7152dedc6046b646855585bd070c91e8c8](https://github.com/ggerganov/llama.cpp/commit/5dc9dd7152dedc6046b646855585bd070c91e8c8) (master from 2024-04-09)
Imatrix was generated from the f16 gguf via this command:

```bash
./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
```
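
The resulting imatrix file can then be supplied when producing the quantized GGUFs. A sketch of that step (file names here are illustrative, not the exact artifacts in this repo):

```bash
./quantize --imatrix imat-f16-gmerged.dat WizardLM-2-7B-f16.gguf WizardLM-2-7B-Q4_K_M.gguf Q4_K_M
```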
Using the dataset from [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) | {"license": "apache-2.0", "tags": ["wizardlm", "microsoft", "instruct", "finetune", "gguf", "importance matrix", "imatrix"], "base_model": "lucyknada/microsoft_WizardLM-2-7B", "model-index": [{"name": "Not-WizardLM-2-7B-iMat-GGUF", "results": []}]} | qwp4w3hyb/Not-WizardLM-2-7B-iMat-GGUF | null | [
"gguf",
"wizardlm",
"microsoft",
"instruct",
"finetune",
"importance matrix",
"imatrix",
"base_model:lucyknada/microsoft_WizardLM-2-7B",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T13:17:45+00:00 | []
| []
| TAGS
#gguf #wizardlm #microsoft #instruct #finetune #importance matrix #imatrix #base_model-lucyknada/microsoft_WizardLM-2-7B #license-apache-2.0 #region-us
|
# WizardLM-2-7B based on reupload at lucyknada/microsoft_WizardLM-2-7B
## GGUFs created with an importance matrix (details below)
This is based on a reupload by an alternate source as microsoft deleted the model shortly after release, I will validate checksums after it is released again, to see if MS did any changes.
Source Model: lucyknada/microsoft_WizardLM-2-7B
Quantized with URL commit 5dc9dd7152dedc6046b646855585bd070c91e8c8 (master from 2024-04-09)
Imatrix was generated from the f16 gguf via this command:
./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/URL
Using the dataset from here | [
"# WizardLM-2-7B based on reupload at lucyknada/microsoft_WizardLM-2-7B",
"## GGUFs created with an importance matrix (details below)\n\nThis is based on a reupload by an alternate source as microsoft deleted the model shortly after release, I will validate checksums after it is released again, to see if MS did any changes.\n\nSource Model: lucyknada/microsoft_WizardLM-2-7B\n\nQuantized with URL commit 5dc9dd7152dedc6046b646855585bd070c91e8c8 (master from 2024-04-09)\n\nImatrix was generated from the f16 gguf via this command:\n\n./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/URL\n\nUsing the dataset from here"
]
| [
"TAGS\n#gguf #wizardlm #microsoft #instruct #finetune #importance matrix #imatrix #base_model-lucyknada/microsoft_WizardLM-2-7B #license-apache-2.0 #region-us \n",
"# WizardLM-2-7B based on reupload at lucyknada/microsoft_WizardLM-2-7B",
"## GGUFs created with an importance matrix (details below)\n\nThis is based on a reupload by an alternate source as microsoft deleted the model shortly after release, I will validate checksums after it is released again, to see if MS did any changes.\n\nSource Model: lucyknada/microsoft_WizardLM-2-7B\n\nQuantized with URL commit 5dc9dd7152dedc6046b646855585bd070c91e8c8 (master from 2024-04-09)\n\nImatrix was generated from the f16 gguf via this command:\n\n./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/URL\n\nUsing the dataset from here"
]
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Narkantak/TheBloke-Marcoroni-7B-v3-GPTQ-SonakshiV2 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T13:18:38+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6080
- F1 Score: 0.7190
- Accuracy: 0.7191
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
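For reference, these settings correspond roughly to the following 🤗 Transformers configuration (a sketch only; the actual training script, data pipeline and PEFT setup are not reproduced here, and the batch sizes are taken as per-device values):
```python
from transformers import TrainingArguments
# Rough reconstruction of the listed hyperparameters; output_dir is arbitrary.
training_args = TrainingArguments(
    output_dir="GUE_prom_prom_core_notata-seqsight_16384_512_56M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```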
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6437 | 9.52 | 200 | 0.5879 | 0.6905 | 0.6921 |
| 0.566 | 19.05 | 400 | 0.5655 | 0.7127 | 0.7126 |
| 0.5284 | 28.57 | 600 | 0.5827 | 0.7164 | 0.7175 |
| 0.4963 | 38.1 | 800 | 0.5919 | 0.7291 | 0.7292 |
| 0.4689 | 47.62 | 1000 | 0.6117 | 0.7132 | 0.7155 |
| 0.4466 | 57.14 | 1200 | 0.6078 | 0.7208 | 0.7213 |
| 0.4262 | 66.67 | 1400 | 0.6016 | 0.7125 | 0.7128 |
| 0.4105 | 76.19 | 1600 | 0.6208 | 0.7148 | 0.7157 |
| 0.3924 | 85.71 | 1800 | 0.6302 | 0.7198 | 0.7200 |
| 0.3771 | 95.24 | 2000 | 0.6648 | 0.7098 | 0.7117 |
| 0.3611 | 104.76 | 2200 | 0.6634 | 0.7167 | 0.7172 |
| 0.3442 | 114.29 | 2400 | 0.6783 | 0.7160 | 0.7170 |
| 0.3315 | 123.81 | 2600 | 0.6737 | 0.7148 | 0.7153 |
| 0.316 | 133.33 | 2800 | 0.7334 | 0.7109 | 0.7121 |
| 0.3039 | 142.86 | 3000 | 0.7383 | 0.7126 | 0.7136 |
| 0.291 | 152.38 | 3200 | 0.7175 | 0.7164 | 0.7164 |
| 0.2821 | 161.9 | 3400 | 0.7711 | 0.7056 | 0.7076 |
| 0.2695 | 171.43 | 3600 | 0.7863 | 0.7102 | 0.7109 |
| 0.2604 | 180.95 | 3800 | 0.7833 | 0.7007 | 0.7032 |
| 0.2512 | 190.48 | 4000 | 0.7737 | 0.7036 | 0.7044 |
| 0.2421 | 200.0 | 4200 | 0.7967 | 0.6996 | 0.7011 |
| 0.2346 | 209.52 | 4400 | 0.8259 | 0.6991 | 0.7011 |
| 0.2267 | 219.05 | 4600 | 0.8304 | 0.7062 | 0.7072 |
| 0.2209 | 228.57 | 4800 | 0.8490 | 0.7047 | 0.7057 |
| 0.2143 | 238.1 | 5000 | 0.8871 | 0.7031 | 0.7049 |
| 0.2068 | 247.62 | 5200 | 0.8664 | 0.7052 | 0.7057 |
| 0.2031 | 257.14 | 5400 | 0.8805 | 0.7049 | 0.7062 |
| 0.1972 | 266.67 | 5600 | 0.8870 | 0.7059 | 0.7070 |
| 0.1921 | 276.19 | 5800 | 0.9041 | 0.6996 | 0.7011 |
| 0.1871 | 285.71 | 6000 | 0.8822 | 0.7056 | 0.7062 |
| 0.1822 | 295.24 | 6200 | 0.9064 | 0.7064 | 0.7070 |
| 0.1797 | 304.76 | 6400 | 0.9427 | 0.6997 | 0.7013 |
| 0.1757 | 314.29 | 6600 | 0.9206 | 0.7008 | 0.7021 |
| 0.1715 | 323.81 | 6800 | 0.9360 | 0.7039 | 0.7049 |
| 0.1689 | 333.33 | 7000 | 0.9216 | 0.7040 | 0.7047 |
| 0.165 | 342.86 | 7200 | 0.9498 | 0.7025 | 0.7034 |
| 0.1624 | 352.38 | 7400 | 0.9588 | 0.7046 | 0.7057 |
| 0.16 | 361.9 | 7600 | 0.9227 | 0.7054 | 0.7060 |
| 0.1573 | 371.43 | 7800 | 0.9488 | 0.7076 | 0.7083 |
| 0.1545 | 380.95 | 8000 | 0.9579 | 0.7094 | 0.7102 |
| 0.1513 | 390.48 | 8200 | 0.9748 | 0.7091 | 0.7098 |
| 0.1497 | 400.0 | 8400 | 0.9730 | 0.7035 | 0.7044 |
| 0.1494 | 409.52 | 8600 | 0.9838 | 0.7058 | 0.7066 |
| 0.1473 | 419.05 | 8800 | 0.9767 | 0.7043 | 0.7053 |
| 0.1454 | 428.57 | 9000 | 0.9760 | 0.7093 | 0.7098 |
| 0.1449 | 438.1 | 9200 | 0.9781 | 0.7053 | 0.7062 |
| 0.1434 | 447.62 | 9400 | 0.9756 | 0.7035 | 0.7044 |
| 0.143 | 457.14 | 9600 | 0.9833 | 0.7044 | 0.7053 |
| 0.1421 | 466.67 | 9800 | 0.9851 | 0.7058 | 0.7066 |
| 0.1411 | 476.19 | 10000 | 0.9841 | 0.7062 | 0.7070 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
]
| null | 2024-04-16T13:19:12+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_16384\_512\_56M-L32\_all
================================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6080
* F1 Score: 0.7190
* Accuracy: 0.7191
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0009
- F1 Score: 0.7189
- Accuracy: 0.7194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5289 | 66.67 | 200 | 0.7654 | 0.6822 | 0.6835 |
| 0.2303 | 133.33 | 400 | 1.0177 | 0.6948 | 0.6949 |
| 0.1258 | 200.0 | 600 | 1.2379 | 0.6964 | 0.6966 |
| 0.0824 | 266.67 | 800 | 1.3453 | 0.6998 | 0.6998 |
| 0.0601 | 333.33 | 1000 | 1.4683 | 0.6964 | 0.6966 |
| 0.0469 | 400.0 | 1200 | 1.5224 | 0.7112 | 0.7113 |
| 0.0408 | 466.67 | 1400 | 1.6837 | 0.6965 | 0.6966 |
| 0.0351 | 533.33 | 1600 | 1.5540 | 0.7031 | 0.7031 |
| 0.0297 | 600.0 | 1800 | 1.5673 | 0.7095 | 0.7096 |
| 0.0269 | 666.67 | 2000 | 1.7968 | 0.7194 | 0.7194 |
| 0.0244 | 733.33 | 2200 | 1.7700 | 0.7096 | 0.7096 |
| 0.0225 | 800.0 | 2400 | 1.7323 | 0.7143 | 0.7145 |
| 0.0202 | 866.67 | 2600 | 1.7030 | 0.7227 | 0.7227 |
| 0.0186 | 933.33 | 2800 | 1.7457 | 0.7110 | 0.7113 |
| 0.0173 | 1000.0 | 3000 | 1.7269 | 0.7145 | 0.7145 |
| 0.0173 | 1066.67 | 3200 | 1.7901 | 0.7159 | 0.7162 |
| 0.0153 | 1133.33 | 3400 | 1.8107 | 0.7113 | 0.7113 |
| 0.0155 | 1200.0 | 3600 | 1.6873 | 0.7127 | 0.7129 |
| 0.0142 | 1266.67 | 3800 | 1.7735 | 0.7157 | 0.7162 |
| 0.0142 | 1333.33 | 4000 | 1.5975 | 0.7143 | 0.7145 |
| 0.0132 | 1400.0 | 4200 | 1.8750 | 0.7091 | 0.7096 |
| 0.0138 | 1466.67 | 4400 | 1.8411 | 0.7001 | 0.7015 |
| 0.0122 | 1533.33 | 4600 | 1.7627 | 0.7161 | 0.7162 |
| 0.0108 | 1600.0 | 4800 | 1.8558 | 0.7161 | 0.7162 |
| 0.011 | 1666.67 | 5000 | 1.7713 | 0.7177 | 0.7178 |
| 0.0106 | 1733.33 | 5200 | 1.8469 | 0.7241 | 0.7243 |
| 0.0103 | 1800.0 | 5400 | 1.8326 | 0.7223 | 0.7227 |
| 0.0103 | 1866.67 | 5600 | 1.7185 | 0.7145 | 0.7145 |
| 0.0097 | 1933.33 | 5800 | 1.8333 | 0.7242 | 0.7243 |
| 0.0092 | 2000.0 | 6000 | 1.7296 | 0.7143 | 0.7145 |
| 0.0093 | 2066.67 | 6200 | 1.7575 | 0.7303 | 0.7308 |
| 0.0088 | 2133.33 | 6400 | 1.7917 | 0.7128 | 0.7129 |
| 0.0088 | 2200.0 | 6600 | 1.7169 | 0.7161 | 0.7162 |
| 0.0082 | 2266.67 | 6800 | 1.8671 | 0.7227 | 0.7227 |
| 0.0081 | 2333.33 | 7000 | 1.8179 | 0.7144 | 0.7145 |
| 0.0079 | 2400.0 | 7200 | 1.9120 | 0.7226 | 0.7227 |
| 0.0078 | 2466.67 | 7400 | 2.1171 | 0.7258 | 0.7259 |
| 0.0079 | 2533.33 | 7600 | 1.7741 | 0.7144 | 0.7145 |
| 0.0075 | 2600.0 | 7800 | 2.0177 | 0.7162 | 0.7162 |
| 0.0075 | 2666.67 | 8000 | 1.8506 | 0.7226 | 0.7227 |
| 0.0073 | 2733.33 | 8200 | 1.9724 | 0.7275 | 0.7276 |
| 0.0069 | 2800.0 | 8400 | 1.9028 | 0.7194 | 0.7194 |
| 0.0069 | 2866.67 | 8600 | 1.8591 | 0.7177 | 0.7178 |
| 0.0068 | 2933.33 | 8800 | 1.8039 | 0.7161 | 0.7162 |
| 0.0065 | 3000.0 | 9000 | 1.8519 | 0.7161 | 0.7162 |
| 0.0064 | 3066.67 | 9200 | 1.9131 | 0.7178 | 0.7178 |
| 0.0067 | 3133.33 | 9400 | 1.9480 | 0.7226 | 0.7227 |
| 0.0064 | 3200.0 | 9600 | 1.9210 | 0.7209 | 0.7210 |
| 0.0063 | 3266.67 | 9800 | 1.9218 | 0.7210 | 0.7210 |
| 0.0062 | 3333.33 | 10000 | 1.9093 | 0.7194 | 0.7194 |
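The trained adapter can be applied back onto the base model for inference. A minimal sketch follows, assuming a binary sequence-classification head (promoter vs. non-promoter) and that the base checkpoint loads with the standard Auto classes:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel
base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_16384_512_56M-L32_all"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)  # num_labels is an assumption
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter weights
model.eval()
inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```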
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
]
| null | 2024-04-16T13:19:30+00:00 | []
| []
| TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_16384\_512\_56M-L32\_all
==============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0009
* F1 Score: 0.7189
* Accuracy: 0.7194
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
]
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#dataset used: polinaeterna/pokemon-blip-captions
#code
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import torch
from PIL import Image
import requests
# Load the processor associated with the base checkpoint.
# It handles both modalities: it preprocesses the input image and decodes the generated caption.
checkpoint = "microsoft/git-base"
processor = AutoProcessor.from_pretrained(checkpoint)
device = "cuda" if torch.cuda.is_available() else "cpu"
# Load the fine-tuned captioning model.
model_name = "kr-manish/git-base-pokemon" # Replace with your actual username and model name
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
# Fetch an example image and generate a caption for it.
url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png" # Replace with the URL of your image
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt").to(device)
generated_ids = model.generate(pixel_values=inputs.pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)
# Example output: a pink and purple pokemon character with big eyes
```
# git-base-pokemon
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5797
- Wer Score: 8.9592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 8.155 | 4.17 | 50 | 6.4318 | 25.1325 |
| 5.3386 | 8.33 | 100 | 4.0782 | 18.6484 |
| 3.3109 | 12.5 | 150 | 2.4303 | 9.4306 |
| 2.0471 | 16.67 | 200 | 1.5797 | 8.9592 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/git-base", "model-index": [{"name": "git-base-pokemon", "results": []}]} | kr-manish/fine-tune-image-caption-pokemon | null | [
"transformers",
"tensorboard",
"safetensors",
"git",
"text-generation",
"generated_from_trainer",
"base_model:microsoft/git-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T13:20:17+00:00 | []
| []
| TAGS
#transformers #tensorboard #safetensors #git #text-generation #generated_from_trainer #base_model-microsoft/git-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| #dataset used: polinaeterna/pokemon-blip-captions
#code
git-base-pokemon
================
This model is a fine-tuned version of microsoft/git-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5797
* Wer Score: 8.9592
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#transformers #tensorboard #safetensors #git #text-generation #generated_from_trainer #base_model-microsoft/git-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
|
text-to-image | diffusers |
# AutoTrain SDXL LoRA DreamBooth - rfhuang/ben
<Gallery />
## Model description
These are rfhuang/ben LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use A photo of a person named Ben to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](rfhuang/ben/tree/main) them in the Files & versions tab.
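A minimal inference sketch with 🤗 Diffusers is shown below (assuming a CUDA GPU; scheduler and generation settings are left at their defaults):
```python
import torch
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Load the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("rfhuang/ben")
image = pipe("A photo of a person named Ben", num_inference_steps=30).images[0]
image.save("ben.png")
```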
| {"license": "openrail++", "tags": ["autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "A photo of a person named Ben"} | rfhuang/ben | null | [
"diffusers",
"autotrain",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| null | 2024-04-16T13:22:37+00:00 | []
| []
| TAGS
#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# AutoTrain SDXL LoRA DreamBooth - rfhuang/ben
<Gallery />
## Model description
These are rfhuang/ben LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use A photo of a person named Ben to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
| [
"# AutoTrain SDXL LoRA DreamBooth - rfhuang/ben\n\n<Gallery />",
"## Model description\n\nThese are rfhuang/ben LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use A photo of a person named Ben to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
]
| [
"TAGS\n#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# AutoTrain SDXL LoRA DreamBooth - rfhuang/ben\n\n<Gallery />",
"## Model description\n\nThese are rfhuang/ben LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.",
"## Trigger words\n\nYou should use A photo of a person named Ben to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
]
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": ["unsloth"]} | ramixpe/4ep_adapter | null | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T13:23:31+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-finetuned-intentv2.0
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 50
- mixed_precision_training: Native AMP
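Once trained, the adapter can be loaded together with the base model in a single call. A minimal sketch follows; the intent-classification prompt format is an assumption, since it is not documented here:
```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM
adapter_id = "mohits01/phi-2-finetuned-intentv2.0"
# Loads microsoft/phi-2 and attaches this adapter in one step.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
prompt = "Classify the intent: 'I want to book a flight to Paris.'"  # hypothetical prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```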
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi-2-finetuned-intentv2.0", "results": []}]} | mohits01/phi-2-finetuned-intentv2.0 | null | [
"peft",
"tensorboard",
"safetensors",
"phi",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
]
| null | 2024-04-16T13:23:32+00:00 | []
| []
| TAGS
#peft #tensorboard #safetensors #phi #generated_from_trainer #custom_code #base_model-microsoft/phi-2 #license-mit #region-us
|
# phi-2-finetuned-intentv2.0
This model is a fine-tuned version of microsoft/phi-2 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# phi-2-finetuned-intentv2.0\n\nThis model is a fine-tuned version of microsoft/phi-2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 6\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- num_epochs: 50\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
| [
"TAGS\n#peft #tensorboard #safetensors #phi #generated_from_trainer #custom_code #base_model-microsoft/phi-2 #license-mit #region-us \n",
"# phi-2-finetuned-intentv2.0\n\nThis model is a fine-tuned version of microsoft/phi-2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 6\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- num_epochs: 50\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
]
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/LlamaAdapter-llama2-happy-300-prompt-system | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T13:23:45+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-nl-vl
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the flemish-mozilla-common-voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1278
- Wer: 6.1734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0579 | 1.33 | 1000 | 0.1254 | 7.7204 |
| 0.0235 | 2.67 | 2000 | 0.1159 | 6.5565 |
| 0.0124 | 4.0 | 3000 | 0.1209 | 6.4843 |
| 0.0019 | 5.33 | 4000 | 0.1249 | 6.2891 |
| 0.0012 | 6.67 | 5000 | 0.1278 | 6.1734 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
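For transcription, the checkpoint can be used directly with the ASR pipeline. A minimal sketch follows (the audio file name is a placeholder):
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="fibleep/whisper-small-nl-vl")
# Force Dutch decoding; longer files are handled by chunking.
result = asr("sample_flemish_audio.wav", generate_kwargs={"language": "dutch"}, chunk_length_s=30)
print(result["text"])
```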
| {"language": ["nl"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "whisper-small-nl-vl", "results": []}]} | fibleep/whisper-small-nl-vl | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"nl",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T13:25:17+00:00 | []
| [
"nl"
]
| TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #nl #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us
| whisper-small-nl-vl
===================
This model is a fine-tuned version of openai/whisper-small on the flemish-mozilla-common-voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1278
* Wer: 6.1734
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 5000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
| [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #nl #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
]
|
null | transformers |
# Tokerss/testmoisfimu-Q5_K_M-GGUF
This model was converted to GGUF format from [`Tokerss/testmoisfimu`](https://huggingface.co/Tokerss/testmoisfimu) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Tokerss/testmoisfimu) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Tokerss/testmoisfimu-Q5_K_M-GGUF --model testmoisfimu.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Tokerss/testmoisfimu-Q5_K_M-GGUF --model testmoisfimu.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m testmoisfimu.Q5_K_M.gguf -n 128
```
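To fetch the quantized file programmatically instead, a small sketch with `huggingface_hub` (the filename matches the commands above):
```python
from huggingface_hub import hf_hub_download
# Downloads the GGUF file into the local HF cache and returns its path.
gguf_path = hf_hub_download(
    repo_id="Tokerss/testmoisfimu-Q5_K_M-GGUF",
    filename="testmoisfimu.Q5_K_M.gguf",
)
print(gguf_path)
```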
| {"library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["TheDrummer/Moistral-11B-v2", "Sao10K/Fimbulvetr-11B-v2"]} | Tokerss/testmoisfimu-Q5_K_M-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:TheDrummer/Moistral-11B-v2",
"base_model:Sao10K/Fimbulvetr-11B-v2",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T13:26:12+00:00 | []
| []
| TAGS
#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-TheDrummer/Moistral-11B-v2 #base_model-Sao10K/Fimbulvetr-11B-v2 #endpoints_compatible #region-us
|
# Tokerss/testmoisfimu-Q5_K_M-GGUF
This model was converted to GGUF format from 'Tokerss/testmoisfimu' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
| [
"# Tokerss/testmoisfimu-Q5_K_M-GGUF\nThis model was converted to GGUF format from 'Tokerss/testmoisfimu' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
]
| [
"TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-TheDrummer/Moistral-11B-v2 #base_model-Sao10K/Fimbulvetr-11B-v2 #endpoints_compatible #region-us \n",
"# Tokerss/testmoisfimu-Q5_K_M-GGUF\nThis model was converted to GGUF format from 'Tokerss/testmoisfimu' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
]
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
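The card leaves this section blank. As a minimal sketch based only on the repo metadata (a GPT-2 `text-generation` checkpoint), loading it with the standard pipeline might look like this — the prompt and generation settings are illustrative assumptions, not taken from the card.

```python
# Minimal sketch based only on the repo metadata (GPT-2, text-generation);
# the prompt and max_new_tokens below are illustrative assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Prashant-karwasra/GPT2_short_stroy_generation_model",
)
story = generator("Once upon a time", max_new_tokens=100)[0]["generated_text"]
print(story)
```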
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Prashant-karwasra/GPT2_short_stroy_generation_model | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-16T13:27:31+00:00 | [
"1910.09700"
]
| []
| TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
| [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
]
|