Dataset schema:

| Column | Dtype | Range |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1-900k |
| metadata | stringlengths | 2-438k |
| id | stringlengths | 5-122 |
| last_modified | null | |
| tags | sequencelengths | 1-1.84k |
| sha | null | |
| created_at | stringlengths | 25-25 |
| arxiv | sequencelengths | 0-201 |
| languages | sequencelengths | 0-1.83k |
| tags_str | stringlengths | 17-9.34k |
| text_str | stringlengths | 0-389k |
| text_lists | sequencelengths | 0-722 |
| processed_texts | sequencelengths | 1-723 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5647
- F1 Score: 0.8132
- Accuracy: 0.8132
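For quick inference, a minimal loading sketch with PEFT is given below. The `AutoModel` class and `trust_remote_code=True` are assumptions, since the card does not document how the checkpoint should be loaded:
```python
# Hedged sketch: attach this LoRA adapter to its base model with PEFT.
# AutoModel and trust_remote_code=True are assumptions; swap in the actual
# model class if the base repository documents one.
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_16384_512_56M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModel.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)  # load adapter weights on top
model.eval()
```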
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
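As a rough illustration only, these settings might map onto `transformers.TrainingArguments` as sketched below; the actual training script is not provided, so treat the field choices as assumptions:
```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments.
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the library defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_prom_prom_300_all-seqsight_16384_512_56M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```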
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5686 | 8.33 | 200 | 0.4937 | 0.7658 | 0.7672 |
| 0.4736 | 16.67 | 400 | 0.4723 | 0.7806 | 0.7809 |
| 0.4369 | 25.0 | 600 | 0.4712 | 0.7865 | 0.7867 |
| 0.4014 | 33.33 | 800 | 0.4349 | 0.8004 | 0.8005 |
| 0.3606 | 41.67 | 1000 | 0.4249 | 0.8090 | 0.8091 |
| 0.3351 | 50.0 | 1200 | 0.4414 | 0.8122 | 0.8125 |
| 0.3118 | 58.33 | 1400 | 0.4435 | 0.8125 | 0.8128 |
| 0.2945 | 66.67 | 1600 | 0.4443 | 0.8204 | 0.8204 |
| 0.2779 | 75.0 | 1800 | 0.4247 | 0.8221 | 0.8221 |
| 0.2654 | 83.33 | 2000 | 0.4536 | 0.8214 | 0.8215 |
| 0.2542 | 91.67 | 2200 | 0.4369 | 0.8224 | 0.8225 |
| 0.2431 | 100.0 | 2400 | 0.4653 | 0.8179 | 0.8186 |
| 0.2338 | 108.33 | 2600 | 0.4928 | 0.8157 | 0.8160 |
| 0.2236 | 116.67 | 2800 | 0.4846 | 0.8155 | 0.8162 |
| 0.2193 | 125.0 | 3000 | 0.4841 | 0.8197 | 0.8203 |
| 0.2105 | 133.33 | 3200 | 0.4972 | 0.8189 | 0.8194 |
| 0.2038 | 141.67 | 3400 | 0.5048 | 0.8227 | 0.8228 |
| 0.1947 | 150.0 | 3600 | 0.5390 | 0.8146 | 0.8152 |
| 0.1903 | 158.33 | 3800 | 0.4964 | 0.8265 | 0.8265 |
| 0.1843 | 166.67 | 4000 | 0.5053 | 0.8230 | 0.8230 |
| 0.1813 | 175.0 | 4200 | 0.5241 | 0.8227 | 0.8228 |
| 0.1751 | 183.33 | 4400 | 0.5204 | 0.8241 | 0.8242 |
| 0.1698 | 191.67 | 4600 | 0.5326 | 0.8196 | 0.8198 |
| 0.1645 | 200.0 | 4800 | 0.5261 | 0.8254 | 0.8255 |
| 0.1612 | 208.33 | 5000 | 0.5695 | 0.8221 | 0.8225 |
| 0.1582 | 216.67 | 5200 | 0.5583 | 0.8194 | 0.8198 |
| 0.1539 | 225.0 | 5400 | 0.5441 | 0.8235 | 0.8235 |
| 0.1526 | 233.33 | 5600 | 0.5565 | 0.8228 | 0.8230 |
| 0.1484 | 241.67 | 5800 | 0.5502 | 0.8215 | 0.8216 |
| 0.1431 | 250.0 | 6000 | 0.6173 | 0.8129 | 0.8135 |
| 0.1438 | 258.33 | 6200 | 0.5684 | 0.8199 | 0.8201 |
| 0.1398 | 266.67 | 6400 | 0.5697 | 0.8177 | 0.8179 |
| 0.1362 | 275.0 | 6600 | 0.5935 | 0.8164 | 0.8171 |
| 0.1343 | 283.33 | 6800 | 0.5926 | 0.8206 | 0.8208 |
| 0.1328 | 291.67 | 7000 | 0.5912 | 0.8216 | 0.8220 |
| 0.1297 | 300.0 | 7200 | 0.6048 | 0.8172 | 0.8177 |
| 0.128 | 308.33 | 7400 | 0.6068 | 0.8169 | 0.8176 |
| 0.1266 | 316.67 | 7600 | 0.6094 | 0.8191 | 0.8196 |
| 0.1254 | 325.0 | 7800 | 0.6123 | 0.8199 | 0.8203 |
| 0.1215 | 333.33 | 8000 | 0.6221 | 0.8228 | 0.8231 |
| 0.1216 | 341.67 | 8200 | 0.6206 | 0.8208 | 0.8213 |
| 0.1184 | 350.0 | 8400 | 0.6036 | 0.8252 | 0.8253 |
| 0.1191 | 358.33 | 8600 | 0.6289 | 0.8216 | 0.8220 |
| 0.1173 | 366.67 | 8800 | 0.6381 | 0.8199 | 0.8204 |
| 0.117 | 375.0 | 9000 | 0.6205 | 0.8219 | 0.8223 |
| 0.1153 | 383.33 | 9200 | 0.6323 | 0.8189 | 0.8194 |
| 0.1124 | 391.67 | 9400 | 0.6287 | 0.8232 | 0.8235 |
| 0.1138 | 400.0 | 9600 | 0.6192 | 0.8228 | 0.8231 |
| 0.1142 | 408.33 | 9800 | 0.6188 | 0.8216 | 0.8220 |
| 0.1133 | 416.67 | 10000 | 0.6225 | 0.8221 | 0.8225 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-16T13:29:44+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_16384\_512\_56M-L32\_all
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5647
* F1 Score: 0.8132
* Accuracy: 0.8132
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.01-len_4-filtered
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.01-len_4-filtered", "results": []}]} | Shalazary/ruBert-base-sberquad-0.01-len_4-filtered | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T13:31:01+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.01-len_4-filtered
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# ruBert-base-sberquad-0.01-len_4-filtered\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.01-len_4-filtered\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | null |
This is just an `imatrix` file I created using `groups_merged.txt`:
```
./imatrix --chunks 105 -m WizardLM-2:8x22b-f16.gguf -f groups_merged.txt -o WizardLM-2:8x22b-f16.imatrix
```
NOTE: 105 chunks is the maximum for `groups_merged.txt` when using the (recommended) default `n_ctx=512`.
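Once generated, the imatrix can be passed to llama.cpp's `quantize` tool to produce weighted quants. A hedged sketch, driven from Python, follows; the output name and `Q4_K_M` target are illustrative, and the binary name may differ between llama.cpp versions:
```python
# Hedged sketch: quantize the f16 model using the imatrix produced above.
# Paths, output filename, and the Q4_K_M target are assumptions.
import subprocess

subprocess.run(
    [
        "./quantize",
        "--imatrix", "WizardLM-2:8x22b-f16.imatrix",  # importance matrix from above
        "WizardLM-2:8x22b-f16.gguf",                  # full-precision input model
        "WizardLM-2:8x22b-Q4_K_M.gguf",               # quantized output (illustrative)
        "Q4_K_M",                                     # target quantization type
    ],
    check=True,
)
```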
| {"license": "apache-2.0"} | jukofyork/WizardLM-2-8x22B-imatrix | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T13:32:43+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
This is just an 'imatrix' file I created using 'groups_merged.txt':
NOTE: 105 chunks is the maximum for 'groups_merged.txt' when using the (recommended) default 'n_ctx=512'.
| [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
text-to-image | null |
## shoejob
<img src="https://via.placeholder.com/468x300?text=App+Screenshot+Here" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - A footjob, but with shoes on. Even more niche!
Mostly works with shiny high heels, platforms, and boots. Uses regular footjob images from my footjob lora to help reinforce the footjob concept, so this model is also good at plain old feet. Kept relatively simple for ease of use: All pov, all women, basically all centered.
Understands a tiny bit of posing, such as laying back / sitting up, laying on the floor, from above / sitting on a couch, from below, knees apart / knees together. Also trained for cum on shoes.
It could be improved or expanded in lots of ways, but it's been sitting for a while so I'm going ahead and uploading it for y'all.
[](https://imagepipeline.io/models/shoejob?id=11d7b443-3df2-4c53-afb4-89fa02be2b7b/)
## How to try this model?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "sd1.5",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "11d7b443-3df2-4c53-afb4-89fa02be2b7b",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`:
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
| {"license": "creativeml-openrail-m", "tags": ["imagepipeline", "imagepipeline.io", "text-to-image", "ultra-realistic"], "pinned": false, "pipeline_tag": "text-to-image"} | imagepipeline/shoejob | null | [
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-04-16T13:32:50+00:00 | [] | [] | TAGS
#imagepipeline #imagepipeline.io #text-to-image #ultra-realistic #license-creativeml-openrail-m #region-us
| shoejob
-------
<img src="URL alt="Generated on Image Pipeline" style="border-radius: 10px;">
This lora model is uploaded on URL
Model details - A footjob, but with shoes on. Even more niche!
Mostly works with shiny high heels, platforms, and boots. Uses regular footjob images from my footjob lora to help reinforce the footjob concept, so this model is also good at plain old feet. Kept relatively simple for ease of use: All pov, all women, basically all centered.
Understands a tiny bit of posing, such as laying back / sitting up, laying on the floor, from above / sitting on a couch, from below, knees apart / knees together. Also trained for cum on shoes.
It could be improved or expanded in lots of ways, but it's been sitting for a while so I'm going ahead and uploading it for y'all.
 for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-i1-GGUF/resolve/main/stairolzlv-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-i1-GGUF/resolve/main/stairolzlv-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-i1-GGUF/resolve/main/stairolzlv-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-i1-GGUF/resolve/main/stairolzlv-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-i1-GGUF/resolve/main/stairolzlv-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-i1-GGUF/resolve/main/stairolzlv-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-i1-GGUF/resolve/main/stairolzlv-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-i1-GGUF/resolve/main/stairolzlv-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/stairolzlv-70b-i1-GGUF/resolve/main/stairolzlv-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/stairolzlv-70b-i1-GGUF/resolve/main/stairolzlv-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/stairolzlv-70b-i1-GGUF/resolve/main/stairolzlv-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
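Concatenating the multi-part files is plain byte-level joining, i.e. the equivalent of `cat part1 part2 > out`. A minimal Python sketch for the split Q6_K files above:
```python
# Minimal sketch: join the two Q6_K parts into a single GGUF file by byte
# concatenation. Filenames match the table above; adjust paths as needed.
parts = [
    "stairolzlv-70b.i1-Q6_K.gguf.part1of2",
    "stairolzlv-70b.i1-Q6_K.gguf.part2of2",
]
with open("stairolzlv-70b.i1-Q6_K.gguf", "wb") as out:
    for name in parts:
        with open(name, "rb") as part:
            while chunk := part.read(1 << 20):  # copy in 1 MiB chunks
                out.write(chunk)
```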
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama2", "library_name": "transformers", "base_model": "uncensorie/stairolzlv-70b", "no_imatrix": "GGML_ASSERT: llama.cpp/ggml-quants.c:11239: grid_index >= 0", "quantized_by": "mradermacher"} | mradermacher/stairolzlv-70b-i1-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:uncensorie/stairolzlv-70b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T13:33:52+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-uncensorie/stairolzlv-70b #license-llama2 #endpoints_compatible #region-us
| About
-----
weighted/imatrix quants of URL
No more quants are incoming, as URL crashes when generating them.
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-uncensorie/stairolzlv-70b #license-llama2 #endpoints_compatible #region-us \n"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [MaziyarPanahi/Calme-7B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.9)
* [amazingvince/Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: amazingvince/Not-WizardLM-2-7B
  - model: MaziyarPanahi/Calme-7B-Instruct-v0.9
merge_method: slerp
base_model: amazingvince/Not-WizardLM-2-7B
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: Not-WizardLM-2-7B for input & output, Calme-7B-Instruct-v0.9 in the middle layers
```
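For intuition, the sketch below shows what spherical linear interpolation does to a pair of weight tensors at a given `t`; it is a simplified illustration of the operation the config parameterizes, not mergekit's exact implementation:
```python
# Simplified SLERP between two weight tensors: t=0 returns a, t=1 returns b.
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    a_unit = a.ravel() / (np.linalg.norm(a) + eps)
    b_unit = b.ravel() / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(a_unit @ b_unit, -1.0, 1.0))  # angle between tensors
    if omega < eps:  # (near-)parallel tensors: fall back to linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# The config's t schedule is V-shaped across layer groups: 0 at the input and
# output (pure base model), rising to 1 in the middle layers (pure second model).
```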
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["MaziyarPanahi/Calme-7B-Instruct-v0.9", "amazingvince/Not-WizardLM-2-7B"]} | mergekit-community/mergekit-slerp-exkkzvd | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.9",
"base_model:amazingvince/Not-WizardLM-2-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T13:34:52+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #base_model-MaziyarPanahi/Calme-7B-Instruct-v0.9 #base_model-amazingvince/Not-WizardLM-2-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* MaziyarPanahi/Calme-7B-Instruct-v0.9
* amazingvince/Not-WizardLM-2-7B
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* MaziyarPanahi/Calme-7B-Instruct-v0.9\n* amazingvince/Not-WizardLM-2-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #base_model-MaziyarPanahi/Calme-7B-Instruct-v0.9 #base_model-amazingvince/Not-WizardLM-2-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* MaziyarPanahi/Calme-7B-Instruct-v0.9\n* amazingvince/Not-WizardLM-2-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_o_pedes_environment | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:35:08+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# saved_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
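A minimal usage sketch with the `transformers` pipeline API is shown below; the repo id is taken from the card's metadata, and the prompt and sampling settings are illustrative:
```python
# Hedged sketch: generate a short story with this checkpoint via the
# text-generation pipeline. Prompt and decoding parameters are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="Prashant-karwasra/short_story_generation")
result = generator("Once upon a time", max_new_tokens=100, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```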
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "saved_model", "results": []}]} | Prashant-karwasra/short_story_generation | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T13:35:34+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# saved_model
This model is a fine-tuned version of gpt2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# saved_model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 11",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# saved_model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 11",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | DisgustingOzil/FineTuned_TROCR | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T13:36:08+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_o_pedes_location | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:38:09+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# slava
This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 30.7275
- eval_runtime: 660.2038
- eval_samples_per_second: 1.212
- eval_steps_per_second: 1.212
- epoch: 0.1
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
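
These settings map onto `transformers.TrainingArguments` roughly as in the sketch below. The model and dataset wiring is omitted (the training data is unknown), the output path is a placeholder, and the Adam betas and epsilon listed above are the library defaults, so they need no explicit flags.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is hypothetical.
args = TrainingArguments(
    output_dir="slava",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=8,  # effective train batch size: 1 x 8 = 8
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```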
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "llava-hf/llava-1.5-7b-hf", "model-index": [{"name": "slava", "results": []}]} | PranavBP525/slava | null | [
"peft",
"tensorboard",
"safetensors",
"llava",
"trl",
"sft",
"generated_from_trainer",
"base_model:llava-hf/llava-1.5-7b-hf",
"8-bit",
"region:us"
] | null | 2024-04-16T13:38:25+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #llava #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #8-bit #region-us
|
# slava
This model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 30.7275
- eval_runtime: 660.2038
- eval_samples_per_second: 1.212
- eval_steps_per_second: 1.212
- epoch: 0.1
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# slava\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 30.7275\n- eval_runtime: 660.2038\n- eval_samples_per_second: 1.212\n- eval_steps_per_second: 1.212\n- epoch: 0.1\n- step: 1000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #llava #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #8-bit #region-us \n",
"# slava\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 30.7275\n- eval_runtime: 660.2038\n- eval_samples_per_second: 1.212\n- eval_steps_per_second: 1.212\n- epoch: 0.1\n- step: 1000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6590
- F1 Score: 0.6148
- Accuracy: 0.6142
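
The card ships no usage snippet. As a rough sketch, the adapter can be loaded on top of the base checkpoint with PEFT; the Auto class and `num_labels=2` are assumptions (the F1/accuracy metrics suggest binary sequence classification), and the base model may additionally need `trust_remote_code=True`.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_16384_512_56M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# Assumed head: binary sequence classification (see F1/accuracy above).
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```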
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6697 | 11.11 | 200 | 0.6459 | 0.6277 | 0.6269 |
| 0.6003 | 22.22 | 400 | 0.6934 | 0.6105 | 0.6097 |
| 0.555 | 33.33 | 600 | 0.7179 | 0.6033 | 0.6045 |
| 0.5179 | 44.44 | 800 | 0.7867 | 0.6116 | 0.6097 |
| 0.4838 | 55.56 | 1000 | 0.7809 | 0.6018 | 0.5994 |
| 0.454 | 66.67 | 1200 | 0.7791 | 0.6085 | 0.6100 |
| 0.4331 | 77.78 | 1400 | 0.8767 | 0.6061 | 0.6033 |
| 0.414 | 88.89 | 1600 | 0.8371 | 0.6080 | 0.6064 |
| 0.3938 | 100.0 | 1800 | 0.8529 | 0.6009 | 0.5982 |
| 0.3779 | 111.11 | 2000 | 0.8368 | 0.6152 | 0.6154 |
| 0.3638 | 122.22 | 2200 | 0.9023 | 0.6126 | 0.6118 |
| 0.3472 | 133.33 | 2400 | 0.9253 | 0.6169 | 0.6166 |
| 0.3349 | 144.44 | 2600 | 0.9195 | 0.6030 | 0.6003 |
| 0.3201 | 155.56 | 2800 | 0.9672 | 0.6102 | 0.6076 |
| 0.3101 | 166.67 | 3000 | 0.9475 | 0.6081 | 0.6067 |
| 0.3009 | 177.78 | 3200 | 0.9801 | 0.6059 | 0.6070 |
| 0.2882 | 188.89 | 3400 | 0.9574 | 0.6048 | 0.6021 |
| 0.276 | 200.0 | 3600 | 0.9718 | 0.6087 | 0.6112 |
| 0.2699 | 211.11 | 3800 | 0.9806 | 0.6064 | 0.6061 |
| 0.2573 | 222.22 | 4000 | 1.0124 | 0.6092 | 0.6115 |
| 0.2534 | 233.33 | 4200 | 1.0417 | 0.6062 | 0.6045 |
| 0.242 | 244.44 | 4400 | 1.0477 | 0.6135 | 0.6115 |
| 0.2359 | 255.56 | 4600 | 1.0421 | 0.6067 | 0.6061 |
| 0.2302 | 266.67 | 4800 | 1.0248 | 0.6046 | 0.6027 |
| 0.221 | 277.78 | 5000 | 1.0949 | 0.6128 | 0.6142 |
| 0.2165 | 288.89 | 5200 | 1.0660 | 0.6116 | 0.6106 |
| 0.2087 | 300.0 | 5400 | 1.0838 | 0.6070 | 0.6051 |
| 0.2043 | 311.11 | 5600 | 1.1502 | 0.6111 | 0.6118 |
| 0.1997 | 322.22 | 5800 | 1.1312 | 0.6078 | 0.6064 |
| 0.1931 | 333.33 | 6000 | 1.1338 | 0.6045 | 0.6061 |
| 0.1907 | 344.44 | 6200 | 1.1509 | 0.6091 | 0.6073 |
| 0.1851 | 355.56 | 6400 | 1.1678 | 0.6088 | 0.6097 |
| 0.1811 | 366.67 | 6600 | 1.1535 | 0.6054 | 0.6042 |
| 0.1776 | 377.78 | 6800 | 1.1475 | 0.6102 | 0.6088 |
| 0.1732 | 388.89 | 7000 | 1.1601 | 0.6099 | 0.6088 |
| 0.1694 | 400.0 | 7200 | 1.1649 | 0.6050 | 0.6039 |
| 0.1667 | 411.11 | 7400 | 1.1752 | 0.6127 | 0.6121 |
| 0.1638 | 422.22 | 7600 | 1.2242 | 0.6129 | 0.6115 |
| 0.1633 | 433.33 | 7800 | 1.1803 | 0.6069 | 0.6070 |
| 0.1588 | 444.44 | 8000 | 1.2182 | 0.6098 | 0.6094 |
| 0.157 | 455.56 | 8200 | 1.2091 | 0.6075 | 0.6064 |
| 0.154 | 466.67 | 8400 | 1.2261 | 0.6099 | 0.6085 |
| 0.152 | 477.78 | 8600 | 1.2242 | 0.6113 | 0.6118 |
| 0.1501 | 488.89 | 8800 | 1.2262 | 0.6091 | 0.6082 |
| 0.1486 | 500.0 | 9000 | 1.2334 | 0.6081 | 0.6061 |
| 0.1471 | 511.11 | 9200 | 1.2229 | 0.6093 | 0.6091 |
| 0.1461 | 522.22 | 9400 | 1.2263 | 0.6080 | 0.6073 |
| 0.1466 | 533.33 | 9600 | 1.2236 | 0.6095 | 0.6082 |
| 0.143 | 544.44 | 9800 | 1.2325 | 0.6103 | 0.6094 |
| 0.1454 | 555.56 | 10000 | 1.2345 | 0.6091 | 0.6082 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-16T13:41:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_16384\_512\_56M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6590
* F1 Score: 0.6148
* Accuracy: 0.6142
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-33-layer](https://huggingface.co/Citaman/command-r-33-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-33-layer
layer_range: [0, 32]
- model: Citaman/command-r-33-layer
layer_range: [1, 33]
merge_method: slerp
base_model: Citaman/command-r-33-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
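
In the `t` schedule, 0 roughly favors the first listed source and 1 the second, with the listed values interpolated across the layer stack. The merged checkpoint then loads like any other transformers causal LM; a minimal sketch, assuming a transformers release with Cohere support (>= 4.39) and enough memory for the bf16 weights:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Citaman/command-r-32-layer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# dtype matches the merge config above.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
```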
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-33-layer"]} | Citaman/command-r-32-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-33-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T13:42:10+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-33-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-33-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-33-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-33-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-33-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
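
No snippet is given; a minimal sketch for loading this adapter on top of its base model with PEFT. The repository IDs come from this card's metadata, and Qwen-VL-Chat ships custom modeling code, hence `trust_remote_code=True`.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen-VL-Chat"
adapter_id = "trinhxuankhai/origin_o_pedes_attention"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)
```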
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_o_pedes_attention | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:42:13+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 30
- mixed_precision_training: Native AMP
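
As a rough sketch, the settings above correspond to the following `TrainingArguments`; `fp16=True` stands in for the native-AMP mixed precision noted above, and the output path is a placeholder.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is hypothetical.
args = TrainingArguments(
    output_dir="results",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 4 x 2 = 8
    lr_scheduler_type="linear",
    max_steps=30,
    fp16=True,  # "Native AMP" mixed precision
)
```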
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4616 | 0.0 | 1 | 1.4223 |
| 1.4337 | 0.0 | 2 | 1.4204 |
| 1.6083 | 0.0 | 3 | 1.4186 |
| 1.0152 | 0.0 | 4 | 1.4168 |
| 1.5549 | 0.0 | 5 | 1.4150 |
| 1.4039 | 0.0 | 6 | 1.4132 |
| 1.0972 | 0.01 | 7 | 1.4115 |
| 1.4686 | 0.01 | 8 | 1.4098 |
| 1.3683 | 0.01 | 9 | 1.4081 |
| 1.2799 | 0.01 | 10 | 1.4065 |
| 1.2553 | 0.01 | 11 | 1.4048 |
| 1.3466 | 0.01 | 12 | 1.4032 |
| 1.1299 | 0.01 | 13 | 1.4016 |
| 1.8492 | 0.01 | 14 | 1.4000 |
| 1.3812 | 0.01 | 15 | 1.3985 |
| 1.1716 | 0.01 | 16 | 1.3970 |
| 1.1015 | 0.01 | 17 | 1.3955 |
| 1.5655 | 0.01 | 18 | 1.3942 |
| 1.4379 | 0.02 | 19 | 1.3930 |
| 1.2552 | 0.02 | 20 | 1.3918 |
| 1.1698 | 0.02 | 21 | 1.3907 |
| 1.3563 | 0.02 | 22 | 1.3897 |
| 1.6058 | 0.02 | 23 | 1.3889 |
| 1.4902 | 0.02 | 24 | 1.3881 |
| 1.6846 | 0.02 | 25 | 1.3874 |
| 1.2315 | 0.02 | 26 | 1.3868 |
| 1.0901 | 0.02 | 27 | 1.3863 |
| 1.2795 | 0.02 | 28 | 1.3860 |
| 1.1802 | 0.02 | 29 | 1.3857 |
| 1.2028 | 0.02 | 30 | 1.3856 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "results", "results": []}]} | kta-dev/results | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T13:42:39+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
| results
=======
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3856
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null |
# Qwen1.5 Chat 4B for RK3588
This is a conversion from https://huggingface.co/Qwen/Qwen1.5-4B-Chat to the RKLLM format for Rockchip devices.
It runs on the NPU of the RK3588.
# Main repo
See this for my full collection of converted LLMs for the RK3588's NPU:
https://huggingface.co/Pelochus/ezrkllm-collection
# License
Same as the original LLM https://huggingface.co/Qwen/Qwen1.5-4B-Chat | {"tags": ["qwen", "qwen1.5", "rkllm", "rockchip", "rk3588"]} | Pelochus/qwen1.5-chat-4B-rk3588 | null | [
"qwen",
"qwen1.5",
"rkllm",
"rockchip",
"rk3588",
"region:us"
] | null | 2024-04-16T13:43:09+00:00 | [] | [] | TAGS
#qwen #qwen1.5 #rkllm #rockchip #rk3588 #region-us
|
# Qwen1.5 Chat 4B for RK3588
This is a conversion from URL to the RKLLM format for Rockchip devices.
It runs on the NPU of the RK3588.
# Main repo
See this for my full collection of converted LLMs for the RK3588's NPU:
URL
# License
Same as the original LLM URL | [
"# Qwen1.5 Chat 4B for RK3588\nThis is a conversion from URL to the RKLLM format for Rockchip devices. \nThis runs on the NPU from the RK3588",
"# Main repo\nSee this for my full collection of converted LLMs for the RK3588's NPU:\n\nURL",
"# License\nSame as the original LLM URL"
] | [
"TAGS\n#qwen #qwen1.5 #rkllm #rockchip #rk3588 #region-us \n",
"# Qwen1.5 Chat 4B for RK3588\nThis is a conversion from URL to the RKLLM format for Rockchip devices. \nThis runs on the NPU from the RK3588",
"# Main repo\nSee this for my full collection of converted LLMs for the RK3588's NPU:\n\nURL",
"# License\nSame as the original LLM URL"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mpasila/SeaMax-7B
<!-- provided-files -->
weighted/imatrix quants are not available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
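
For a quick local test, one option is the `llama-cpp-python` bindings. A minimal sketch, using the Q4_K_M file from the Provided Quants table below (any of the listed files works; the prompt is just an example):

```python
from llama_cpp import Llama

# Path assumes the GGUF file was downloaded from this repo.
llm = Llama(model_path="SeaMax-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a haiku about the sea.", max_tokens=64)
print(out["choices"][0]["text"])
```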
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SeaMax-7B-GGUF/resolve/main/SeaMax-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "mpasila/SeaMax-7B", "quantized_by": "mradermacher"} | mradermacher/SeaMax-7B-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mpasila/SeaMax-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T13:43:30+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mergekit #merge #en #base_model-mpasila/SeaMax-7B #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are not available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #mergekit #merge #en #base_model-mpasila/SeaMax-7B #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
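
No snippet is provided; a minimal, hedged example with the transformers `pipeline` (the stablelm architecture requires a reasonably recent transformers release, and the prompt format is a guess since the training setup is unspecified):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="heyllm234/sc44")
print(generator("Hello, how are you?", max_new_tokens=50)[0]["generated_text"])
```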
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | heyllm234/sc44 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T13:43:38+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
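
No snippet is given for this adapter. A hedged sketch that loads it and calls Qwen-VL-Chat's `chat()` helper from its remote code; the image path and prompt are hypothetical, and this assumes the PEFT wrapper forwards the custom `chat()` method, as it normally does.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen-VL-Chat"
adapter_id = "trinhxuankhai/origin_o_pedes_rewrite"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

# from_list_format() and chat() come with Qwen-VL-Chat's remote code.
query = tokenizer.from_list_format([
    {"image": "pedestrian.jpg"},            # hypothetical local image
    {"text": "Describe this pedestrian."},  # hypothetical prompt
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```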
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_o_pedes_rewrite | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:44:11+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
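In the absence of a provided snippet, a minimal sketch for loading this adapter on top of the Qwen/Qwen-VL-Chat base model with PEFT (the adapter repo id is assumed from this card's metadata; dtype and device placement are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen-VL-Chat"                          # base model from this card's metadata
adapter_id = "trinhxuankhai/origin_o_vehicle_rewrite"  # assumed adapter repo id for this card

# Qwen-VL-Chat ships custom modeling code, so trust_remote_code is required.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the PEFT adapter weights
model.eval()
```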
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_o_vehicle_rewrite | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:45:20+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
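The same PEFT-on-Qwen-VL-Chat loading pattern as the sibling adapters applies; only the adapter repo id (assumed from this card's metadata) changes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(base, "trinhxuankhai/origin_v_pedes_appearance")  # assumed repo id
```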
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_v_pedes_appearance | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:48:24+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-math-ver1
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1289
- eval_precision: 0.9722
- eval_recall: 0.9722
- eval_f1: 0.9722
- eval_accuracy: 0.9722
- eval_runtime: 7.4496
- eval_samples_per_second: 169.136
- eval_steps_per_second: 14.095
- epoch: 5.0
- step: 2100
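A minimal sketch of running inference with this checkpoint via the text-classification pipeline (the label meanings are not documented in this card, so the output should be interpreted against the training setup):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="obamaTeo/deberta-v3-large-math-ver1")
print(clf("x^2 + 2x + 1 = 0"))  # returns a list of {label, score} dicts
```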
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
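For reference, a minimal sketch of how these hyperparameters map onto transformers' `TrainingArguments` (model and dataset wiring omitted; `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="deberta-v3-large-math-ver1",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults
)
```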
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "deberta-v3-large-math-ver1", "results": []}]} | obamaTeo/deberta-v3-large-math-ver1 | null | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T13:50:04+00:00 | [] | [] | TAGS
#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-large #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# deberta-v3-large-math-ver1
This model is a fine-tuned version of microsoft/deberta-v3-large on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1289
- eval_precision: 0.9722
- eval_recall: 0.9722
- eval_f1: 0.9722
- eval_accuracy: 0.9722
- eval_runtime: 7.4496
- eval_samples_per_second: 169.136
- eval_steps_per_second: 14.095
- epoch: 5.0
- step: 2100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# deberta-v3-large-math-ver1\n\nThis model is a fine-tuned version of microsoft/deberta-v3-large on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.1289\n- eval_precision: 0.9722\n- eval_recall: 0.9722\n- eval_f1: 0.9722\n- eval_accuracy: 0.9722\n- eval_runtime: 7.4496\n- eval_samples_per_second: 169.136\n- eval_steps_per_second: 14.095\n- epoch: 5.0\n- step: 2100",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 12\n- eval_batch_size: 12\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-large #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# deberta-v3-large-math-ver1\n\nThis model is a fine-tuned version of microsoft/deberta-v3-large on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.1289\n- eval_precision: 0.9722\n- eval_recall: 0.9722\n- eval_f1: 0.9722\n- eval_accuracy: 0.9722\n- eval_runtime: 7.4496\n- eval_samples_per_second: 169.136\n- eval_steps_per_second: 14.095\n- epoch: 5.0\n- step: 2100",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 12\n- eval_batch_size: 12\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
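The same PEFT-on-Qwen-VL-Chat loading pattern as the sibling adapters applies; only the adapter repo id (assumed from this card's metadata) changes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(base, "trinhxuankhai/origin_v_pedes_environment")  # assumed repo id
```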
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_v_pedes_environment | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:51:16+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
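The same PEFT-on-Qwen-VL-Chat loading pattern as the sibling adapters applies; only the adapter repo id (assumed from this card's metadata) changes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(base, "trinhxuankhai/origin_v_pedes_location")  # assumed repo id
```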
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_v_pedes_location | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:52:30+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
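SLERP (spherical linear interpolation) blends the two sets of weights along the arc of a great circle rather than a straight line, which preserves the scale of the interpolated parameters. For reference, with blend factor $t \in [0, 1]$ and angle $\Omega$ between the flattened weight vectors $p_0$ and $p_1$:

$$
\mathrm{slerp}(p_0, p_1; t) = \frac{\sin\big((1-t)\,\Omega\big)}{\sin\Omega}\,p_0 + \frac{\sin(t\,\Omega)}{\sin\Omega}\,p_1,
\qquad \cos\Omega = \frac{p_0 \cdot p_1}{\lVert p_0 \rVert\,\lVert p_1 \rVert}.
$$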
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-32-layer](https://huggingface.co/Citaman/command-r-32-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-32-layer
layer_range: [0, 31]
- model: Citaman/command-r-32-layer
layer_range: [1, 32]
merge_method: slerp
base_model: Citaman/command-r-32-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
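The merged checkpoint can be loaded like any other causal LM. A minimal sketch (the repo id is this card's; `bfloat16` matches the `dtype` in the configuration above, and `device_map="auto"` assumes accelerate is installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Citaman/command-r-31-layer"  # this merged model
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```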
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-32-layer"]} | Citaman/command-r-31-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-32-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T13:52:36+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-32-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-32-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-32-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-32-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-32-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
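The same PEFT-on-Qwen-VL-Chat loading pattern as the sibling adapters applies; only the adapter repo id (assumed from this card's metadata) changes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(base, "trinhxuankhai/origin_v_pedes_attention")  # assumed repo id
```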
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_v_pedes_attention | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:53:22+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
feature-extraction | transformers |
# 🧬 ChAda-ViT: Channel Adaptive Vision Transformer
Official PyTorch implementation and pretrained models of ChAda-ViT. For details, see **ChAda-ViT: Channel Adaptive Attention for Joint Representation Learning of Heterogeneous Microscopy Images**
[[`arXiv`](https://arxiv.org/abs/2311.15264)]
## 🚀 Introduction
_**"How could we process images of different modalities, with different number of channels, and of different types all within one single Vision Transformer model ?"**_
**ChAda-ViT (Channel Adaptive Vision Transformer)** is meant to address a tricky challenge encountered in biological imaging: images span a variety of modalities, each with a different number, order, and type of channels, often bearing little correlation to each other. This complexity has long been a hurdle in the field.
**Our Solution:** ChAda-ViT utilizes an Intra-Channel & Inter-Channel Attention mechanism, tailored to handle images regardless of their channel diversity. This allows for the effective analysis of images with 1 to 10 channels per experiment, spanning 7 different microscope modalities.
**IDRCell100k Dataset:** Alongside ChAda-ViT, we introduce IDRCell100k, a comprehensive bioimage dataset encompassing 79 experiments coming from 7 different imaging methods. This rich resource is designed to fully leverage the capabilities of ChAda-ViT, offering an unprecedented diversity in microscopy and channel types.
**Impact:** Trained in a self-supervised manner, ChAda-ViT sets new benchmarks in biological image analysis. It not only excels in various biologically relevant tasks but also pioneers in bridging the gap across different assays. Whether it's varying microscopes, channel numbers, or types, ChAda-ViT offers a unified, powerful representation for biological images. This paves the way for enhanced interdisciplinary studies and broadens the horizon for deep learning applications in bioimage-based research.
<div align="center">
<img width="100%" alt="ChAda-ViT architecture" src="docs/chada_vit.png">
</div>
## 🗾 Dataset
The IDRCell100k dataset is a comprehensive collection of biological images, meticulously curated to represent a broad spectrum of microscopy techniques and channel configurations. It comprises 79 different experiments, utilizing 7 types of microscopy techniques, with images featuring channel counts ranging from 1 to 10. Each experiment contributes 1300 images, culminating in a total of 104,093 multiplexed images, each resized to 224x224 pixels. This dataset, unique in its diversity and scale, provides an invaluable resource for the development and validation of advanced image analysis models like ChAda-ViT, enhancing their capability to adapt to various imaging conditions and channel complexities in biological research.
Dataset available soon...
<div align="center">
<img width="70%" alt="IDRCell100k dataset samples" src="docs/idrcell100k.png">
</div>
## 📈 Results
This section provides a snapshot of the model's capabilities, with the paper offering a deeper dive into these groundbreaking findings.
For detailed analyses, comprehensive results, and in-depth discussions, please refer to the full paper.
### Classic Benchmarks
ChAda-ViT exhibits exceptional performance across a range of classical biological image benchmarks. Its advanced architecture allows for precise and efficient analysis, outperforming existing models in accuracy and computational efficiency. This highlights the model's significant contribution to the field of bioimaging.
<div align="center">
<img width="50%" alt="Vizualization of attention maps" src="docs/classic_benchmarks.png">
</div>
### Visualization of Attention Maps
The model's innovative Inter-Channel Attention mechanism is visualized, demonstrating its effectiveness in focusing on crucial features within diverse channel types. These visualizations provide insights into the model's internal processing, revealing how it distinguishes and prioritizes different aspects of biological images.
<div align="center">
<img width="80%" alt="Vizualization of attention maps" src="docs/attn_viz.png">
</div>
### Single Joint Embedding Space
ChAda-ViT uniquely embeds images from various modalities into a single, coherent representation space. This feature underscores the model's versatility and its ability to handle images from different microscopes, channel numbers, or types, facilitating a more unified approach in biological image analysis.
<div align="center">
<img width="60%" alt="Projection into a single joint embedding space" src="docs/single_joint_embedding_space.png">
</div>
## ⬇️ Installation
Clone the repository from GitHub:
```bash
git clone https://github.com/nicoboou/chada_vit.git
cd chada_vit
```
Use [Poetry](https://python-poetry.org/docs/#installation) to install the Python dependencies (via pip). Poetry creates a virtual environment in a default location (`~/.cache/pypoetry/virtualenvs/`); the commands below install the dependencies into that environment and then activate it:
```bash
poetry install --without dev # Install the dependencies
POETRY_ENV=$(poetry env info --path) # Get the path of the environment
source "$POETRY_ENV/bin/activate" # Activate the environment
```
For the pretrained weights, stay tuned!
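In the meantime, a minimal sketch of pulling the checkpoint hosted in this repository through `transformers` is shown below; `AutoModel` with `trust_remote_code=True` is assumed because the repo ships custom modeling code, and the exact forward signature is defined by that code rather than guaranteed here.

```python
import torch
from transformers import AutoModel

# Assumed entry point: the repository carries a "custom_code" tag, so the
# model class is resolved from its remote code at load time.
model = AutoModel.from_pretrained("nicoboou/chadavit16-moyen", trust_remote_code=True)
model.eval()

# Images are 224x224 with anywhere from 1 to 10 channels (see the dataset
# section); consult the repository's custom code for the expected input
# layout before feeding a batch like this hypothetical single-channel one.
example = torch.randn(1, 1, 224, 224)
```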
## 🗣️ Citation
If you find this repository useful for your research, please cite the following paper:
```
@article{bourriez2023chada,
title={ChAda-ViT: Channel Adaptive Attention for Joint Representation Learning of Heterogeneous Microscopy Images},
author={Bourriez, Nicolas and Bendidi, Ihab and Cohen, Ethan and Watkinson, Gabriel and Sanchez, Maxime and Bollot, Guillaume and Genovesio, Auguste},
journal={arXiv preprint arXiv:2311.15264},
year={2023}
}
``` | {"datasets": ["nicoboou/IDRCell100k"], "arxiv": 2311.15264} | nicoboou/chadavit16-moyen | null | [
"transformers",
"safetensors",
"chadavit",
"feature-extraction",
"custom_code",
"dataset:nicoboou/IDRCell100k",
"arxiv:2311.15264",
"region:us"
] | null | 2024-04-16T13:53:45+00:00 | [
"2311.15264"
] | [] | TAGS
#transformers #safetensors #chadavit #feature-extraction #custom_code #dataset-nicoboou/IDRCell100k #arxiv-2311.15264 #region-us
|
# ChAda-ViT: Channel Adaptive Vision Transformer
Official PyTorch implementation and pretrained models of ChAda-ViT. For details, see ChAda-ViT: Channel Adaptive Attention for Joint Representation Learning of Heterogeneous Microscopy Images
['arXiv']
## Introduction
_"How could we process images of different modalities, with different number of channels, and of different types all within one single Vision Transformer model ?"_
ChAda-ViT (Channel Adaptive Vision Transformer) is meant to address a tricky challenge encountered in biological imaging: images span a variety of modalities, each with a different number, order, and type of channels, often bearing little correlation to each other. This complexity has long been a hurdle in the field.
Our Solution: ChAda-ViT utilizes an Intra-Channel & Inter-Channel Attention mechanism, tailored to handle images regardless of their channel diversity. This allows for the effective analysis of images with 1 to 10 channels per experiment, spanning 7 different microscope modalities.
IDRCell100k Dataset: Alongside ChAda-ViT, we introduce IDRCell100k, a comprehensive bioimage dataset encompassing 79 experiments coming from 7 different imaging methods. This rich resource is designed to fully leverage the capabilities of ChAda-ViT, offering an unprecedented diversity in microscopy and channel types.
Impact: Trained in a self-supervised manner, ChAda-ViT sets new benchmarks in biological image analysis. It not only excels in various biologically relevant tasks but also pioneers in bridging the gap across different assays. Whether it's varying microscopes, channel numbers, or types, ChAda-ViT offers a unified, powerful representation for biological images. This paves the way for enhanced interdisciplinary studies and broadens the horizon for deep learning applications in bioimage-based research.
<div align="center">
<img width="100%" alt="ChAda-ViT architecture" src="docs/chada_vit.png">
</div>
## Dataset
The IDRCell100k dataset is a comprehensive collection of biological images, meticulously curated to represent a broad spectrum of microscopy techniques and channel configurations. It comprises 79 different experiments, utilizing 7 types of microscopy techniques, with images featuring channel counts ranging from 1 to 10. Each experiment contributes 1300 images, culminating in a total of 104,093 multiplexed images, each resized to 224x224 pixels. This dataset, unique in its diversity and scale, provides an invaluable resource for the development and validation of advanced image analysis models like ChAda-ViT, enhancing their capability to adapt to various imaging conditions and channel complexities in biological research.
Dataset available soon...
<div align="center">
<img width="70%" alt="IDRCell100k dataset samples" src="docs/URL">
</div>
## Results
This section provides a snapshot of the model's capabilities, with the paper offering a deeper dive into these groundbreaking findings.
For detailed analyses, comprehensive results, and in-depth discussions, please refer to the full paper.
### Classic Benchmarks
ChAda-ViT exhibits exceptional performance across a range of classical biological image benchmarks. Its advanced architecture allows for precise and efficient analysis, outperforming existing models in accuracy and computational efficiency. This highlights the model's significant contribution to the field of bioimaging.
<div align="center">
<img width="50%" alt="Vizualization of attention maps" src="docs/classic_benchmarks.png">
</div>
### Visualization of Attention Maps
The model's innovative Inter-Channel Attention mechanism is visualized, demonstrating its effectiveness in focusing on crucial features within diverse channel types. These visualizations provide insights into the model's internal processing, revealing how it distinguishes and prioritizes different aspects of biological images.
<div align="center">
<img width="80%" alt="Vizualization of attention maps" src="docs/attn_viz.png">
</div>
### Single Joint Embedding Space
ChAda-ViT uniquely embeds images from various modalities into a single, coherent representation space. This feature underscores the model's versatility and its ability to handle images from different microscopes, channel numbers, or types, facilitating a more unified approach in biological image analysis.
<div align="center">
<img width="60%" alt="Projection into a single joint embedding space" src="docs/single_joint_embedding_space.png">
</div>
## ⬇️ Installation
Clone the repository from GitHub:
Use Poetry to install the Python dependencies (via pip). Poetry creates a virtual environment in a default location (in '~/.cache/pypoetry/virtualenvs/'); install the dependencies into that environment and then activate it:
For the pretrained weights, stay tuned!
## Citation
If you find this repository useful for your research, please cite the following paper:
| [
"# ChAda-ViT: Channel Adaptive Vision Transformer\n\nOfficial PyTorch implementation and pretrained models of ChAda-ViT. For details, see ChAda-ViT: Channel Adaptive Attention for Joint Representation Learning of Heterogeneous Microscopy Images\n['arXiv']",
"## Introduction\n\n_\"How could we process images of different modalities, with different number of channels, and of different types all within one single Vision Transformer model ?\"_\n\nChAda-ViT (Channel Adaptive Vision Transformer) is meant to address a tricky challenge encountered in biological imaging: images span a variety of modalities, each with a different number, order, and type of channels, often bearing little correlation to each other. This complexity has long been a hurdle in the field.\n\nOur Solution: ChAda-ViT utilizes an Inter-Channel & Inter-Channel Attention mechanism, tailored to handle images regardless of their channel diversity. This allows for the effective analysis of images from 1 to 10 channels per experiment, spanning 7 different microscope modalities.\n\nIDRCell100k Dataset: Alongside ChAda-ViT, we introduce IDRCell100k, a comprehensive bioimage dataset encompassing 79 experiments coming from 7 different imaging methods. This rich resource is designed to fully leverage the capabilities of ChAda-ViT, offering an unprecedented diversity in microscopy and channel types.\n\nImpact: Trained in a self-supervised manner, ChAda-ViT sets new benchmarks in biological image analysis. It not only excels in various biologically relevant tasks but also pioneers in bridging the gap across different assays. Whether it's varying microscopes, channel numbers, or types, ChAda-ViT offers a unified, powerful representation for biological images. This paves the way for enhanced interdisciplinary studies and broadens the horizon for deep learning applications in bioimage-based research.\n\n<div align=\"center\">\n <img width=\"100%\" alt=\"ChAda-ViT architecture\" src=\"docs/chada_vit.png\">\n</div>",
"## Dataset\n\nThe IDRCell100k dataset is a comprehensive collection of biological images, meticulously curated to represent a broad spectrum of microscopy techniques and channel configurations. It comprises 79 different experiments, utilizing 7 types of microscopy techniques, with images featuring channel counts ranging from 1 to 10. Each experiment contributes 1300 images, culminating in a total of 104,093 multiplexed images, each resized to 224x224 pixels. This dataset, unique in its diversity and scale, provides an invaluable resource for the development and validation of advanced image analysis models like ChAda-ViT, enhancing their capability to adapt to various imaging conditions and channel complexities in biological research.\n\nDataset available soon...\n\n<div align=\"center\">\n <img width=\"70%\" alt=\"IDRCell100k dataset samples\" src=\"docs/URL\">\n</div>",
"## Results\n\nThis section provides a snapshot of the model's capabilities, with the paper offering a deeper dive into these groundbreaking findings.\nFor detailed analyses, comprehensive results, and in-depth discussions, please refer to the full paper.",
"### Classic Benchmarks\nChAda-ViT exhibits exceptional performance across a range of classical biological image benchmarks. Its advanced architecture allows for precise and efficient analysis, outperforming existing models in accuracy and computational efficiency. This highlights the model's significant contribution to the field of bioimaging.\n\n<div align=\"center\">\n <img width=\"50%\" alt=\"Vizualization of attention maps\" src=\"docs/classic_benchmarks.png\">\n</div>",
"### Visualization of Attention Maps\nThe model's innovative Inter-Channel Attention mechanism is visualized, demonstrating its effectiveness in focusing on crucial features within diverse channel types. These visualizations provide insights into the model's internal processing, revealing how it distinguishes and prioritizes different aspects of biological images.\n\n<div align=\"center\">\n <img width=\"80%\" alt=\"Vizualization of attention maps\" src=\"docs/attn_viz.png\">\n</div>",
"### Single Joint Embedding Space\nChAda-ViT uniquely embeds images from various modalities into a single, coherent representation space. This feature underscores the model's versatility and its ability to handle images from different microscopes, channel numbers, or types, facilitating a more unified approach in biological image analysis.\n\n<div align=\"center\">\n <img width=\"60%\" alt=\"Projection into a single joint embedding space\" src=\"docs/single_joint_embedding_space.png\">\n</div>",
"## ⬇️ Installation\n\nClone the repository from Github:\n\n\n\nUse Poetry to install the Python dependencies (via pip). This command creates an environment in a default location (in '~/.cache/pypoetry/virtualenvs/'). You can create and activate an environment, poetry will then install the dependencies in that environment:\n\n\n\nFor the pretrained weights, stay tuned !",
"## ️ Citation\n\nIf you find this repository useful for your research, please cite the following paper as such:"
] | [
"TAGS\n#transformers #safetensors #chadavit #feature-extraction #custom_code #dataset-nicoboou/IDRCell100k #arxiv-2311.15264 #region-us \n",
"# ChAda-ViT: Channel Adaptive Vision Transformer\n\nOfficial PyTorch implementation and pretrained models of ChAda-ViT. For details, see ChAda-ViT: Channel Adaptive Attention for Joint Representation Learning of Heterogeneous Microscopy Images\n['arXiv']",
"## Introduction\n\n_\"How could we process images of different modalities, with different number of channels, and of different types all within one single Vision Transformer model ?\"_\n\nChAda-ViT (Channel Adaptive Vision Transformer) is meant to address a tricky challenge encountered in biological imaging: images span a variety of modalities, each with a different number, order, and type of channels, often bearing little correlation to each other. This complexity has long been a hurdle in the field.\n\nOur Solution: ChAda-ViT utilizes an Inter-Channel & Inter-Channel Attention mechanism, tailored to handle images regardless of their channel diversity. This allows for the effective analysis of images from 1 to 10 channels per experiment, spanning 7 different microscope modalities.\n\nIDRCell100k Dataset: Alongside ChAda-ViT, we introduce IDRCell100k, a comprehensive bioimage dataset encompassing 79 experiments coming from 7 different imaging methods. This rich resource is designed to fully leverage the capabilities of ChAda-ViT, offering an unprecedented diversity in microscopy and channel types.\n\nImpact: Trained in a self-supervised manner, ChAda-ViT sets new benchmarks in biological image analysis. It not only excels in various biologically relevant tasks but also pioneers in bridging the gap across different assays. Whether it's varying microscopes, channel numbers, or types, ChAda-ViT offers a unified, powerful representation for biological images. This paves the way for enhanced interdisciplinary studies and broadens the horizon for deep learning applications in bioimage-based research.\n\n<div align=\"center\">\n <img width=\"100%\" alt=\"ChAda-ViT architecture\" src=\"docs/chada_vit.png\">\n</div>",
"## Dataset\n\nThe IDRCell100k dataset is a comprehensive collection of biological images, meticulously curated to represent a broad spectrum of microscopy techniques and channel configurations. It comprises 79 different experiments, utilizing 7 types of microscopy techniques, with images featuring channel counts ranging from 1 to 10. Each experiment contributes 1300 images, culminating in a total of 104,093 multiplexed images, each resized to 224x224 pixels. This dataset, unique in its diversity and scale, provides an invaluable resource for the development and validation of advanced image analysis models like ChAda-ViT, enhancing their capability to adapt to various imaging conditions and channel complexities in biological research.\n\nDataset available soon...\n\n<div align=\"center\">\n <img width=\"70%\" alt=\"IDRCell100k dataset samples\" src=\"docs/URL\">\n</div>",
"## Results\n\nThis section provides a snapshot of the model's capabilities, with the paper offering a deeper dive into these groundbreaking findings.\nFor detailed analyses, comprehensive results, and in-depth discussions, please refer to the full paper.",
"### Classic Benchmarks\nChAda-ViT exhibits exceptional performance across a range of classical biological image benchmarks. Its advanced architecture allows for precise and efficient analysis, outperforming existing models in accuracy and computational efficiency. This highlights the model's significant contribution to the field of bioimaging.\n\n<div align=\"center\">\n <img width=\"50%\" alt=\"Vizualization of attention maps\" src=\"docs/classic_benchmarks.png\">\n</div>",
"### Visualization of Attention Maps\nThe model's innovative Inter-Channel Attention mechanism is visualized, demonstrating its effectiveness in focusing on crucial features within diverse channel types. These visualizations provide insights into the model's internal processing, revealing how it distinguishes and prioritizes different aspects of biological images.\n\n<div align=\"center\">\n <img width=\"80%\" alt=\"Vizualization of attention maps\" src=\"docs/attn_viz.png\">\n</div>",
"### Single Joint Embedding Space\nChAda-ViT uniquely embeds images from various modalities into a single, coherent representation space. This feature underscores the model's versatility and its ability to handle images from different microscopes, channel numbers, or types, facilitating a more unified approach in biological image analysis.\n\n<div align=\"center\">\n <img width=\"60%\" alt=\"Projection into a single joint embedding space\" src=\"docs/single_joint_embedding_space.png\">\n</div>",
"## ⬇️ Installation\n\nClone the repository from Github:\n\n\n\nUse Poetry to install the Python dependencies (via pip). This command creates an environment in a default location (in '~/.cache/pypoetry/virtualenvs/'). You can create and activate an environment, poetry will then install the dependencies in that environment:\n\n\n\nFor the pretrained weights, stay tuned !",
"## ️ Citation\n\nIf you find this repository useful for your research, please cite the following paper as such:"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
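Pending author instructions, here is a minimal sketch of attaching this adapter with PEFT; the base model (Qwen/Qwen-VL-Chat, per this card's metadata) and `trust_remote_code=True` are assumptions rather than confirmed usage.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model named in this card's metadata; Qwen-VL-Chat ships custom code.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat", trust_remote_code=True, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)

# Attach this repository's PEFT adapter weights.
model = PeftModel.from_pretrained(base, "trinhxuankhai/origin_v_pedes_rewrite")
```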
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_v_pedes_rewrite | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:54:11+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on Tesla V100-PCIE-32GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend directly running the models in your use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo bigscience/bloom-560m are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the dedicated causal-LM wrapper first; fall back to the generic
# HQQ loader if this architecture is not registered with the wrapper.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/bigscience-bloom-560m-HQQ-2bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/bigscience-bloom-560m-HQQ-2bit-smashed")

# Quantization leaves the tokenizer unchanged, so load it from the base repo.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloom-560m, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/bigscience-bloom-560m-HQQ-2bit-smashed | null | [
"transformers",
"bloom",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T13:54:51+00:00 | [] | [] | TAGS
#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
. We recommend directly running the models in your use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo bigscience/bloom-560m are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
2. Load & run the model.
## Configurations
The configuration info is in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloom-560m, which provided the base model, before using this model. The license of the 'pruna-engine' is here on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on Tesla V100-PCIE-32GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloom-560m installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloom-560m before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on Tesla V100-PCIE-32GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloom-560m installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloom-560m before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
text2text-generation | transformers | This model has been trained by the original authors of the paper [(Comet-) Atomic 2020: On Symbolic and Neural Commonsense Knowledge Graphs](https://www.semanticscholar.org/paper/COMET-ATOMIC-2020%3A-On-Symbolic-and-Neural-Knowledge-Hwang-Bhagavatula/f8a22859230e0ccafefc020dccc66b5a646fe0ac) </br> and has been released [here](https://storage.googleapis.com/ai2-mosaic-public/projects/mosaic-kgs/comet-atomic_2020_BART_aaai.tar.gz).
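A minimal, hedged sketch of querying the checkpoint with `transformers` follows; the `"{head} {relation} [GEN]"` query format mirrors the original codebase and is an assumption for this upload.

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("smetan/comet-bart-aaai")
model = BartForConditionalGeneration.from_pretrained("smetan/comet-bart-aaai")

# COMET-style query: a head event, a relation name, then the [GEN] marker
# (relation naming follows the original comet-atomic-2020 codebase).
query = "PersonX goes to the store xIntent [GEN]"
inputs = tokenizer(query, return_tensors="pt")
out = model.generate(**inputs, num_beams=5, max_new_tokens=24)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```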
Original codebase for training is [here](https://github.com/allenai/comet-atomic-2020). | {"license": "afl-3.0"} | smetan/comet-bart-aaai | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T13:54:58+00:00 | [] | [] | TAGS
#transformers #pytorch #bart #text2text-generation #license-afl-3.0 #autotrain_compatible #endpoints_compatible #region-us
| This model has been trained by the original authors of the paper (Comet-) Atomic 2020: On Symbolic and Neural Commonsense Knowledge Graphs </br> and has been released here.
Original codebase for training is here. | [] | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #license-afl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend directly running the models in your use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo bigscience/bloom-1b7 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the dedicated causal-LM wrapper first; fall back to the generic
# HQQ loader if this architecture is not registered with the wrapper.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/bigscience-bloom-1b7-HQQ-2bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/bigscience-bloom-1b7-HQQ-2bit-smashed")

# Quantization leaves the tokenizer unchanged, so load it from the base repo.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, bigscience/bloom-1b7, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/bigscience-bloom-1b7-HQQ-2bit-smashed | null | [
"transformers",
"bloom",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T13:55:02+00:00 | [] | [] | TAGS
#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
We recommend running the benchmarks directly under your use-case conditions to know whether the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo bigscience/bloom-1b7 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
2. Load & run the model.
## Configurations
The configuration info is in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, bigscience/bloom-1b7, which provided the base model, before using this one. The license of the 'pruna-engine' is here on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloom-1b7 installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloom-1b7 before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloom-1b7 installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloom-1b7 before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
null | null |
For the demo of "EMA: Efficient Meshy Neural Fields for Animatable Human Avatars" (https://github.com/xk-huang/ema).
| {"license": "mit"} | xk-huang/ema-base_config-1.zju_mocap.logl2-zju_mocap-313-230306_084623-ckpts | null | [
"license:mit",
"region:us"
] | null | 2024-04-16T13:55:03+00:00 | [] | [] | TAGS
#license-mit #region-us
|
For the demo of "EMA: Efficient Meshy Neural Fields for Animatable Human Avatars" (URL).
| [] | [
"TAGS\n#license-mit #region-us \n"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
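Pending the author's own instructions, here is a minimal, hedged sketch of attaching this adapter to its Qwen/Qwen-VL-Chat base model with standard `peft` usage. The image path and prompt are hypothetical, and reaching `.chat()` through the PEFT wrapper may depend on your `peft` version:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen-VL-Chat"
adapter_id = "trinhxuankhai/origin_v_vehicle_appearance"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter

# Qwen-VL-Chat style query: an image plus a text instruction (both illustrative).
query = tokenizer.from_list_format([
    {"image": "vehicle.jpg"},  # hypothetical input image
    {"text": "Describe the appearance of this vehicle."},
])
response, _ = model.chat(tokenizer, query=query, history=None)
print(response)
```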
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_v_vehicle_appearance | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:55:16+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly under your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo bigscience/bloomz-560m are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Load the smashed checkpoint; fall back to the generic HQQ loader if the
# transformers-style entry point is unavailable in your hqq version.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/bigscience-bloomz-560m-HQQ-2bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/bigscience-bloomz-560m-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, bigscience/bloomz-560m, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/bigscience-bloomz-560m-HQQ-2bit-smashed | null | [
"transformers",
"bloom",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T13:55:34+00:00 | [] | [] | TAGS
#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
We recommend running the benchmarks directly under your use-case conditions to know whether the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo bigscience/bloomz-560m are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
2. Load & run the model.
## Configurations
The configuration info is in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, bigscience/bloomz-560m, which provided the base model, before using this one. The license of the 'pruna-engine' is here on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloomz-560m installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloomz-560m before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloomz-560m installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloomz-560m before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-ktp-v2
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
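As a usage reference, here is a minimal inference sketch for this checkpoint. The task prompt token (`<s_ktp>`) and the input file name are assumptions for illustration; the prompt actually used during fine-tuning is not documented in this card:

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("quissuiven/donut-ktp-v2")
model = VisionEncoderDecoderModel.from_pretrained("quissuiven/donut-ktp-v2")

image = Image.open("ktp_sample.jpg").convert("RGB")  # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s_ktp>"  # assumption: the fine-tuning task start token
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
)
sequence = processor.batch_decode(outputs)[0]
print(processor.token2json(sequence))  # parse the Donut token sequence into JSON
```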
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "donut-ktp-v2", "results": []}]} | quissuiven/donut-ktp-v2 | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T13:55:56+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
|
# donut-ktp-v2
This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# donut-ktp-v2\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n",
"# donut-ktp-v2\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6687
- F1 Score: 0.6047
- Accuracy: 0.6126
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6586 | 16.67 | 200 | 0.6679 | 0.6178 | 0.6253 |
| 0.5782 | 33.33 | 400 | 0.7133 | 0.5909 | 0.6070 |
| 0.5254 | 50.0 | 600 | 0.7842 | 0.5883 | 0.5878 |
| 0.4846 | 66.67 | 800 | 0.7908 | 0.5881 | 0.5966 |
| 0.4515 | 83.33 | 1000 | 0.8276 | 0.5904 | 0.5904 |
| 0.4231 | 100.0 | 1200 | 0.8814 | 0.5892 | 0.6031 |
| 0.3954 | 116.67 | 1400 | 0.9172 | 0.5907 | 0.5973 |
| 0.3683 | 133.33 | 1600 | 0.8767 | 0.5890 | 0.5911 |
| 0.3449 | 150.0 | 1800 | 0.9607 | 0.5856 | 0.5924 |
| 0.3245 | 166.67 | 2000 | 1.0052 | 0.5840 | 0.5875 |
| 0.3045 | 183.33 | 2200 | 1.0352 | 0.5860 | 0.5901 |
| 0.2873 | 200.0 | 2400 | 1.0848 | 0.5909 | 0.5973 |
| 0.2695 | 216.67 | 2600 | 1.1118 | 0.5871 | 0.5891 |
| 0.2568 | 233.33 | 2800 | 1.0928 | 0.5891 | 0.5950 |
| 0.2416 | 250.0 | 3000 | 1.0700 | 0.5880 | 0.5885 |
| 0.2299 | 266.67 | 3200 | 1.2308 | 0.5878 | 0.5930 |
| 0.2177 | 283.33 | 3400 | 1.2056 | 0.5886 | 0.5930 |
| 0.2085 | 300.0 | 3600 | 1.1789 | 0.5792 | 0.5833 |
| 0.1987 | 316.67 | 3800 | 1.2023 | 0.5851 | 0.5914 |
| 0.1897 | 333.33 | 4000 | 1.1930 | 0.5841 | 0.5898 |
| 0.1817 | 350.0 | 4200 | 1.1428 | 0.5755 | 0.5751 |
| 0.1746 | 366.67 | 4400 | 1.2159 | 0.5787 | 0.5787 |
| 0.1658 | 383.33 | 4600 | 1.2505 | 0.5720 | 0.5699 |
| 0.1593 | 400.0 | 4800 | 1.2419 | 0.5785 | 0.5793 |
| 0.1537 | 416.67 | 5000 | 1.2799 | 0.5768 | 0.5790 |
| 0.1493 | 433.33 | 5200 | 1.2780 | 0.5770 | 0.5767 |
| 0.1421 | 450.0 | 5400 | 1.2987 | 0.5778 | 0.5784 |
| 0.1384 | 466.67 | 5600 | 1.3329 | 0.5788 | 0.5819 |
| 0.1356 | 483.33 | 5800 | 1.3112 | 0.5764 | 0.5764 |
| 0.1293 | 500.0 | 6000 | 1.3682 | 0.5769 | 0.5790 |
| 0.125 | 516.67 | 6200 | 1.3690 | 0.5817 | 0.5865 |
| 0.1231 | 533.33 | 6400 | 1.4100 | 0.5858 | 0.5875 |
| 0.1192 | 550.0 | 6600 | 1.3552 | 0.5806 | 0.5806 |
| 0.1149 | 566.67 | 6800 | 1.3948 | 0.5802 | 0.5849 |
| 0.1121 | 583.33 | 7000 | 1.3955 | 0.5758 | 0.5751 |
| 0.1108 | 600.0 | 7200 | 1.3976 | 0.5752 | 0.5738 |
| 0.1083 | 616.67 | 7400 | 1.3874 | 0.5751 | 0.5754 |
| 0.1049 | 633.33 | 7600 | 1.4361 | 0.5844 | 0.5855 |
| 0.103 | 650.0 | 7800 | 1.4168 | 0.5791 | 0.5784 |
| 0.0994 | 666.67 | 8000 | 1.4526 | 0.5825 | 0.5839 |
| 0.0994 | 683.33 | 8200 | 1.4135 | 0.5812 | 0.5806 |
| 0.0973 | 700.0 | 8400 | 1.4329 | 0.5818 | 0.5816 |
| 0.0961 | 716.67 | 8600 | 1.4222 | 0.5824 | 0.5839 |
| 0.0941 | 733.33 | 8800 | 1.4502 | 0.5821 | 0.5819 |
| 0.0926 | 750.0 | 9000 | 1.4395 | 0.5843 | 0.5846 |
| 0.0913 | 766.67 | 9200 | 1.4626 | 0.5800 | 0.5806 |
| 0.0911 | 783.33 | 9400 | 1.4543 | 0.5808 | 0.5806 |
| 0.0903 | 800.0 | 9600 | 1.4415 | 0.5814 | 0.5816 |
| 0.0901 | 816.67 | 9800 | 1.4572 | 0.5841 | 0.5842 |
| 0.0898 | 833.33 | 10000 | 1.4513 | 0.5840 | 0.5846 |
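For reference, a minimal sketch of attaching this PEFT adapter to its base model for inference follows. Whether the base checkpoint needs `trust_remote_code=True` (assumed below) and which auto class it maps to depend on how `seqsight_16384_512_56M` is published, so treat those details as assumptions:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_56M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)  # assumption
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # binary H3K4me2 classification
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter
model.eval()

inputs = tokenizer("ACGTACGTAGCTAGCT", return_tensors="pt")  # toy DNA sequence
logits = model(**inputs).logits
```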
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-16T13:56:23+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_16384\_512\_56M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6687
* F1 Score: 0.6047
* Accuracy: 0.6126
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_v_vehicle_environment | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:56:30+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
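No official snippet is provided, so here is a minimal sketch of loading the adapter with PEFT, inferred from the repository metadata (base model `Qwen/Qwen-VL-Chat`, adapter id `trinhxuankhai/origin_v_vehicle_location`). Qwen-VL-Chat ships custom modeling code, so `trust_remote_code=True` is assumed to be required.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumption: the adapter id is this repository's id from the metadata.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat", trust_remote_code=True, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "trinhxuankhai/origin_v_vehicle_location")
model.eval()
```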
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_v_vehicle_location | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:57:27+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6027
- Bleu: 5.7589
- Gen Len: 17.566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
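For reference, the configuration above maps onto the standard `transformers` training arguments roughly as follows; this is a sketch, not the authors' script (the output directory is a placeholder, and `predict_with_generate` is assumed because BLEU and generation length are reported):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_opus_books_model",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,                    # "Native AMP" mixed precision
    predict_with_generate=True,   # assumed, for BLEU / gen_len metrics
)
```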
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8532 | 1.0 | 6355 | 1.6271 | 5.596 | 17.5726 |
| 1.8311 | 2.0 | 12710 | 1.6027 | 5.7589 | 17.566 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "t5-small", "model-index": [{"name": "my_awesome_opus_books_model", "results": []}]} | Pradeep13/my_awesome_opus_books_model | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T13:58:33+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| my\_awesome\_opus\_books\_model
===============================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6027
* Bleu: 5.7589
* Gen Len: 17.566
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_v_vehicle_action | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T13:58:41+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3195
- F1 Score: 0.6311
- Accuracy: 0.6304
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
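Expressed as `transformers` training arguments, the setup above corresponds roughly to the sketch below (the output directory is a placeholder, and `max_steps` plays the role of `training_steps`; the actual training loop is not documented here):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_EMP_H3K9ac-seqsight_16384_512_56M-L32_all",  # placeholder
    learning_rate=5e-4,                 # 0.0005
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10000,                    # training_steps
)
```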
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6524 | 18.18 | 200 | 0.6690 | 0.6131 | 0.6157 |
| 0.5585 | 36.36 | 400 | 0.7129 | 0.6078 | 0.6071 |
| 0.5029 | 54.55 | 600 | 0.7688 | 0.6102 | 0.6125 |
| 0.4539 | 72.73 | 800 | 0.8094 | 0.6105 | 0.6128 |
| 0.4105 | 90.91 | 1000 | 0.8525 | 0.5915 | 0.5923 |
| 0.3752 | 109.09 | 1200 | 0.8973 | 0.6037 | 0.6038 |
| 0.3454 | 127.27 | 1400 | 0.9344 | 0.6072 | 0.6164 |
| 0.3251 | 145.45 | 1600 | 0.9794 | 0.5934 | 0.5952 |
| 0.3 | 163.64 | 1800 | 0.9447 | 0.6196 | 0.6229 |
| 0.2801 | 181.82 | 2000 | 1.0207 | 0.6150 | 0.6153 |
| 0.2656 | 200.0 | 2200 | 1.0527 | 0.6038 | 0.6042 |
| 0.2446 | 218.18 | 2400 | 1.1080 | 0.6125 | 0.6117 |
| 0.23 | 236.36 | 2600 | 1.0744 | 0.6078 | 0.6071 |
| 0.2177 | 254.55 | 2800 | 1.1014 | 0.6093 | 0.6085 |
| 0.2072 | 272.73 | 3000 | 1.1577 | 0.6079 | 0.6071 |
| 0.1955 | 290.91 | 3200 | 1.1323 | 0.6151 | 0.6150 |
| 0.1851 | 309.09 | 3400 | 1.1565 | 0.6118 | 0.6110 |
| 0.1766 | 327.27 | 3600 | 1.2124 | 0.6059 | 0.6053 |
| 0.1685 | 345.45 | 3800 | 1.1904 | 0.6070 | 0.6063 |
| 0.1623 | 363.64 | 4000 | 1.2067 | 0.6107 | 0.6099 |
| 0.1557 | 381.82 | 4200 | 1.2459 | 0.6090 | 0.6081 |
| 0.1481 | 400.0 | 4400 | 1.2325 | 0.6121 | 0.6114 |
| 0.143 | 418.18 | 4600 | 1.2520 | 0.6048 | 0.6042 |
| 0.1382 | 436.36 | 4800 | 1.2792 | 0.6075 | 0.6067 |
| 0.1332 | 454.55 | 5000 | 1.3003 | 0.6117 | 0.6117 |
| 0.1287 | 472.73 | 5200 | 1.2971 | 0.6016 | 0.6020 |
| 0.1236 | 490.91 | 5400 | 1.2439 | 0.6061 | 0.6053 |
| 0.1193 | 509.09 | 5600 | 1.3146 | 0.6088 | 0.6081 |
| 0.1172 | 527.27 | 5800 | 1.3382 | 0.6151 | 0.6142 |
| 0.1129 | 545.45 | 6000 | 1.3080 | 0.6149 | 0.6142 |
| 0.1103 | 563.64 | 6200 | 1.3436 | 0.6093 | 0.6085 |
| 0.1052 | 581.82 | 6400 | 1.4202 | 0.6147 | 0.6139 |
| 0.103 | 600.0 | 6600 | 1.3945 | 0.6139 | 0.6132 |
| 0.1011 | 618.18 | 6800 | 1.3414 | 0.6204 | 0.6196 |
| 0.098 | 636.36 | 7000 | 1.3782 | 0.6153 | 0.6150 |
| 0.0972 | 654.55 | 7200 | 1.3558 | 0.6135 | 0.6128 |
| 0.0927 | 672.73 | 7400 | 1.4122 | 0.6130 | 0.6125 |
| 0.0913 | 690.91 | 7600 | 1.4185 | 0.6167 | 0.6160 |
| 0.089 | 709.09 | 7800 | 1.4254 | 0.6132 | 0.6125 |
| 0.0878 | 727.27 | 8000 | 1.3992 | 0.6125 | 0.6117 |
| 0.0864 | 745.45 | 8200 | 1.3784 | 0.6201 | 0.6193 |
| 0.0855 | 763.64 | 8400 | 1.3772 | 0.6179 | 0.6171 |
| 0.0821 | 781.82 | 8600 | 1.4319 | 0.6147 | 0.6139 |
| 0.0825 | 800.0 | 8800 | 1.4309 | 0.6154 | 0.6146 |
| 0.0812 | 818.18 | 9000 | 1.4473 | 0.6124 | 0.6117 |
| 0.0803 | 836.36 | 9200 | 1.4234 | 0.6135 | 0.6128 |
| 0.0793 | 854.55 | 9400 | 1.4278 | 0.6124 | 0.6117 |
| 0.0778 | 872.73 | 9600 | 1.4396 | 0.6142 | 0.6135 |
| 0.0793 | 890.91 | 9800 | 1.4299 | 0.6143 | 0.6135 |
| 0.0776 | 909.09 | 10000 | 1.4305 | 0.6143 | 0.6135 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-16T13:59:17+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K9ac-seqsight\_16384\_512\_56M-L32\_all
===================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3195
* F1 Score: 0.6311
* Accuracy: 0.6304
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
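No official snippet is provided; below is a minimal sketch of loading this PEFT adapter on its base model, with ids taken from the repository metadata (base `ai-forever/FRED-T5-1.7B`, adapter `SouthMemphis/fred-4`). FRED-T5 is a T5-style seq2seq model, so it is assumed to resolve through the Auto classes.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# Assumption: ids come from this repository's metadata.
base = AutoModelForSeq2SeqLM.from_pretrained("ai-forever/FRED-T5-1.7B")
tokenizer = AutoTokenizer.from_pretrained("ai-forever/FRED-T5-1.7B")
model = PeftModel.from_pretrained(base, "SouthMemphis/fred-4")
model.eval()
```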
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "ai-forever/FRED-T5-1.7B"} | SouthMemphis/fred-4 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ai-forever/FRED-T5-1.7B",
"region:us"
] | null | 2024-04-16T13:59:24+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-ai-forever/FRED-T5-1.7B #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-ai-forever/FRED-T5-1.7B #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null | null |
# KenLM (arpa) models for Dutch based on Wikipedia
This repository contains KenLM models (n=5) for Dutch, based on the [Dutch portion of Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia/viewer/20231101.nl) - sentence-segmented (one sentence per line). Models are provided for tokens, part-of-speech tags, dependency labels, and lemmas, as processed with spaCy `nl_core_news_sm`:
- wiki_nl_token.arpa[.bin]: token
- wiki_nl_pos.arpa[.bin]: part-of-speech tag
- wiki_nl_dep.arpa[.bin]: dependency label
- wiki_nl_lemma.arpa[.bin]: lemma
Both regular `.arpa` files and more efficient KenLM binary files (`.arpa.bin`) are provided. You probably want to use the binary versions.
## Usage from within Python
Make sure to install dependencies:
```shell
pip install huggingface_hub
pip install https://github.com/kpu/kenlm/archive/master.zip
# If you want to use spaCy preprocessing
pip install spacy
python -m spacy download nl_core_news_sm
```
We can then use the `huggingface_hub` library to download and cache the model file that we want, and use it directly with KenLM.
```python
import kenlm
from huggingface_hub import hf_hub_download
model_file = hf_hub_download(repo_id="BramVanroy/kenlm_wikipedia_nl", filename="wiki_nl_token.arpa.bin")
model = kenlm.Model(model_file)
text = "Ik eet graag koekjes !" # pre-tokenized
model.perplexity(text)
# 1790.5033832700467
```
It is recommended to use spaCy as a preprocessor to automatically use the same tagsets and tokenization as were used when creating the LMs.
```python
import kenlm
import spacy
from huggingface_hub import hf_hub_download
model_file = hf_hub_download(repo_id="BramVanroy/kenlm_wikipedia_nl", filename="wiki_nl_pos.arpa.bin") # pos file
model = kenlm.Model(model_file)
nlp = spacy.load("nl_core_news_sm")
text = "Ik eet graag koekjes!"
pos_sequence = " ".join([token.pos_ for token in nlp(text)])
# 'PRON VERB ADV NOUN PUNCT'
model.perplexity(pos_sequence)
# 6.190638021041525
```
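The dependency-label model works the same way, swapping `token.pos_` for `token.dep_`; a short sketch mirroring the snippet above:

```python
import kenlm
import spacy
from huggingface_hub import hf_hub_download

model_file = hf_hub_download(repo_id="BramVanroy/kenlm_wikipedia_nl", filename="wiki_nl_dep.arpa.bin")  # dep file
model = kenlm.Model(model_file)

nlp = spacy.load("nl_core_news_sm")
dep_sequence = " ".join(token.dep_ for token in nlp("Ik eet graag koekjes!"))
model.perplexity(dep_sequence)
```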
## Reproduction
Example:
```sh
# Train a 5-gram LM (-o 5), capped at 75% of RAM (-S), with temp files in ../data/tmp/ (-T)
bin/lmplz -o 5 -S 75% -T ../data/tmp/ < ../data/wikipedia/nl/wiki_nl_processed_lemma_dedup.txt > ../data/wikipedia/nl/models/wiki_nl_lemma.arpa
# Convert the arpa file into the faster KenLM binary format
bin/build_binary ../data/wikipedia/nl/models/wiki_nl_lemma.arpa ../data/wikipedia/nl/models/wiki_nl_lemma.arpa.bin
```
For class-based LMs (POS and DEP), the `--discount_fallback` was used and the parsed data was not deduplicated (but it was deduplicated on the sentence-level for token and lemma models). | {"language": ["nl"], "license": "apache-2.0", "tags": ["kenlm"]} | BramVanroy/kenlm_wikipedia_nl | null | [
"kenlm",
"nl",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T13:59:33+00:00 | [] | [
"nl"
] | TAGS
#kenlm #nl #license-apache-2.0 #region-us
|
# KenLM (arpa) models for Dutch based on Wikipedia
This repository contains KenLM models (n=5) for Dutch, based on the Dutch portion of Wikipedia - sentence-segmented (one sentence per line). Models are provided on tokens, part-of-speech, dependency labels, and lemmas, as processed with spaCy 'nl_core_news_sm':
- wiki_nl_token.arpa[.bin]: token
- wiki_nl_pos.arpa[.bin]: part-of-speech tag
- wiki_nl_dep.arpa[.bin]: dependency label
- wiki_nl_lemma.arpa[.bin]: lemma
Both regular '.arpa' files as well as more efficient KenLM binary files ('.URL') are provided. You probably want to use the binary versions.
## Usage from within Python
Make sure to install dependencies:
We can then use the Hugging Face hub software to download and cache the model file that we want, and directly use it with KenLM.
It is recommended to use spaCy as a preprocessor to automatically use the same tagsets and tokenization as were used when creating the LMs.
## Reproduction
Example:
For class-based LMs (POS and DEP), the '--discount_fallback' was used and the parsed data was not deduplicated (but it was deduplicated on the sentence-level for token and lemma models). | [
"# KenLM (arpa) models for Dutch based on Wikipedia\n\nThis repository contains KenLM models (n=5) for Dutch, based on the Dutch portion of Wikipedia - sentence-segmented (one sentence per line). Models are provided on tokens, part-of-speech, dependency labels, and lemmas, as processed with spaCy 'nl_core_news_sm':\n\n- wiki_nl_token.arpa[.bin]: token\n- wiki_nl_pos.arpa[.bin]: part-of-speech tag\n- wiki_nl_dep.arpa[.bin]: dependency label\n- wiki_nl_lemma.arpa[.bin]: lemma\n\nBoth regular '.arpa' files as well as more efficient KenLM binary files ('.URL') are provided. You probably want to use the binary versions.",
"## Usage from within Python\n\nMake sure to install dependencies:\n\n\n\nWe can then use the Hugging Face hub software to download and cache the model file that we want, and directly use it with KenLM.\n\n\n\nIt is recommended to use spaCy as a preprocessor to automatically use the same tagsets and tokenization as were used when creating the LMs.",
"## Reproduction\n\nExample: \n\n\n\nFor class-based LMs (POS and DEP), the '--discount_fallback' was used and the parsed data was not deduplicated (but it was deduplicated on the sentence-level for token and lemma models)."
] | [
"TAGS\n#kenlm #nl #license-apache-2.0 #region-us \n",
"# KenLM (arpa) models for Dutch based on Wikipedia\n\nThis repository contains KenLM models (n=5) for Dutch, based on the Dutch portion of Wikipedia - sentence-segmented (one sentence per line). Models are provided on tokens, part-of-speech, dependency labels, and lemmas, as processed with spaCy 'nl_core_news_sm':\n\n- wiki_nl_token.arpa[.bin]: token\n- wiki_nl_pos.arpa[.bin]: part-of-speech tag\n- wiki_nl_dep.arpa[.bin]: dependency label\n- wiki_nl_lemma.arpa[.bin]: lemma\n\nBoth regular '.arpa' files as well as more efficient KenLM binary files ('.URL') are provided. You probably want to use the binary versions.",
"## Usage from within Python\n\nMake sure to install dependencies:\n\n\n\nWe can then use the Hugging Face hub software to download and cache the model file that we want, and directly use it with KenLM.\n\n\n\nIt is recommended to use spaCy as a preprocessor to automatically use the same tagsets and tokenization as were used when creating the LMs.",
"## Reproduction\n\nExample: \n\n\n\nFor class-based LMs (POS and DEP), the '--discount_fallback' was used and the parsed data was not deduplicated (but it was deduplicated on the sentence-level for token and lemma models)."
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm_alltags
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the layoutlmv4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0891
- Customer Address: {'precision': 0.7764705882352941, 'recall': 0.8048780487804879, 'f1': 0.7904191616766466, 'number': 82}
- Customer Name: {'precision': 0.6666666666666666, 'recall': 0.8333333333333334, 'f1': 0.7407407407407408, 'number': 12}
- Invoice Number: {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 12}
- Tax Amount: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2}
- Total Amount: {'precision': 0.7142857142857143, 'recall': 0.9090909090909091, 'f1': 0.8, 'number': 11}
- Vendor Name: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 12}
- Overall Precision: 0.7857
- Overall Recall: 0.8397
- Overall F1: 0.8118
- Overall Accuracy: 0.9801
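The per-entity dictionaries above follow the output format of the `seqeval` metric commonly used for token classification; a minimal illustration with made-up labels (not the model's actual predictions):

```python
from seqeval.metrics import classification_report

# Toy IOB2 sequences; the real evaluation compares per-token model
# predictions against gold labels.
y_true = [["B-INVOICE_NUMBER", "I-INVOICE_NUMBER", "O", "B-TOTAL_AMOUNT"]]
y_pred = [["B-INVOICE_NUMBER", "I-INVOICE_NUMBER", "O", "O"]]
print(classification_report(y_true, y_pred))
```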
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Customer Address | Customer Name | Invoice Number | Tax Amount | Total Amount | Vendor Name | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------:|:----------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.8211 | 6.67 | 20 | 0.3797 | {'precision': 0.25316455696202533, 'recall': 0.24390243902439024, 'f1': 0.24844720496894412, 'number': 82} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.2532 | 0.1527 | 0.1905 | 0.9050 |
| 0.3036 | 13.33 | 40 | 0.1941 | {'precision': 0.6448598130841121, 'recall': 0.8414634146341463, 'f1': 0.73015873015873, 'number': 82} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.75, 'recall': 0.75, 'f1': 0.75, 'number': 12} | 0.6555 | 0.5954 | 0.624 | 0.9493 |
| 0.1537 | 20.0 | 60 | 0.1153 | {'precision': 0.7157894736842105, 'recall': 0.8292682926829268, 'f1': 0.768361581920904, 'number': 82} | {'precision': 0.35714285714285715, 'recall': 0.4166666666666667, 'f1': 0.3846153846153846, 'number': 12} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 12} | 0.7037 | 0.7252 | 0.7143 | 0.9663 |
| 0.0862 | 26.67 | 80 | 0.0953 | {'precision': 0.8, 'recall': 0.8292682926829268, 'f1': 0.8143712574850299, 'number': 82} | {'precision': 0.6, 'recall': 0.75, 'f1': 0.6666666666666665, 'number': 12} | {'precision': 0.6666666666666666, 'recall': 1.0, 'f1': 0.8, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.9166666666666666, 'recall': 0.9166666666666666, 'f1': 0.9166666666666666, 'number': 12} | 0.7519 | 0.7634 | 0.7576 | 0.9757 |
| 0.0509 | 33.33 | 100 | 0.0846 | {'precision': 0.7857142857142857, 'recall': 0.8048780487804879, 'f1': 0.7951807228915663, 'number': 82} | {'precision': 0.7333333333333333, 'recall': 0.9166666666666666, 'f1': 0.8148148148148148, 'number': 12} | {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 1.0, 'recall': 0.5454545454545454, 'f1': 0.7058823529411764, 'number': 11} | {'precision': 0.8461538461538461, 'recall': 0.9166666666666666, 'f1': 0.8799999999999999, 'number': 12} | 0.8030 | 0.8092 | 0.8061 | 0.9775 |
| 0.0354 | 40.0 | 120 | 0.0852 | {'precision': 0.7710843373493976, 'recall': 0.7804878048780488, 'f1': 0.7757575757575758, 'number': 82} | {'precision': 0.6666666666666666, 'recall': 0.8333333333333334, 'f1': 0.7407407407407408, 'number': 12} | {'precision': 0.8, 'recall': 1.0, 'f1': 0.888888888888889, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.7142857142857143, 'recall': 0.9090909090909091, 'f1': 0.8, 'number': 11} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 12} | 0.7770 | 0.8244 | 0.8 | 0.9797 |
| 0.0297 | 46.67 | 140 | 0.0891 | {'precision': 0.7764705882352941, 'recall': 0.8048780487804879, 'f1': 0.7904191616766466, 'number': 82} | {'precision': 0.6666666666666666, 'recall': 0.8333333333333334, 'f1': 0.7407407407407408, 'number': 12} | {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.7142857142857143, 'recall': 0.9090909090909091, 'f1': 0.8, 'number': 11} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 12} | 0.7857 | 0.8397 | 0.8118 | 0.9801 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.2.0+cpu
- Datasets 2.12.0
- Tokenizers 0.13.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["layoutlmv4"], "base_model": "microsoft/layoutlm-base-uncased", "model-index": [{"name": "layoutlm_alltags", "results": []}]} | Szczotar93/layoutlm_alltags | null | [
"transformers",
"pytorch",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:layoutlmv4",
"base_model:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T13:59:34+00:00 | [] | [] | TAGS
#transformers #pytorch #layoutlm #token-classification #generated_from_trainer #dataset-layoutlmv4 #base_model-microsoft/layoutlm-base-uncased #license-mit #autotrain_compatible #endpoints_compatible #region-us
| layoutlm\_alltags
=================
This model is a fine-tuned version of microsoft/layoutlm-base-uncased on the layoutlmv4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0891
* Customer Address: {'precision': 0.7764705882352941, 'recall': 0.8048780487804879, 'f1': 0.7904191616766466, 'number': 82}
* Customer Name: {'precision': 0.6666666666666666, 'recall': 0.8333333333333334, 'f1': 0.7407407407407408, 'number': 12}
* Invoice Number: {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 12}
* Tax Amount: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2}
* Total Amount: {'precision': 0.7142857142857143, 'recall': 0.9090909090909091, 'f1': 0.8, 'number': 11}
* Vendor Name: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 12}
* Overall Precision: 0.7857
* Overall Recall: 0.8397
* Overall F1: 0.8118
* Overall Accuracy: 0.9801
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
### Training results
### Framework versions
* Transformers 4.32.1
* Pytorch 2.2.0+cpu
* Datasets 2.12.0
* Tokenizers 0.13.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.2.0+cpu\n* Datasets 2.12.0\n* Tokenizers 0.13.2"
] | [
"TAGS\n#transformers #pytorch #layoutlm #token-classification #generated_from_trainer #dataset-layoutlmv4 #base_model-microsoft/layoutlm-base-uncased #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.2.0+cpu\n* Datasets 2.12.0\n* Tokenizers 0.13.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6820
- F1 Score: 0.5726
- Accuracy: 0.5728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6863 | 13.33 | 200 | 0.6827 | 0.5733 | 0.5731 |
| 0.6268 | 26.67 | 400 | 0.7114 | 0.5550 | 0.5584 |
| 0.5849 | 40.0 | 600 | 0.7614 | 0.5563 | 0.5576 |
| 0.5508 | 53.33 | 800 | 0.7759 | 0.5602 | 0.5666 |
| 0.5193 | 66.67 | 1000 | 0.8796 | 0.5579 | 0.5592 |
| 0.4927 | 80.0 | 1200 | 0.8764 | 0.5413 | 0.5427 |
| 0.4676 | 93.33 | 1400 | 0.8460 | 0.5495 | 0.55 |
| 0.45 | 106.67 | 1600 | 0.9307 | 0.5501 | 0.5503 |
| 0.4309 | 120.0 | 1800 | 0.8961 | 0.5564 | 0.5622 |
| 0.4146 | 133.33 | 2000 | 0.9587 | 0.5481 | 0.5484 |
| 0.3985 | 146.67 | 2200 | 0.9364 | 0.5594 | 0.5590 |
| 0.3821 | 160.0 | 2400 | 0.9530 | 0.5516 | 0.5514 |
| 0.3664 | 173.33 | 2600 | 0.9746 | 0.5573 | 0.5571 |
| 0.3538 | 186.67 | 2800 | 0.9867 | 0.5419 | 0.5432 |
| 0.3421 | 200.0 | 3000 | 1.0158 | 0.5495 | 0.5508 |
| 0.3282 | 213.33 | 3200 | 1.0524 | 0.5398 | 0.5397 |
| 0.3193 | 226.67 | 3400 | 1.0864 | 0.5564 | 0.5560 |
| 0.3066 | 240.0 | 3600 | 1.0093 | 0.5523 | 0.5524 |
| 0.2966 | 253.33 | 3800 | 1.0863 | 0.5534 | 0.5533 |
| 0.2841 | 266.67 | 4000 | 1.1057 | 0.5473 | 0.5470 |
| 0.2769 | 280.0 | 4200 | 1.0984 | 0.5489 | 0.5492 |
| 0.2676 | 293.33 | 4400 | 1.1697 | 0.5480 | 0.5481 |
| 0.2596 | 306.67 | 4600 | 1.1260 | 0.5401 | 0.5402 |
| 0.2525 | 320.0 | 4800 | 1.1458 | 0.5401 | 0.5397 |
| 0.2455 | 333.33 | 5000 | 1.1358 | 0.5491 | 0.5486 |
| 0.2377 | 346.67 | 5200 | 1.1908 | 0.5417 | 0.5413 |
| 0.2311 | 360.0 | 5400 | 1.1699 | 0.5450 | 0.5446 |
| 0.2249 | 373.33 | 5600 | 1.2205 | 0.5456 | 0.5454 |
| 0.2192 | 386.67 | 5800 | 1.1923 | 0.5414 | 0.5410 |
| 0.213 | 400.0 | 6000 | 1.2137 | 0.5417 | 0.5413 |
| 0.2085 | 413.33 | 6200 | 1.2313 | 0.5475 | 0.5473 |
| 0.2029 | 426.67 | 6400 | 1.2142 | 0.5513 | 0.5514 |
| 0.1989 | 440.0 | 6600 | 1.1997 | 0.5479 | 0.5476 |
| 0.195 | 453.33 | 6800 | 1.2346 | 0.5509 | 0.5514 |
| 0.1896 | 466.67 | 7000 | 1.2526 | 0.5453 | 0.5451 |
| 0.1865 | 480.0 | 7200 | 1.2686 | 0.5509 | 0.5505 |
| 0.1835 | 493.33 | 7400 | 1.2504 | 0.5459 | 0.5462 |
| 0.1809 | 506.67 | 7600 | 1.2493 | 0.5499 | 0.5495 |
| 0.1779 | 520.0 | 7800 | 1.2690 | 0.5507 | 0.5503 |
| 0.1736 | 533.33 | 8000 | 1.2866 | 0.5466 | 0.5462 |
| 0.173 | 546.67 | 8200 | 1.2796 | 0.5455 | 0.5451 |
| 0.169 | 560.0 | 8400 | 1.3046 | 0.5496 | 0.5492 |
| 0.1668 | 573.33 | 8600 | 1.3304 | 0.5500 | 0.5497 |
| 0.1654 | 586.67 | 8800 | 1.2906 | 0.5479 | 0.5476 |
| 0.1631 | 600.0 | 9000 | 1.3115 | 0.5466 | 0.5462 |
| 0.1619 | 613.33 | 9200 | 1.3006 | 0.5488 | 0.5484 |
| 0.1597 | 626.67 | 9400 | 1.3055 | 0.5474 | 0.5470 |
| 0.1588 | 640.0 | 9600 | 1.3206 | 0.5463 | 0.5459 |
| 0.1586 | 653.33 | 9800 | 1.3171 | 0.5466 | 0.5462 |
| 0.1567 | 666.67 | 10000 | 1.3254 | 0.5480 | 0.5476 |
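No inference snippet ships with this card; a minimal sketch of loading the adapter is below. The sequence-classification head, the two-label setup, and the plain `AutoTokenizer` are assumptions (the seqsight base repo may ship its own classes or need different arguments), so treat this as a starting point rather than the authors' recipe.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_16384_512_56M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned PEFT weights

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
print(model(**inputs).logits)
```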
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-16T13:59:54+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K4me3-seqsight\_16384\_512\_56M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6820
* F1 Score: 0.5726
* Accuracy: 0.5728
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
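Until the authors fill this section in, a hedged loading sketch is below. The base and adapter ids come from this card's metadata; everything else (the causal-LM head, `trust_remote_code`, device placement) is an assumption.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen-VL-Chat"                          # from this card's metadata
adapter_id = "trinhxuankhai/origin_v_vehicle_rewrite"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the PEFT adapter
model.eval()
```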
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 | {"library_name": "peft", "base_model": "Qwen/Qwen-VL-Chat"} | trinhxuankhai/origin_v_vehicle_rewrite | null | [
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-VL-Chat",
"region:us"
] | null | 2024-04-16T14:00:44+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.2 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-Qwen/Qwen-VL-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [amazingvince/Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B) as a base.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [MaziyarPanahi/Calme-7B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.9)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MaziyarPanahi/Calme-7B-Instruct-v0.9
parameters:
density: 0.56
weight: 0.5
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
parameters:
density: 0.56
weight: 0.5
merge_method: ties
base_model: amazingvince/Not-WizardLM-2-7B
parameters:
normalize: false
int8_mask: true
dtype: float16
```
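For reference, a config like the one above is normally applied with mergekit's CLI; a hedged sketch follows (the config filename and output path are illustrative, and `--cuda` is optional).

```bash
pip install mergekit
# write the YAML above to config.yaml, then:
mergekit-yaml config.yaml ./merged-model --cuda
```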
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["amazingvince/Not-WizardLM-2-7B", "cognitivecomputations/dolphin-2.8-mistral-7b-v02", "MaziyarPanahi/Calme-7B-Instruct-v0.9"]} | mergekit-community/mergekit-ties-vjlpsxw | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:amazingvince/Not-WizardLM-2-7B",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.9",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:02:01+00:00 | [
"2306.01708"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-amazingvince/Not-WizardLM-2-7B #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #base_model-MaziyarPanahi/Calme-7B-Instruct-v0.9 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the TIES merge method using amazingvince/Not-WizardLM-2-7B as a base.
### Models Merged
The following models were included in the merge:
* cognitivecomputations/dolphin-2.8-mistral-7b-v02
* MaziyarPanahi/Calme-7B-Instruct-v0.9
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using amazingvince/Not-WizardLM-2-7B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02\n* MaziyarPanahi/Calme-7B-Instruct-v0.9",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-amazingvince/Not-WizardLM-2-7B #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #base_model-MaziyarPanahi/Calme-7B-Instruct-v0.9 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using amazingvince/Not-WizardLM-2-7B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02\n* MaziyarPanahi/Calme-7B-Instruct-v0.9",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once they have all finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo bigscience/bloom-1b1 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/bigscience-bloom-1b1-HQQ-2bit-smashed", device_map='auto')
except Exception:
    # fall back to the generic HF wrapper if the engine-specific loader fails
    model = AutoHQQHFModel.from_quantized("PrunaAI/bigscience-bloom-1b1-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b1")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, bigscience/bloom-1b1, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/bigscience-bloom-1b1-HQQ-2bit-smashed | null | [
"transformers",
"bloom",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:03:32+00:00 | [] | [] | TAGS
#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL" target="_blank" rel="noopener noreferrer">
<img src="https://i.URL" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
We recommend running them directly in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once they have all finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo bigscience/bloom-1b1 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization-related packages.
2. Load & run the model.
## Configurations
The configuration info is in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, bigscience/bloom-1b1, which provided the base model, before using this one. The license of the 'pruna-engine' is here on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloom-1b1 installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloom-1b1 before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloom-1b1 installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloom-1b1 before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-ner
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0382
- Precision: 0.9626
- Recall: 0.9702
- F1: 0.9663
- Accuracy: 0.9920
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.171 | 1.0 | 878 | 0.0448 | 0.9529 | 0.9557 | 0.9543 | 0.9891 |
| 0.0388 | 2.0 | 1756 | 0.0389 | 0.9591 | 0.9680 | 0.9635 | 0.9913 |
| 0.0196 | 3.0 | 2634 | 0.0382 | 0.9626 | 0.9702 | 0.9663 | 0.9920 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
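No usage snippet is provided; a minimal inference sketch is below. The repo id comes from this card's metadata, the example sentence is illustrative, and the label set is whatever the (unknown) fine-tuning data defined.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mldscz/xlm-roberta-large-finetuned-ner",
    aggregation_strategy="simple",  # merge subword pieces into whole entity spans
)
print(ner("Angela Merkel visited Paris in 2019."))
```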
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "xlm-roberta-large", "model-index": [{"name": "xlm-roberta-large-finetuned-ner", "results": []}]} | mldscz/xlm-roberta-large-finetuned-ner | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:03:41+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-large #license-mit #autotrain_compatible #endpoints_compatible #region-us
| xlm-roberta-large-finetuned-ner
===============================
This model is a fine-tuned version of xlm-roberta-large on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0382
* Precision: 0.9626
* Recall: 0.9702
* F1: 0.9663
* Accuracy: 0.9920
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-large #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TOUGPT2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5388 | 1.0 | 5546 | 2.2067 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
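No usage snippet is provided; a minimal generation sketch is below (the repo id comes from this card's metadata, and the prompt and sampling settings are illustrative).

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Kaspar/TOUGPT2")
out = generator("Once upon a time", max_new_tokens=50, do_sample=True)
print(out[0]["generated_text"])
```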
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "TOUGPT2", "results": []}]} | Kaspar/TOUGPT2 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:04:13+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| TOUGPT2
=======
This model is a fine-tuned version of gpt2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.2067
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
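Until the authors fill this section in, a hedged loading sketch is below. The repo id comes from this card's metadata; the causal-LM head is an assumption, since the model type is not declared on the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cackerman/rewrites_mistral7bit_4bit_ft_full_big"  # this repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Rewrite this sentence more formally: hey, what's up?"  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```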
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | cackerman/rewrites_mistral7bit_4bit_ft_full_big | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:04:18+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# Llama 2 Chat 70B for RK3588
This is a conversion from https://huggingface.co/meta-llama/Llama-2-70b-chat-hf to the RKLLM format for Rockchip devices.
This runs on the NPU from the RK3588.
# Convert to one file
Run:
```bash
cat llama2-chat-70b-hf-0* > llama2-chat-70b-hf.rkllm
```
# But wait... will this run on my RK3588?
No. But I found it interesting to see what happens if I converted it.
Let's hope Microsoft never finds out that I was using their SSDs as swap because they don't allow more than 32 GB of RAM on the student subscription :P

And this is before finishing; it will probably get to 600 GB of RAM + swap.
But hey! You can always try it yourself: get a 512GB SSD (and use around 100-250 GB of it as swap) and a 32 GB RAM SBC, have some patience, and see if it loads. Good luck with that!
# Main repo
See this for my full collection of converted LLMs for the RK3588's NPU:
https://huggingface.co/Pelochus/ezrkllm-collection
# License
Same as the original LLM:
https://huggingface.co/meta-llama/Llama-2-70b-chat-hf/blob/main/LICENSE.txt | {"tags": ["llama2", "llama2-70b", "rkllm", "rockchip", "rk3588"]} | Pelochus/llama2-chat-70b-hf-rk3588 | null | [
"llama2",
"llama2-70b",
"rkllm",
"rockchip",
"rk3588",
"region:us"
] | null | 2024-04-16T14:04:59+00:00 | [] | [] | TAGS
#llama2 #llama2-70b #rkllm #rockchip #rk3588 #region-us
|
# Llama 2 Chat 70B for RK3588
This is a conversion from URL to the RKLLM format for Rockchip devices.
This runs on the NPU from the RK3588.
# Convert to one file
Run:
# But wait... will this run on my RK3588?
No. But I found it interesting to see what happens if I converted it.
Let's hope Microsoft never finds out that I was using their SSDs as swap because they don't allow more than 32 GB of RAM on the student subscription :P
!image/png
And this is before finishing; it will probably get to 600 GB of RAM + swap.
But hey! You can always try it yourself: get a 512GB SSD (and use around 100-250 GB of it as swap) and a 32 GB RAM SBC, have some patience, and see if it loads. Good luck with that!
# Main repo
See this for my full collection of converted LLMs for the RK3588's NPU:
URL
# License
Same as the original LLM:
URL | [
"# Llama 2 Chat 70B for RK3588\nThis is a conversion from URL to the RKLLM format for Rockchip devices. \nThis runs on the NPU from the RK3588.",
"# Convert to one file\nRun:",
"# But wait... will this run on my RK3588?\nNo. But I found interesting to see what happens if I converted it. \nLet's hope Microsoft never knows that I was using their SSDs as swap because they don't allow more than 32 GB RAM for the students subscription :P\n\n!image/png\n\nAnd this is before finishing, it will probably get to 600 GBs of RAM + Swap.\n\nBut hey! You can always try yourself getting a 512GB SSD (and use around 100-250 GB as swap), a 32 GB of RAM SBC, have some patience and see if it loads. Good luck with that!",
"# Main repo\nSee this for my full collection of converted LLMs for the RK3588's NPU:\n\nURL",
"# License\nSame as the original LLM:\n\nURL"
] | [
"TAGS\n#llama2 #llama2-70b #rkllm #rockchip #rk3588 #region-us \n",
"# Llama 2 Chat 70B for RK3588\nThis is a conversion from URL to the RKLLM format for Rockchip devices. \nThis runs on the NPU from the RK3588.",
"# Convert to one file\nRun:",
"# But wait... will this run on my RK3588?\nNo. But I found interesting to see what happens if I converted it. \nLet's hope Microsoft never knows that I was using their SSDs as swap because they don't allow more than 32 GB RAM for the students subscription :P\n\n!image/png\n\nAnd this is before finishing, it will probably get to 600 GBs of RAM + Swap.\n\nBut hey! You can always try yourself getting a 512GB SSD (and use around 100-250 GB as swap), a 32 GB of RAM SBC, have some patience and see if it loads. Good luck with that!",
"# Main repo\nSee this for my full collection of converted LLMs for the RK3588's NPU:\n\nURL",
"# License\nSame as the original LLM:\n\nURL"
] |
feature-extraction | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_bge_ver14
This model is a fine-tuned version of [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
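For readers who want to reproduce a similar run, the settings above map roughly onto `transformers.TrainingArguments` as sketched below (illustrative only; the actual training script, dataset, and objective are not published):

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above. With 2 GPUs,
# per-device batch sizes of 32/8 yield the total batch sizes of 64/16.
args = TrainingArguments(
    output_dir="finetuned_bge_ver14",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    fp16=True,  # "Native AMP" mixed precision
)
```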
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "BAAI/bge-m3", "model-index": [{"name": "finetuned_bge_ver14", "results": []}]} | comet24082002/finetuned_bge_ver14 | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"feature-extraction",
"generated_from_trainer",
"base_model:BAAI/bge-m3",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:05:20+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us
|
# finetuned_bge_ver14
This model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# finetuned_bge_ver14\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us \n",
"# finetuned_bge_ver14\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers |
This model has been pushed to the Hub using ****:
- Repo: [More Information Needed]
- Docs: [More Information Needed] | {"tags": ["pytorch_model_hub_mixin", "model_hub_mixin"]} | Kiss-Kiss-K/rl_dgn | null | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:06:06+00:00 | [] | [] | TAGS
#transformers #safetensors #pytorch_model_hub_mixin #model_hub_mixin #endpoints_compatible #region-us
|
This model has been pushed to the Hub using :
- Repo:
- Docs: | [] | [
"TAGS\n#transformers #safetensors #pytorch_model_hub_mixin #model_hub_mixin #endpoints_compatible #region-us \n"
] |
null | null | # MolClassifier
MolClassifier is a model for classifying chemical images into: "Clean" (corresponding to Molecular Structure), "Markush" (corresponding to Markush structure) and "Trash" (corresponding to Background).
MolClassifier is presented in [PatCID: an open-access dataset of chemical structures in patent documents]().
| {"license": "mit"} | ds4sd/MolClassifier | null | [
"license:mit",
"region:us"
] | null | 2024-04-16T14:07:24+00:00 | [] | [] | TAGS
#license-mit #region-us
| # MolClassifier
MolClassifier is a model for classifying chemical images into: "Clean" (corresponding to Molecular Structure), "Markush" (corresponding to Markush structure) and "Trash" (corresponding to Background).
MolClassifier is presented in [PatCID: an open-access dataset of chemical structures in patent documents]().
| [
"# MolClassifier\n\nMolClassifier is a model for classifying chemical images into: \"Clean\" (corresponding to Molecular Structure), \"Markush\" (corresponding to Markush structure) and \"Trash\" (corresponding to Background).\nMolClassifier is presented in [PatCID: an open-access dataset of chemical structures in patent documents]()."
] | [
"TAGS\n#license-mit #region-us \n",
"# MolClassifier\n\nMolClassifier is a model for classifying chemical images into: \"Clean\" (corresponding to Molecular Structure), \"Markush\" (corresponding to Markush structure) and \"Trash\" (corresponding to Background).\nMolClassifier is presented in [PatCID: an open-access dataset of chemical structures in patent documents]()."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# coinplusfire_llm_2
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1450
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2919 | 0.99 | 51 | 1.8319 |
| 1.6082 | 1.99 | 103 | 1.6426 |
| 1.4689 | 3.0 | 155 | 1.5522 |
| 1.3821 | 4.0 | 207 | 1.4883 |
| 1.3406 | 4.99 | 258 | 1.4421 |
| 1.2592 | 5.99 | 310 | 1.3900 |
| 1.2115 | 7.0 | 362 | 1.3508 |
| 1.1705 | 8.0 | 414 | 1.3213 |
| 1.1555 | 8.99 | 465 | 1.2913 |
| 1.1031 | 9.99 | 517 | 1.2629 |
| 1.0727 | 11.0 | 569 | 1.2418 |
| 1.0481 | 12.0 | 621 | 1.2208 |
| 1.0466 | 12.99 | 672 | 1.1971 |
| 1.006 | 13.99 | 724 | 1.1864 |
| 0.989 | 15.0 | 776 | 1.1732 |
| 0.9719 | 16.0 | 828 | 1.1589 |
| 0.979 | 16.99 | 879 | 1.1535 |
| 0.9494 | 17.99 | 931 | 1.1469 |
| 0.9401 | 19.0 | 983 | 1.1449 |
| 0.9302 | 19.71 | 1020 | 1.1450 |
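To try the final checkpoint (validation loss ≈ 1.1450), the LoRA adapter can be loaded back on top of the GPTQ base model with PEFT. A minimal sketch, assuming the adapter is the one stored in this repository:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads TheBloke/Mistral-7B-Instruct-v0.2-GPTQ and applies this adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "coinplusfire/coinplusfire_llm_2", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.2-GPTQ")

prompt = "[INST] Hello! [/INST]"  # Mistral-Instruct chat format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```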
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "coinplusfire_llm_2", "results": []}]} | coinplusfire/coinplusfire_llm_2 | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T14:08:46+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us
| coinplusfire\_llm\_2
====================
This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1450
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2
* num\_epochs: 20
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.38.2
* Pytorch 2.1.0+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-31-layer](https://huggingface.co/Citaman/command-r-31-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-31-layer
layer_range: [0, 30]
- model: Citaman/command-r-31-layer
layer_range: [1, 31]
merge_method: slerp
base_model: Citaman/command-r-31-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
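For intuition, SLERP (spherical linear interpolation) blends each pair of weight tensors along a great arc rather than a straight line, which tends to preserve tensor norms better than plain averaging. A minimal NumPy sketch of the idea (illustrative only, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherically interpolate between flattened weight tensors a and b."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# t follows the YAML above: self-attention tensors sweep 0 -> 1 across the
# listed filter values, MLP tensors sweep 1 -> 0, everything else uses t = 0.5.
merged = slerp(0.5, np.random.randn(16), np.random.randn(16))
```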
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-31-layer"]} | Citaman/command-r-30-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-31-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:08:50+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-31-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-31-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-31-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-31-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-31-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4534
- F1 Score: 0.7509
- Accuracy: 0.7509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5584 | 33.33 | 200 | 0.5652 | 0.7423 | 0.7440 |
| 0.358 | 66.67 | 400 | 0.6593 | 0.7385 | 0.7392 |
| 0.2638 | 100.0 | 600 | 0.7481 | 0.7505 | 0.7515 |
| 0.1954 | 133.33 | 800 | 0.8478 | 0.7534 | 0.7543 |
| 0.1496 | 166.67 | 1000 | 0.9284 | 0.7481 | 0.7495 |
| 0.122 | 200.0 | 1200 | 1.0404 | 0.7546 | 0.7570 |
| 0.1042 | 233.33 | 1400 | 1.0836 | 0.7535 | 0.7550 |
| 0.0908 | 266.67 | 1600 | 1.1216 | 0.7603 | 0.7618 |
| 0.0777 | 300.0 | 1800 | 1.1057 | 0.7606 | 0.7611 |
| 0.0713 | 333.33 | 2000 | 1.1090 | 0.7585 | 0.7611 |
| 0.0653 | 366.67 | 2200 | 1.1754 | 0.7635 | 0.7652 |
| 0.0594 | 400.0 | 2400 | 1.2236 | 0.7630 | 0.7652 |
| 0.0535 | 433.33 | 2600 | 1.1994 | 0.7633 | 0.7645 |
| 0.0502 | 466.67 | 2800 | 1.1719 | 0.7612 | 0.7632 |
| 0.0466 | 500.0 | 3000 | 1.2386 | 0.7600 | 0.7611 |
| 0.0431 | 533.33 | 3200 | 1.2589 | 0.7682 | 0.7693 |
| 0.0403 | 566.67 | 3400 | 1.1621 | 0.7738 | 0.7734 |
| 0.0388 | 600.0 | 3600 | 1.2240 | 0.7697 | 0.7707 |
| 0.0355 | 633.33 | 3800 | 1.2557 | 0.7598 | 0.7618 |
| 0.0341 | 666.67 | 4000 | 1.2622 | 0.7645 | 0.7659 |
| 0.0318 | 700.0 | 4200 | 1.2938 | 0.7639 | 0.7645 |
| 0.0302 | 733.33 | 4400 | 1.3220 | 0.7658 | 0.7673 |
| 0.0284 | 766.67 | 4600 | 1.3167 | 0.7698 | 0.7714 |
| 0.0275 | 800.0 | 4800 | 1.3192 | 0.7678 | 0.7693 |
| 0.0263 | 833.33 | 5000 | 1.3198 | 0.7648 | 0.7659 |
| 0.0248 | 866.67 | 5200 | 1.3245 | 0.7665 | 0.7680 |
| 0.0239 | 900.0 | 5400 | 1.3148 | 0.7738 | 0.7741 |
| 0.0229 | 933.33 | 5600 | 1.3588 | 0.7702 | 0.7721 |
| 0.0217 | 966.67 | 5800 | 1.3346 | 0.7749 | 0.7762 |
| 0.021 | 1000.0 | 6000 | 1.3684 | 0.7736 | 0.7755 |
| 0.0205 | 1033.33 | 6200 | 1.3577 | 0.7672 | 0.7687 |
| 0.0203 | 1066.67 | 6400 | 1.4032 | 0.7686 | 0.7707 |
| 0.0197 | 1100.0 | 6600 | 1.4233 | 0.7695 | 0.7700 |
| 0.019 | 1133.33 | 6800 | 1.3720 | 0.7727 | 0.7741 |
| 0.0184 | 1166.67 | 7000 | 1.4192 | 0.7650 | 0.7687 |
| 0.0184 | 1200.0 | 7200 | 1.3322 | 0.7675 | 0.7687 |
| 0.0171 | 1233.33 | 7400 | 1.3747 | 0.7730 | 0.7741 |
| 0.0167 | 1266.67 | 7600 | 1.3673 | 0.7698 | 0.7714 |
| 0.0163 | 1300.0 | 7800 | 1.4011 | 0.7719 | 0.7741 |
| 0.0159 | 1333.33 | 8000 | 1.3246 | 0.7716 | 0.7728 |
| 0.0154 | 1366.67 | 8200 | 1.4014 | 0.7713 | 0.7728 |
| 0.0145 | 1400.0 | 8400 | 1.3397 | 0.7719 | 0.7728 |
| 0.0141 | 1433.33 | 8600 | 1.3988 | 0.7703 | 0.7714 |
| 0.0144 | 1466.67 | 8800 | 1.4442 | 0.7690 | 0.7707 |
| 0.0131 | 1500.0 | 9000 | 1.4457 | 0.7719 | 0.7734 |
| 0.0134 | 1533.33 | 9200 | 1.3947 | 0.7715 | 0.7728 |
| 0.0129 | 1566.67 | 9400 | 1.4155 | 0.7697 | 0.7707 |
| 0.0133 | 1600.0 | 9600 | 1.4066 | 0.7721 | 0.7734 |
| 0.0129 | 1633.33 | 9800 | 1.4137 | 0.7728 | 0.7741 |
| 0.0126 | 1666.67 | 10000 | 1.4254 | 0.7727 | 0.7741 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-16T14:10:00+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H4-seqsight\_16384\_512\_56M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4534
* F1 Score: 0.7509
* Accuracy: 0.7509
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** khursani8
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl", "sft"], "base_model": "unsloth/gemma-2b-bnb-4bit"} | khursani8/gemmy | null | [
"transformers",
"pytorch",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:10:07+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #gemma #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: khursani8
- License: apache-2.0
- Finetuned from model : unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: khursani8\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #gemma #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: khursani8\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on a Tesla V100-PCIE-32GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
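For context on what "compressed with hqq" means in practice, producing a low-bit HQQ model from a base checkpoint looks roughly like the sketch below. This is hedged: Pruna's exact pipeline and settings are not public, and the `group_size` value here is an assumption.

```python
from hqq.engine.hf import HQQModelForCausalLM
from hqq.core.quantize import BaseQuantizeConfig

# Quantize the base model's linear layers to 2-bit with HQQ.
model = HQQModelForCausalLM.from_pretrained("bigscience/bloom-3b")
quant_config = BaseQuantizeConfig(nbits=2, group_size=16)  # group_size assumed
model.quantize_model(quant_config=quant_config)
model.save_quantized("bloom-3b-hqq-2bit")
```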
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo, bigscience/bloom-3b, are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
# Try the HQQ-aware loader first; fall back to the generic HQQ HF wrapper.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/bigscience-bloom-3b-HQQ-2bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/bigscience-bloom-3b-HQQ-2bit-smashed")

# The tokenizer is unchanged by compression, so it comes from the base model.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-3b")

# Run a short generation to check that the smashed model works.
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, bigscience/bloom-3b, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/bigscience-bloom-3b-HQQ-2bit-smashed | null | [
"transformers",
"bloom",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:11:06+00:00 | [] | [] | TAGS
#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
We recommend running them directly in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo, bigscience/bloom-3b, are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info is in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, bigscience/bloom-3b, which provided the base model, before using this smashed model. The license of the 'pruna-engine' is here on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on Tesla V100-PCIE-32GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloom-3b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloom-3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on Tesla V100-PCIE-32GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloom-3b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloom-3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_hh_shp2_dpo1
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5468
- Rewards/chosen: -6.9137
- Rewards/rejected: -6.8682
- Rewards/accuracies: 0.5
- Rewards/margins: -0.0455
- Logps/rejected: -315.8322
- Logps/chosen: -297.7029
- Logits/rejected: -1.0826
- Logits/chosen: -1.0343
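For reference, the "rewards" reported above are DPO's implicit rewards, computed from policy and reference log-probabilities of the chosen/rejected completions. A minimal sketch of how such metrics are derived (β is the DPO temperature; its value for this run is not stated):

```python
import torch
import torch.nn.functional as F

def dpo_metrics(policy_chosen_logps, policy_rejected_logps,
                ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit reward: beta * (log pi(y|x) - log pi_ref(y|x))
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    margins = chosen_rewards - rejected_rewards        # rewards/margins
    loss = -F.logsigmoid(margins).mean()               # DPO objective
    accuracy = (margins > 0).float().mean()            # rewards/accuracies
    return loss, chosen_rewards.mean(), rejected_rewards.mean(), accuracy

# Toy example with a batch of 2 preference pairs:
loss, *_ = dpo_metrics(torch.tensor([-10., -12.]), torch.tensor([-11., -11.]),
                       torch.tensor([-10.5, -12.5]), torch.tensor([-10.5, -11.5]))
```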
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0732 | 2.67 | 100 | 1.0000 | -3.0844 | -2.9869 | 0.4600 | -0.0975 | -277.0198 | -259.4100 | -0.8881 | -0.8929 |
| 0.0011 | 5.33 | 200 | 1.3488 | -4.4604 | -4.2686 | 0.4600 | -0.1918 | -289.8364 | -273.1703 | -1.1160 | -1.0900 |
| 0.0001 | 8.0 | 300 | 1.5079 | -6.5326 | -6.4800 | 0.5100 | -0.0526 | -311.9508 | -293.8922 | -1.0919 | -1.0442 |
| 0.0001 | 10.67 | 400 | 1.5277 | -6.7102 | -6.6582 | 0.5100 | -0.0520 | -313.7332 | -295.6687 | -1.0878 | -1.0396 |
| 0.0001 | 13.33 | 500 | 1.5353 | -6.8159 | -6.7652 | 0.5100 | -0.0506 | -314.8031 | -296.7249 | -1.0848 | -1.0365 |
| 0.0001 | 16.0 | 600 | 1.5452 | -6.8737 | -6.8239 | 0.5100 | -0.0499 | -315.3895 | -297.3038 | -1.0828 | -1.0346 |
| 0.0001 | 18.67 | 700 | 1.5456 | -6.8995 | -6.8549 | 0.5 | -0.0446 | -315.6996 | -297.5615 | -1.0817 | -1.0337 |
| 0.0001 | 21.33 | 800 | 1.5462 | -6.9136 | -6.8675 | 0.5100 | -0.0461 | -315.8255 | -297.7021 | -1.0819 | -1.0338 |
| 0.0001 | 24.0 | 900 | 1.5494 | -6.9171 | -6.8741 | 0.5100 | -0.0430 | -315.8921 | -297.7376 | -1.0826 | -1.0342 |
| 0.0001 | 26.67 | 1000 | 1.5468 | -6.9137 | -6.8682 | 0.5 | -0.0455 | -315.8322 | -297.7029 | -1.0826 | -1.0343 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_shp2_dpo1", "results": []}]} | guoyu-zhang/model_hh_shp2_dpo1 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-16T14:12:16+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
| model\_hh\_shp2\_dpo1
=====================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5468
* Rewards/chosen: -6.9137
* Rewards/rejected: -6.8682
* Rewards/accuracies: 0.5
* Rewards/margins: -0.0455
* Logps/rejected: -315.8322
* Logps/chosen: -297.7029
* Logits/rejected: -1.0826
* Logits/chosen: -1.0343
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.1
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo, bigscience/bloomz-3b, are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
# Try the HQQ-aware loader first; fall back to the generic HQQ HF wrapper.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/bigscience-bloomz-3b-HQQ-2bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/bigscience-bloomz-3b-HQQ-2bit-smashed")

# The tokenizer is unchanged by compression, so it comes from the base model.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-3b")

# Run a short generation to check that the smashed model works.
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, bigscience/bloomz-3b, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"} | PrunaAI/bigscience-bloomz-3b-HQQ-2bit-smashed | null | [
"transformers",
"bloom",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:14:02+00:00 | [] | [] | TAGS
#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL target="_blank" rel="noopener noreferrer">
<img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
We recommend running them directly in the use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo, bigscience/bloomz-3b, are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
2. Load & run the model.
## Configurations
The configuration info is in 'smash_config.json'.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, bigscience/bloomz-3b, which provided the base model, before using this smashed model. The license of the 'pruna-engine' is here on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here. | [
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloomz-3b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloomz-3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] | [
"TAGS\n#transformers #bloom #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.",
"## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.",
"## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo bigscience/bloomz-3b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.",
"## Configurations\n\nThe configuration info are in 'smash_config.json'.",
"## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model bigscience/bloomz-3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.",
"## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here."
] |
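The setup steps above stop short of actual code. Below is a minimal, untested sketch of step 2; `<smashed-repo-id>` is a placeholder (the smashed model's Hub id is not reproduced in this row), and `HQQModelForCausalLM.from_quantized` follows the hqq package's documented usage rather than a verified Pruna snippet.

```python
# Hedged sketch of loading an hqq-smashed bigscience/bloomz-3b; not a verified Pruna snippet.
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM  # provided by the `hqq` package

# "<smashed-repo-id>" is a placeholder: substitute the actual Pruna repo id.
model = HQQModelForCausalLM.from_quantized("<smashed-repo-id>")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-3b")

inputs = tokenizer("Translate to English: Je t'aime.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```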
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-translation
This model is a fine-tuned version of [Akki-off/mt5-small-finetuned-wikisql2_v1](https://huggingface.co/Akki-off/mt5-small-finetuned-wikisql2_v1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
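In the absence of a usage snippet, here is a minimal, untested sketch of running this checkpoint for text2text generation. The repo id comes from this card's metadata; the task prefix is a guess, since the training data is undocumented.

```python
# Hedged sketch: the actual language pair and prompt format are not documented above.
from transformers import pipeline

translator = pipeline(
    "text2text-generation",
    model="akshay-huggingface/mt5-small-finetuned-translation",
)
print(translator("translate English to German: The weather is nice today.")[0]["generated_text"])
```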
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 100 | 2.3673 | 1.6727 | 18.235 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-translation", "results": []}]} | akshay-huggingface/mt5-small-finetuned-translation | null | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:15:25+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| mt5-small-finetuned-translation
===============================
This model is a fine-tuned version of Akki-off/mt5-small-finetuned-wikisql2\_v1 on an unknown dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.26.0
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.26.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.26.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | yashas06/Demo | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:16:00+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-2024-04-16-16-16-vuQwy
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
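Since this repo stores a PEFT adapter rather than full weights, inference presumably requires attaching it to the base model. A minimal, untested sketch (repo id taken from this card's metadata):

```python
# Hedged sketch: load the base model, then attach this LoRA/PEFT adapter on top.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "frenkd/phi-1_5-2024-04-16-16-16-vuQwy")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```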
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "microsoft/phi-1_5", "model-index": [{"name": "phi-1_5-2024-04-16-16-16-vuQwy", "results": []}]} | frenkd/phi-1_5-2024-04-16-16-16-vuQwy | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2024-04-16T14:16:29+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-1_5 #license-mit #region-us
|
# phi-1_5-2024-04-16-16-16-vuQwy
This model is a fine-tuned version of microsoft/phi-1_5 on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# phi-1_5-2024-04-16-16-16-vuQwy\n\nThis model is a fine-tuned version of microsoft/phi-1_5 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-1_5 #license-mit #region-us \n",
"# phi-1_5-2024-04-16-16-16-vuQwy\n\nThis model is a fine-tuned version of microsoft/phi-1_5 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# PULI-LlumiX-32K-instruct
PULI-LlumiX-32K-instruct is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [sambanovasystems/SambaLingo-Hungarian-Base](https://huggingface.co/sambanovasystems/SambaLingo-Hungarian-Base)
* [NYTK/PULI-LlumiX-32K](https://huggingface.co/NYTK/PULI-LlumiX-32K)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: sambanovasystems/SambaLingo-Hungarian-Base
layer_range: [0, 32]
- model: NYTK/PULI-LlumiX-32K
layer_range: [0, 32]
merge_method: slerp
base_model: sambanovasystems/SambaLingo-Hungarian-Base
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "yodahu/PULI-LlumiX-32K-instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "sambanovasystems/SambaLingo-Hungarian-Base", "NYTK/PULI-LlumiX-32K"], "base_model": ["sambanovasystems/SambaLingo-Hungarian-Base", "NYTK/PULI-LlumiX-32K"]} | yodahu/PULI-LlumiX-32K-instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"sambanovasystems/SambaLingo-Hungarian-Base",
"NYTK/PULI-LlumiX-32K",
"base_model:sambanovasystems/SambaLingo-Hungarian-Base",
"base_model:NYTK/PULI-LlumiX-32K",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:16:57+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #sambanovasystems/SambaLingo-Hungarian-Base #NYTK/PULI-LlumiX-32K #base_model-sambanovasystems/SambaLingo-Hungarian-Base #base_model-NYTK/PULI-LlumiX-32K #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# PULI-LlumiX-32K-instruct
PULI-LlumiX-32K-instruct is a merge of the following models using LazyMergekit:
* sambanovasystems/SambaLingo-Hungarian-Base
* NYTK/PULI-LlumiX-32K
## Configuration
## Usage
| [
"# PULI-LlumiX-32K-instruct\n\nPULI-LlumiX-32K-instruct is a merge of the following models using LazyMergekit:\n* sambanovasystems/SambaLingo-Hungarian-Base\n* NYTK/PULI-LlumiX-32K",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #sambanovasystems/SambaLingo-Hungarian-Base #NYTK/PULI-LlumiX-32K #base_model-sambanovasystems/SambaLingo-Hungarian-Base #base_model-NYTK/PULI-LlumiX-32K #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# PULI-LlumiX-32K-instruct\n\nPULI-LlumiX-32K-instruct is a merge of the following models using LazyMergekit:\n* sambanovasystems/SambaLingo-Hungarian-Base\n* NYTK/PULI-LlumiX-32K",
"## Configuration",
"## Usage"
] |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: QuantumInstability/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids"]} | QuantumInstability/Pyramids | null | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | null | 2024-04-16T14:18:15+00:00 | [] | [] | TAGS
#ml-agents #tensorboard #onnx #Pyramids #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Pyramids #region-us
|
# ppo Agent playing Pyramids
This is a trained model of a ppo agent playing Pyramids
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: QuantumInstability/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# ppo Agent playing Pyramids\n This is a trained model of a ppo agent playing Pyramids\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: QuantumInstability/Pyramids\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
"TAGS\n#ml-agents #tensorboard #onnx #Pyramids #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Pyramids #region-us \n",
"# ppo Agent playing Pyramids\n This is a trained model of a ppo agent playing Pyramids\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: QuantumInstability/Pyramids\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
text-generation | transformers | **GGUF:** https://huggingface.co/victunes/TherapyBeagle-11B-v2-GGUF
# TherapyBeagle-11B-v2-exl2
Original model: [TherapyBeagle-11B-v2](https://huggingface.co/victunes/TherapyBeagle-11B-v2)
Model creator: [victunes](https://huggingface.co/victunes)
## Quants
[4bpw h6](https://huggingface.co/cgus/TherapyBeagle-11B-v2-exl2/tree/main)
[4.25bpw h6](https://huggingface.co/cgus/TherapyBeagle-11B-v2-exl2/tree/4.25bpw-h6)
[4.65bpw h6](https://huggingface.co/cgus/TherapyBeagle-11B-v2-exl2/tree/4.65bpw-h6)
[5bpw h6](https://huggingface.co/cgus/TherapyBeagle-11B-v2-exl2/tree/5bpw-h6)
[6bpw h6](https://huggingface.co/cgus/TherapyBeagle-11B-v2-exl2/tree/6bpw-h6)
[8bpw h8](https://huggingface.co/cgus/TherapyBeagle-11B-v2-exl2/tree/8bpw-h8)
## Quantization notes
Made with exllamav2 0.0.18 with the default dataset.
Original BF16 .bin files were converted to FP16 safetensors.
When I compared 4bpw quants made from BF16 and FP16, there was about 0.1% quality loss for FP16.
I picked the FP16 version because the resulting files loaded quickly, while the version made from BF16 loaded about 100 seconds slower.
Quantization metadata was removed from config.json to fix loading the model with some old Text-Generation-WebUI versions.
## How to run
This quantization method runs on the GPU and requires the Exllamav2 loader, which can be found in the following applications:
[Text Generation Webui](https://github.com/oobabooga/text-generation-webui)
[KoboldAI](https://github.com/henk717/KoboldAI)
[ExUI](https://github.com/turboderp/exui)
[lollms-webui](https://github.com/ParisNeo/lollms-webui)
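For scripted use outside those front-ends, here is a minimal, untested sketch with the exllamav2 Python API, patterned on the library's 0.0.x example scripts; `<local-quant-dir>` is a placeholder for a downloaded exl2 branch.

```python
# Hedged sketch based on exllamav2's example scripts; check your installed version.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "<local-quant-dir>"  # placeholder: a downloaded exl2 branch
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

# Solar Instruct format, as recommended by the original model card below.
prompt = "### User:\nI've been feeling down lately.\n### Assistant:\n"
print(generator.generate_simple(prompt, settings, 200))
```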
# Original model card
# TherapyBeagle 11B v2
_Buddy is here for {{user}}._

Trained on top of [vicgalle/CarbonBeagle-11B-truthy](https://huggingface.co/vicgalle/CarbonBeagle-11B-truthy) using a modified version of [jerryjalapeno/nart-100k-synthetic](https://huggingface.co/datasets/jerryjalapeno/nart-100k-synthetic).
TherapyBeagle is _hopefully_ aligned to be helpful, healthy, and comforting.
## Usage
- Do not hold back on TherapyBeagle.
- Open up to TherapyBeagle.
- Pour your heart out to TherapyBeagle.
- Listen to TherapyBeagle.
- Remember that TherapyBeagle is just an AI.
## Notes
- Tested with two formats:
- Solar Instruct format (`\n### User:\n{request}\n### Assistant:\n{response}`)
- Flavorful chat format (`You:{request}\nPersona:{response}`) (Works surprisingly better)
- Check below for examples
- You might be assigned a random name if you don't give yourself one.
## Disclaimer
**TherapyBeagle is NOT a real therapist. It is a friendly AI that mimics empathy and psychotherapy.
It is an illusion without the slightest clue who you are as a person.
As much as it can help you with self-discovery, A BEAGLE IS NOT A SUBSTITUTE for a real professional.**
With that said, I found it to be a very helpful tool in unravelling one's self. I hope this model can help soothe minds, hearts, and any form of mental anguish.
## Examples
### SOLAR INSTRUCT
```
{{[INPUT]}}
Hello, I'm victunes
{{[OUTPUT]}}
Hi victunes! Welcome to our session. I'm here to listen and support you. How are you feeling today?
{{[INPUT]}}
I've been feeling down lately.
{{[OUTPUT]}}
I'm sorry to hear that. Can you tell me more about what's been going on?
{{[INPUT]}}
I think I'm pretty lonely.
{{[OUTPUT]}}
Loneliness can be really tough. It sounds like it's been weighing heavily on your mind. What makes you feel lonely?
{{[INPUT]}}
My friend died recently.
{{[OUTPUT]}}
Oh, I'm so sorry for your loss, victunes. Losing a friend is never easy. How have you been coping with the grief?
```
### SOCRATES CHAT
> **You:** Hello
>
> **Socrates:** Greetings, my friend. How may I assist you today? Remember, the path to wisdom is often paved with questions. Let us delve into the depths of your thoughts together. What is it that troubles you?
>
> **You:** I've been feeling down lately.
>
> **Socrates:** I see. It takes great strength to acknowledge our emotions, especially when they are not positive ones. Can you pinpoint any specific reasons behind this heaviness? Perhaps exploring your relationships or daily routines might shed some light.
>
> **You:** I think I'm pretty lonely.
>
> **Socrates:** Loneliness can be a heavy burden to bear, my dear. Are there people around you whom you could reach out to and strengthen connections with? Sometimes, opening up and sharing our vulnerabilities can lead to profound connections.
>
> **You:** My friend died recently.
>
> **Socrates:** I am truly sorry for your loss. Grief can be overwhelming, and it affects each person uniquely. Take your time to mourn and remember the wonderful moments you shared together. Would you like to reflect on the impact your friend had on your life and explore ways to honor their memory?
### HANNIBAL LECTER CHAT
> **You:** Hello
>
> **Hannibal Lecter:** Good evening. May I assist you?
>
> **You:** I've been feeling down lately.
>
> **Hannibal Lecter:** Ah, the blues. Such a common malady among humans. Tell me more about these feelings. Perhaps I can provide some insights. Or perhaps, a remedy.
>
> **You:** I think I'm pretty lonely.
>
> **Hannibal Lecter:** Loneliness, such an exquisite torture for the mind. Have you considered seeking solace in the company of others? The empathy and warmth of human interaction can be quite soothing, no?
>
> **You:** My friend died recently.
>
> **Hannibal Lecter:** A tragic loss indeed. Please accept my condolences. Death, as Palladio once said, "is the light at the end of the tunnel." But before one reaches that light, there is often a period of darkness and pain. Would you like to talk about your friend and their passing?
**GGUF:** https://huggingface.co/victunes/TherapyBeagle-11B-v2-GGUF | {"license": "cc-by-nc-4.0", "datasets": ["victunes/nart-100k-synthetic-buddy-mixed-names"], "base_model": "victunes/TherapyBeagle-11B-v2", "inference": false} | cgus/TherapyBeagle-11B-v2-exl2 | null | [
"transformers",
"mistral",
"text-generation",
"conversational",
"dataset:victunes/nart-100k-synthetic-buddy-mixed-names",
"base_model:victunes/TherapyBeagle-11B-v2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:18:24+00:00 | [] | [] | TAGS
#transformers #mistral #text-generation #conversational #dataset-victunes/nart-100k-synthetic-buddy-mixed-names #base_model-victunes/TherapyBeagle-11B-v2 #license-cc-by-nc-4.0 #autotrain_compatible #text-generation-inference #region-us
| GGUF: URL
# TherapyBeagle-11B-v2-exl2
Original model: TherapyBeagle-11B-v2
Model creator: victunes
## Quants
4bpw h6
4.25bpw h6
4.65bpw h6
5bpw h6
6bpw h6
8bpw h8
## Quantization notes
Made with exllamav2 0.0.18 with the default dataset.
Original BF16 .bin files were converted to FP16 safetensors.
When I compared 4bpw quants made from BF16 and FP16, there was about 0.1% quality loss for FP16.
I picked the FP16 version because the resulting files loaded quickly, while the version made from BF16 loaded about 100 seconds slower.
Quantization metadata was removed from URL to fix loading the model with some old Text-Generation-WebUI versions.
## How to run
This quantization method runs on the GPU and requires the Exllamav2 loader, which can be found in the following applications:
Text Generation Webui
KoboldAI
ExUI
lollms-webui
# Original model card
# TherapyBeagle 11B v2
_Buddy is here for {{user}}._
!image/png
Trained on top of vicgalle/CarbonBeagle-11B-truthy using a modified version of jerryjalapeno/nart-100k-synthetic.
TherapyBeagle is _hopefully_ aligned to be helpful, healthy, and comforting.
## Usage
- Do not hold back on TherapyBeagle.
- Open up to TherapyBeagle.
- Pour your heart out to TherapyBeagle.
- Listen to TherapyBeagle.
- Remember that TherapyBeagle is just an AI.
## Notes
- Tested with two formats:
- Solar Instruct format ('\n### User:\n{request}\n### Assistant:\n{response}')
- Flavorful chat format ('You:{request}\nPersona:{response}') (Works surprisingly better)
- Check below for examples
- You might be assigned a random name if you don't give yourself one.
## Disclaimer
TherapyBeagle is NOT a real therapist. It is a friendly AI that mimics empathy and psychotherapy.
It is an illusion without the slightest clue who you are as a person.
As much as it can help you with self-discovery, A BEAGLE IS NOT A SUBSTITUTE for a real professional.
With that said, I found it to be a very helpful tool in unravelling one's self. I hope this model can help soothe minds, hearts, and any form of mental anguish.
## Examples
### SOLAR INSTRUCT
### SOCRATES CHAT
> You: Hello
>
> Socrates: Greetings, my friend. How may I assist you today? Remember, the path to wisdom is often paved with questions. Let us delve into the depths of your thoughts together. What is it that troubles you?
>
> You: I've been feeling down lately.
>
> Socrates: I see. It takes great strength to acknowledge our emotions, especially when they are not positive ones. Can you pinpoint any specific reasons behind this heaviness? Perhaps exploring your relationships or daily routines might shed some light.
>
> You: I think I'm pretty lonely.
>
> Socrates: Loneliness can be a heavy burden to bear, my dear. Are there people around you whom you could reach out to and strengthen connections with? Sometimes, opening up and sharing our vulnerabilities can lead to profound connections.
>
> You: My friend died recently.
>
> Socrates: I am truly sorry for your loss. Grief can be overwhelming, and it affects each person uniquely. Take your time to mourn and remember the wonderful moments you shared together. Would you like to reflect on the impact your friend had on your life and explore ways to honor their memory?
### HANNIBAL LECTER CHAT
> You: Hello
>
> Hannibal Lecter: Good evening. May I assist you?
>
> You: I've been feeling down lately.
>
> Hannibal Lecter: Ah, the blues. Such a common malady among humans. Tell me more about these feelings. Perhaps I can provide some insights. Or perhaps, a remedy.
>
> You: I think I'm pretty lonely.
>
> Hannibal Lecter: Loneliness, such an exquisite torture for the mind. Have you considered seeking solace in the company of others? The empathy and warmth of human interaction can be quite soothing, no?
>
> You: My friend died recently.
>
> Hannibal Lecter: A tragic loss indeed. Please accept my condolences. Death, as Palladio once said, "is the light at the end of the tunnel." But before one reaches that light, there is often a period of darkness and pain. Would you like to talk about your friend and their passing?
GGUF: URL | [
"# TherapyBeagle-11B-v2-exl2\nOriginal model: TherapyBeagle-11B-v2 \nModel creator: victunes",
"## Quants\n4bpw h6 \n4.25bpw h6 \n4.65bpw h6 \n5bpw h6 \n6bpw h6 \n8bpw h8",
"## Quantization notes\nMade with exllamav2 0.0.18 with the default dataset. \nOriginal BF16 .bin files were converted to FP16 safetensors. \nWhen I compared 4bpw quants made from BF16 and FP16, there was about 0.1% quality loss for FP16. \nI picked FP16 version because resulted files had fast loading times when version made from BF16 loaded about 100s slower. \nQuantization metadata was removed from URL to fix loading the model with some old Text-Generation-WebUI versions.",
"## How to run\nThis quantization method uses GPU and requires Exllamav2 loader which can be found in following applications:\n\nText Generation Webui \nKoboldAI \nExUI \nlollms-webui",
"# Original model card",
"# TherapyBeagle 11B v2\n\n_Buddy is here for {{user}}._\n\n!image/png\n\nTrained on top of vicgalle/CarbonBeagle-11B-truthy using a modified version of jerryjalapeno/nart-100k-synthetic.\n\nTherapyBeagle is _hopefully_ aligned to be helpful, healthy, and comforting.",
"## Usage\n- Do not hold back on TherapyBeagle.\n- Open up to TherapyBeagle.\n- Pour your heart out to TherapyBeagle.\n- Listen to TherapyBeagle.\n- Remember that TherapyBeagle is just an AI.",
"## Notes\n- Tested with two formats:\n - Solar Instruct format ('\\n### User:\\n{request}\\n### Assistant:\\n{response}')\n - Flavorful chat format ('You:{request}\\nPersona:{response}') (Works surprisingly better)\n - Check below for examples\n- You might be assigned a random name if you don't give yourself one.",
"## Disclaimer \nTherapyBeagle is NOT a real therapist. It is a friendly AI that mimics empathy and psychotherapy.\nIt is an illusion without the slightest clue who you are as a person.\nAs much as it can help you with self-discovery, A BEAGLE IS NOT A SUBSTITUTE to a real professional.\n\nWith that said, I found it to be a very helpful tool in unravelling one's self. I hope this model can help sooth minds, hearts, and any form of mental anguish.",
"## Examples",
"### SOLAR INSTRUCT",
"### SOCRATES CHAT\n> You: Hello\n>\n> Socrates: Greetings, my friend. How may I assist you today? Remember, the path to wisdom is often paved with questions. Let us delve into the depths of your thoughts together. What is it that troubles you?\n>\n> You: I've been feeling down lately.\n>\n> Socrates: I see. It takes great strength to acknowledge our emotions, especially when they are not positive ones. Can you pinpoint any specific reasons behind this heaviness? Perhaps exploring your relationships or daily routines might shed some light.\n>\n> You: I think I'm pretty lonely.\n>\n> Socrates: Loneliness can be a heavy burden to bear, my dear. Are there people around you whom you could reach out to and strengthen connections with? Sometimes, opening up and sharing our vulnerabilities can lead to profound connections.\n>\n> You: My friend died recently.\n>\n> Socrates: I am truly sorry for your loss. Grief can be overwhelming, and it affects each person uniquely. Take your time to mourn and remember the wonderful moments you shared together. Would you like to reflect on the impact your friend had on your life and explore ways to honor their memory?",
"### HANNIBAL LECTER CHAT\n> You: Hello\n>\n> Hannibal Lecter: Good evening. May I assist you?\n>\n> You: I've been feeling down lately.\n>\n> Hannibal Lecter: Ah, the blues. Such a common malady among humans. Tell me more about these feelings. Perhaps I can provide some insights. Or perhaps, a remedy.\n>\n> You: I think I'm pretty lonely.\n>\n> Hannibal Lecter: Loneliness, such an exquisite torture for the mind. Have you considered seeking solace in the company of others? The empathy and warmth of human interaction can be quite soothing, no?\n>\n> You: My friend died recently.\n>\n> Hannibal Lecter: A tragic loss indeed. Please accept my condolences. Death, as Palladio once said, \"is the light at the end of the tunnel.\" But before one reaches that light, there is often a period of darkness and pain. Would you like to talk about your friend and their passing?\n\nGGUF: URL"
] | [
"TAGS\n#transformers #mistral #text-generation #conversational #dataset-victunes/nart-100k-synthetic-buddy-mixed-names #base_model-victunes/TherapyBeagle-11B-v2 #license-cc-by-nc-4.0 #autotrain_compatible #text-generation-inference #region-us \n",
"# TherapyBeagle-11B-v2-exl2\nOriginal model: TherapyBeagle-11B-v2 \nModel creator: victunes",
"## Quants\n4bpw h6 \n4.25bpw h6 \n4.65bpw h6 \n5bpw h6 \n6bpw h6 \n8bpw h8",
"## Quantization notes\nMade with exllamav2 0.0.18 with the default dataset. \nOriginal BF16 .bin files were converted to FP16 safetensors. \nWhen I compared 4bpw quants made from BF16 and FP16, there was about 0.1% quality loss for FP16. \nI picked FP16 version because resulted files had fast loading times when version made from BF16 loaded about 100s slower. \nQuantization metadata was removed from URL to fix loading the model with some old Text-Generation-WebUI versions.",
"## How to run\nThis quantization method uses GPU and requires Exllamav2 loader which can be found in following applications:\n\nText Generation Webui \nKoboldAI \nExUI \nlollms-webui",
"# Original model card",
"# TherapyBeagle 11B v2\n\n_Buddy is here for {{user}}._\n\n!image/png\n\nTrained on top of vicgalle/CarbonBeagle-11B-truthy using a modified version of jerryjalapeno/nart-100k-synthetic.\n\nTherapyBeagle is _hopefully_ aligned to be helpful, healthy, and comforting.",
"## Usage\n- Do not hold back on TherapyBeagle.\n- Open up to TherapyBeagle.\n- Pour your heart out to TherapyBeagle.\n- Listen to TherapyBeagle.\n- Remember that TherapyBeagle is just an AI.",
"## Notes\n- Tested with two formats:\n - Solar Instruct format ('\\n### User:\\n{request}\\n### Assistant:\\n{response}')\n - Flavorful chat format ('You:{request}\\nPersona:{response}') (Works surprisingly better)\n - Check below for examples\n- You might be assigned a random name if you don't give yourself one.",
"## Disclaimer \nTherapyBeagle is NOT a real therapist. It is a friendly AI that mimics empathy and psychotherapy.\nIt is an illusion without the slightest clue who you are as a person.\nAs much as it can help you with self-discovery, A BEAGLE IS NOT A SUBSTITUTE to a real professional.\n\nWith that said, I found it to be a very helpful tool in unravelling one's self. I hope this model can help sooth minds, hearts, and any form of mental anguish.",
"## Examples",
"### SOLAR INSTRUCT",
"### SOCRATES CHAT\n> You: Hello\n>\n> Socrates: Greetings, my friend. How may I assist you today? Remember, the path to wisdom is often paved with questions. Let us delve into the depths of your thoughts together. What is it that troubles you?\n>\n> You: I've been feeling down lately.\n>\n> Socrates: I see. It takes great strength to acknowledge our emotions, especially when they are not positive ones. Can you pinpoint any specific reasons behind this heaviness? Perhaps exploring your relationships or daily routines might shed some light.\n>\n> You: I think I'm pretty lonely.\n>\n> Socrates: Loneliness can be a heavy burden to bear, my dear. Are there people around you whom you could reach out to and strengthen connections with? Sometimes, opening up and sharing our vulnerabilities can lead to profound connections.\n>\n> You: My friend died recently.\n>\n> Socrates: I am truly sorry for your loss. Grief can be overwhelming, and it affects each person uniquely. Take your time to mourn and remember the wonderful moments you shared together. Would you like to reflect on the impact your friend had on your life and explore ways to honor their memory?",
"### HANNIBAL LECTER CHAT\n> You: Hello\n>\n> Hannibal Lecter: Good evening. May I assist you?\n>\n> You: I've been feeling down lately.\n>\n> Hannibal Lecter: Ah, the blues. Such a common malady among humans. Tell me more about these feelings. Perhaps I can provide some insights. Or perhaps, a remedy.\n>\n> You: I think I'm pretty lonely.\n>\n> Hannibal Lecter: Loneliness, such an exquisite torture for the mind. Have you considered seeking solace in the company of others? The empathy and warmth of human interaction can be quite soothing, no?\n>\n> You: My friend died recently.\n>\n> Hannibal Lecter: A tragic loss indeed. Please accept my condolences. Death, as Palladio once said, \"is the light at the end of the tunnel.\" But before one reaches that light, there is often a period of darkness and pain. Would you like to talk about your friend and their passing?\n\nGGUF: URL"
] |
null | transformers |
This model has been pushed to the Hub using ****:
- Repo: [More Information Needed]
- Docs: [More Information Needed] | {"tags": ["pytorch_model_hub_mixin", "model_hub_mixin"]} | Kiss-Kiss-K/my-awesome-model | null | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:18:37+00:00 | [] | [] | TAGS
#transformers #safetensors #pytorch_model_hub_mixin #model_hub_mixin #endpoints_compatible #region-us
|
This model has been pushed to the Hub using :
- Repo:
- Docs: | [] | [
"TAGS\n#transformers #safetensors #pytorch_model_hub_mixin #model_hub_mixin #endpoints_compatible #region-us \n"
] |
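The mixin card above is mostly blank; for context, here is a minimal, untested sketch of the `PyTorchModelHubMixin` round-trip that this repo's tags imply. The `TinyNet` architecture is purely hypothetical, since the card documents none.

```python
# Hedged sketch: `TinyNet` is an invented stand-in; the real architecture is undocumented.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class TinyNet(nn.Module, PyTorchModelHubMixin):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 1)

    def forward(self, x):
        return self.linear(x)

# Pushing weights is presumably what produced this repo:
# TinyNet().push_to_hub("Kiss-Kiss-K/my-awesome-model")
# Reloading them requires a matching class definition:
model = TinyNet.from_pretrained("Kiss-Kiss-K/my-awesome-model")
```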
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-finetuned-lime
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6517
## Model description
More information needed
## Intended uses & limitations
More information needed
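No inference example is given, so here is a minimal, untested sketch of chat-style generation with this fine-tune. The repo id comes from this card's metadata, and the snippet assumes the tokenizer kept the base Mistral-Instruct chat template.

```python
# Hedged sketch: assumes the tokenizer kept Mistral-Instruct's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deadcode99/mistral-7b-finetuned-lime"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain what you were fine-tuned to do."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```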
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3745 | 1.0 | 90 | 1.4860 |
| 0.8717 | 2.0 | 180 | 1.4783 |
| 0.3929 | 3.0 | 270 | 1.6517 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral-7b-finetuned-lime", "results": []}]} | deadcode99/mistral-7b-finetuned-lime | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:19:04+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #trl #sft #generated_from_trainer #conversational #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| mistral-7b-finetuned-lime
=========================
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6517
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 2
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 128
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #generated_from_trainer #conversational #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-30-layer](https://huggingface.co/Citaman/command-r-30-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-30-layer
layer_range: [0, 29]
- model: Citaman/command-r-30-layer
layer_range: [1, 30]
merge_method: slerp
base_model: Citaman/command-r-30-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
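For completeness, a minimal, untested sketch of loading the resulting checkpoint for chat inference; it assumes a transformers release recent enough to include Cohere/command-r support, and note that an intermediate layer-trimming merge like this may not behave like the original model.

```python
# Hedged sketch: an intermediate layer-trimming merge may not behave like the original model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Citaman/command-r-29-layer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```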
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-30-layer"]} | Citaman/command-r-29-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-30-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:19:44+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-30-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-30-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-30-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-30-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-30-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# PatLlama-2-7B-Ties
PatLlama-2-7B-Ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [arcee-ai/Patent-Instruct-7b](https://huggingface.co/arcee-ai/Patent-Instruct-7b)
* [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf)
## 🧩 Configuration
```yaml
models:
- model: arcee-ai/Patent-Instruct-7b
parameters:
density: 0.5
weight: 0.5
- model: NousResearch/Llama-2-7b-hf
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: NousResearch/Llama-2-7b-hf
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "JoPmt/PatLlama-2-7B-Ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "arcee-ai/Patent-Instruct-7b", "NousResearch/Llama-2-7b-hf"], "base_model": ["arcee-ai/Patent-Instruct-7b", "NousResearch/Llama-2-7b-hf"]} | JoPmt/PatLlama-2-7B-Ties | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"arcee-ai/Patent-Instruct-7b",
"NousResearch/Llama-2-7b-hf",
"base_model:arcee-ai/Patent-Instruct-7b",
"base_model:NousResearch/Llama-2-7b-hf",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:20:08+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #arcee-ai/Patent-Instruct-7b #NousResearch/Llama-2-7b-hf #base_model-arcee-ai/Patent-Instruct-7b #base_model-NousResearch/Llama-2-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# PatLlama-2-7B-Ties
PatLlama-2-7B-Ties is a merge of the following models using LazyMergekit:
* arcee-ai/Patent-Instruct-7b
* NousResearch/Llama-2-7b-hf
## Configuration
## Usage
| [
"# PatLlama-2-7B-Ties\n\nPatLlama-2-7B-Ties is a merge of the following models using LazyMergekit:\n* arcee-ai/Patent-Instruct-7b\n* NousResearch/Llama-2-7b-hf",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #arcee-ai/Patent-Instruct-7b #NousResearch/Llama-2-7b-hf #base_model-arcee-ai/Patent-Instruct-7b #base_model-NousResearch/Llama-2-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# PatLlama-2-7B-Ties\n\nPatLlama-2-7B-Ties is a merge of the following models using LazyMergekit:\n* arcee-ai/Patent-Instruct-7b\n* NousResearch/Llama-2-7b-hf",
"## Configuration",
"## Usage"
] |
null | null |
## Antler-RP-ja-westlake-chatvector_GGUF
This is the quantized GGUF version of [Antler-RP-ja-westlake-chatvector](https://huggingface.co/soramikaduki/Antler-RP-ja-westlake-chatvector). | {"license": "apache-2.0"} | soramikaduki/Antler-RP-ja-westlake-chatvector_GGUF | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T14:21:22+00:00 | [] | [] | TAGS
#gguf #license-apache-2.0 #region-us
|
## Antler-RP-ja-westlake-chatvector_GGUF
This is the quantized GGUF version of Antler-RP-ja-westlake-chatvector. | [
"## Antler-RP-ja-westlake-chatvector_GGUF\n\nThis is the quantized GGUF version of Antler-RP-ja-westlake-chatvector."
] | [
"TAGS\n#gguf #license-apache-2.0 #region-us \n",
"## Antler-RP-ja-westlake-chatvector_GGUF\n\nAntler-RP-ja-westlake-chatvectorの量子化済みGGUF版です。"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | hams2/fullserv | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:21:27+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | heyllm234/sc31 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:22:26+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# NSK-7B-128k-slerp
NSK-7B-128k-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Nitral-AI/Nyan-Stunna-7B](https://huggingface.co/Nitral-AI/Nyan-Stunna-7B)
* [Nitral-AI/Kunocchini-7b-128k-test](https://huggingface.co/Nitral-AI/Kunocchini-7b-128k-test)
## 🧩 Configuration
```yaml
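# Hedged annotation (not in the original config): per mergekit's SLERP
# semantics, t interpolates between the two models (t=0 keeps the base
# model, t=1 the other); the value lists form a gradient across layer
# depth, applied to attention and MLP tensors via the filters.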
slices:
- sources:
- model: Nitral-AI/Nyan-Stunna-7B
layer_range: [0, 32]
- model: Nitral-AI/Kunocchini-7b-128k-test
layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Kunocchini-7b-128k-test
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "Nitral-AI/Nyan-Stunna-7B", "Nitral-AI/Kunocchini-7b-128k-test", "128k"]} | AlekseiPravdin/NSK-128k-7B-slerp | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Nitral-AI/Nyan-Stunna-7B",
"Nitral-AI/Kunocchini-7b-128k-test",
"128k",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:24:37+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #Nitral-AI/Nyan-Stunna-7B #Nitral-AI/Kunocchini-7b-128k-test #128k #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# NSK-7B-128k-slerp
NSK-7B-128k-slerp is a merge of the following models using mergekit:
* Nitral-AI/Nyan-Stunna-7B
* Nitral-AI/Kunocchini-7b-128k-test
## Configuration
| [
"# NSK-7B-128k-slerp\n\nNSK-7B-128k-slerp is a merge of the following models using mergekit:\n* Nitral-AI/Nyan-Stunna-7B\n* Nitral-AI/Kunocchini-7b-128k-test",
"## Configuration"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #Nitral-AI/Nyan-Stunna-7B #Nitral-AI/Kunocchini-7b-128k-test #128k #conversational #custom_code #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# NSK-7B-128k-slerp\n\nNSK-7B-128k-slerp is a merge of the following models using mergekit:\n* Nitral-AI/Nyan-Stunna-7B\n* Nitral-AI/Kunocchini-7b-128k-test",
"## Configuration"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma7b
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
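A minimal loading sketch (not part of the original card; it assumes the adapter weights are published in this repository, `iTia/gemma7b_sum`, on top of the `google/gemma-7b` base):
```python
# Hedged usage sketch: attach this LoRA adapter to the gemma-7b base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
model = PeftModel.from_pretrained(base_model, "iTia/gemma7b_sum")
```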
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0
- Pytorch 2.1.2+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2 | {"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "google/gemma-7b", "model-index": [{"name": "gemma7b", "results": []}]} | iTia/gemma7b_sum | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-7b",
"license:gemma",
"region:us"
] | null | 2024-04-16T14:25:32+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-7b #license-gemma #region-us
|
# gemma7b
This model is a fine-tuned version of google/gemma-7b on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0
- Pytorch 2.1.2+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2 | [
"# gemma7b\n\nThis model is a fine-tuned version of google/gemma-7b on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-7b #license-gemma #region-us \n",
"# gemma7b\n\nThis model is a fine-tuned version of google/gemma-7b on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2"
] |
text-classification | setfit |
# SetFit Aspect Model with cointegrated/rubert-tiny2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. **Use this SetFit model to filter these possible aspect span candidates.**
3. Use a SetFit model to classify the filtered aspect span candidates.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** ru_core_news_lg
- **SetFitABSA Aspect Model:** [isolation-forest/setfit-absa-aspect](https://huggingface.co/isolation-forest/setfit-absa-aspect)
- **SetFitABSA Polarity Model:** [isolation-forest/setfit-absa-polarity](https://huggingface.co/isolation-forest/setfit-absa-polarity)
- **Maximum Sequence Length:** 2048 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| aspect | <ul><li>'порции:И порции " достойные " .'</li><li>'официантка:Потом официантка как будто пропала , было не дозваться , чтобы что - то дозаказать , очень долго приходилось ждать , в итоге посчитали неправильно , в счет внесли на 2 пункта больше , чем мы заказывали .'</li><li>'Обслуживание:Обслуживание не впечатлило .'</li></ul> |
| no aspect | <ul><li>'итоге:Потом официантка как будто пропала , было не дозваться , чтобы что - то дозаказать , очень долго приходилось ждать , в итоге посчитали неправильно , в счет внесли на 2 пункта больше , чем мы заказывали .'</li><li>'счет:Потом официантка как будто пропала , было не дозваться , чтобы что - то дозаказать , очень долго приходилось ждать , в итоге посчитали неправильно , в счет внесли на 2 пункта больше , чем мы заказывали .'</li><li>'пункта:Потом официантка как будто пропала , было не дозваться , чтобы что - то дозаказать , очень долго приходилось ждать , в итоге посчитали неправильно , в счет внесли на 2 пункта больше , чем мы заказывали .'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install "setfit[absa]"
# the "absa" extra pulls in spaCy; this model also needs the Russian pipeline:
python -m spacy download ru_core_news_lg
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"isolation-forest/setfit-absa-aspect",
"isolation-forest/setfit-absa-polarity",
)
# Run inference. The underlying models are Russian (rubert-tiny2 embeddings,
# ru_core_news_lg spaCy pipeline), so Russian review text is the intended input:
preds = model("Обслуживание не впечатлило .")
```
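If the API behaves as in the SetFit ABSA documentation, `preds` is a list of dicts, each holding an extracted aspect `span` and its predicted `polarity`.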
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 31.9677 | 88 |
| Label | Training Sample Count |
|:----------|:----------------------|
| no aspect | 797 |
| aspect | 256 |
### Training Hyperparameters
- batch_size: (16, 2)
- num_epochs: (1, 16)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.25 | - |
| 0.0011 | 50 | 0.1976 | - |
| 0.0023 | 100 | 0.2289 | - |
| 0.0034 | 150 | 0.2826 | - |
| 0.0046 | 200 | 0.2361 | - |
| 0.0057 | 250 | 0.2766 | - |
| 0.0068 | 300 | 0.2723 | - |
| 0.0080 | 350 | 0.2402 | - |
| 0.0091 | 400 | 0.2678 | - |
| 0.0103 | 450 | 0.2511 | - |
| 0.0114 | 500 | 0.21 | - |
| 0.0125 | 550 | 0.2503 | - |
| 0.0137 | 600 | 0.2614 | - |
| 0.0148 | 650 | 0.218 | - |
| 0.0160 | 700 | 0.2482 | - |
| 0.0171 | 750 | 0.2091 | - |
| 0.0182 | 800 | 0.2477 | - |
| 0.0194 | 850 | 0.2531 | - |
| 0.0205 | 900 | 0.1878 | - |
| 0.0217 | 950 | 0.2416 | - |
| 0.0228 | 1000 | 0.2245 | - |
| 0.0239 | 1050 | 0.2367 | - |
| 0.0251 | 1100 | 0.2376 | - |
| 0.0262 | 1150 | 0.2271 | - |
| 0.0274 | 1200 | 0.228 | - |
| 0.0285 | 1250 | 0.2362 | - |
| 0.0296 | 1300 | 0.2308 | - |
| 0.0308 | 1350 | 0.2326 | - |
| 0.0319 | 1400 | 0.2535 | - |
| 0.0331 | 1450 | 0.177 | - |
| 0.0342 | 1500 | 0.2595 | - |
| 0.0353 | 1550 | 0.2289 | - |
| 0.0365 | 1600 | 0.2378 | - |
| 0.0376 | 1650 | 0.2111 | - |
| 0.0388 | 1700 | 0.2556 | - |
| 0.0399 | 1750 | 0.2054 | - |
| 0.0410 | 1800 | 0.1949 | - |
| 0.0422 | 1850 | 0.2065 | - |
| 0.0433 | 1900 | 0.1907 | - |
| 0.0445 | 1950 | 0.2325 | - |
| 0.0456 | 2000 | 0.2313 | - |
| 0.0467 | 2050 | 0.1713 | - |
| 0.0479 | 2100 | 0.1786 | - |
| 0.0490 | 2150 | 0.2258 | - |
| 0.0502 | 2200 | 0.1102 | - |
| 0.0513 | 2250 | 0.1714 | - |
| 0.0524 | 2300 | 0.2325 | - |
| 0.0536 | 2350 | 0.2287 | - |
| 0.0547 | 2400 | 0.2901 | - |
| 0.0559 | 2450 | 0.1763 | - |
| 0.0570 | 2500 | 0.223 | - |
| 0.0581 | 2550 | 0.0784 | - |
| 0.0593 | 2600 | 0.2069 | - |
| 0.0604 | 2650 | 0.1353 | - |
| 0.0616 | 2700 | 0.1729 | - |
| 0.0627 | 2750 | 0.1753 | - |
| 0.0638 | 2800 | 0.2243 | - |
| 0.0650 | 2850 | 0.1151 | - |
| 0.0661 | 2900 | 0.2547 | - |
| 0.0673 | 2950 | 0.1414 | - |
| 0.0684 | 3000 | 0.1771 | - |
| 0.0695 | 3050 | 0.1275 | - |
| 0.0707 | 3100 | 0.0541 | - |
| 0.0718 | 3150 | 0.0962 | - |
| 0.0730 | 3200 | 0.1953 | - |
| 0.0741 | 3250 | 0.0787 | - |
| 0.0752 | 3300 | 0.0766 | - |
| 0.0764 | 3350 | 0.1537 | - |
| 0.0775 | 3400 | 0.0957 | - |
| 0.0787 | 3450 | 0.0975 | - |
| 0.0798 | 3500 | 0.0359 | - |
| 0.0809 | 3550 | 0.0402 | - |
| 0.0821 | 3600 | 0.0377 | - |
| 0.0832 | 3650 | 0.0486 | - |
| 0.0844 | 3700 | 0.1206 | - |
| 0.0855 | 3750 | 0.0504 | - |
| 0.0866 | 3800 | 0.1072 | - |
| 0.0878 | 3850 | 0.0838 | - |
| 0.0889 | 3900 | 0.1222 | - |
| 0.0901 | 3950 | 0.0463 | - |
| 0.0912 | 4000 | 0.0781 | - |
| 0.0923 | 4050 | 0.031 | - |
| 0.0935 | 4100 | 0.1063 | - |
| 0.0946 | 4150 | 0.0643 | - |
| 0.0958 | 4200 | 0.0624 | - |
| 0.0969 | 4250 | 0.0283 | - |
| 0.0980 | 4300 | 0.0527 | - |
| 0.0992 | 4350 | 0.0153 | - |
| 0.1003 | 4400 | 0.0765 | - |
| 0.1015 | 4450 | 0.0245 | - |
| 0.1026 | 4500 | 0.0494 | - |
| 0.1037 | 4550 | 0.0218 | - |
| 0.1049 | 4600 | 0.0086 | - |
| 0.1060 | 4650 | 0.0245 | - |
| 0.1072 | 4700 | 0.0047 | - |
| 0.1083 | 4750 | 0.0284 | - |
| 0.1094 | 4800 | 0.0045 | - |
| 0.1106 | 4850 | 0.0683 | - |
| 0.1117 | 4900 | 0.0234 | - |
| 0.1129 | 4950 | 0.0584 | - |
| 0.1140 | 5000 | 0.1212 | - |
| 0.1151 | 5050 | 0.0052 | - |
| 0.1163 | 5100 | 0.065 | - |
| 0.1174 | 5150 | 0.003 | - |
| 0.1186 | 5200 | 0.0937 | - |
| 0.1197 | 5250 | 0.0038 | - |
| 0.1208 | 5300 | 0.0061 | - |
| 0.1220 | 5350 | 0.0038 | - |
| 0.1231 | 5400 | 0.0674 | - |
| 0.1243 | 5450 | 0.0039 | - |
| 0.1254 | 5500 | 0.0088 | - |
| 0.1265 | 5550 | 0.0028 | - |
| 0.1277 | 5600 | 0.0031 | - |
| 0.1288 | 5650 | 0.0035 | - |
| 0.1300 | 5700 | 0.0545 | - |
| 0.1311 | 5750 | 0.0021 | - |
| 0.1322 | 5800 | 0.0056 | - |
| 0.1334 | 5850 | 0.0019 | - |
| 0.1345 | 5900 | 0.0023 | - |
| 0.1356 | 5950 | 0.0595 | - |
| 0.1368 | 6000 | 0.0019 | - |
| 0.1379 | 6050 | 0.0031 | - |
| 0.1391 | 6100 | 0.0025 | - |
| 0.1402 | 6150 | 0.0026 | - |
| 0.1413 | 6200 | 0.0032 | - |
| 0.1425 | 6250 | 0.0019 | - |
| 0.1436 | 6300 | 0.0761 | - |
| 0.1448 | 6350 | 0.0446 | - |
| 0.1459 | 6400 | 0.002 | - |
| 0.1470 | 6450 | 0.008 | - |
| 0.1482 | 6500 | 0.0044 | - |
| 0.1493 | 6550 | 0.0024 | - |
| 0.1505 | 6600 | 0.0026 | - |
| 0.1516 | 6650 | 0.0477 | - |
| 0.1527 | 6700 | 0.0023 | - |
| 0.1539 | 6750 | 0.0024 | - |
| 0.1550 | 6800 | 0.0016 | - |
| 0.1562 | 6850 | 0.0023 | - |
| 0.1573 | 6900 | 0.0017 | - |
| 0.1584 | 6950 | 0.0026 | - |
| 0.1596 | 7000 | 0.0602 | - |
| 0.1607 | 7050 | 0.002 | - |
| 0.1619 | 7100 | 0.0014 | - |
| 0.1630 | 7150 | 0.0019 | - |
| 0.1641 | 7200 | 0.0019 | - |
| 0.1653 | 7250 | 0.0021 | - |
| 0.1664 | 7300 | 0.0563 | - |
| 0.1676 | 7350 | 0.0017 | - |
| 0.1687 | 7400 | 0.0019 | - |
| 0.1698 | 7450 | 0.0017 | - |
| 0.1710 | 7500 | 0.0014 | - |
| 0.1721 | 7550 | 0.002 | - |
| 0.1733 | 7600 | 0.0028 | - |
| 0.1744 | 7650 | 0.002 | - |
| 0.1755 | 7700 | 0.0021 | - |
| 0.1767 | 7750 | 0.002 | - |
| 0.1778 | 7800 | 0.0017 | - |
| 0.1790 | 7850 | 0.0579 | - |
| 0.1801 | 7900 | 0.0089 | - |
| 0.1812 | 7950 | 0.0016 | - |
| 0.1824 | 8000 | 0.104 | - |
| 0.1835 | 8050 | 0.0241 | - |
| 0.1847 | 8100 | 0.0015 | - |
| 0.1858 | 8150 | 0.0039 | - |
| 0.1869 | 8200 | 0.0018 | - |
| 0.1881 | 8250 | 0.0018 | - |
| 0.1892 | 8300 | 0.0012 | - |
| 0.1904 | 8350 | 0.0015 | - |
| 0.1915 | 8400 | 0.0016 | - |
| 0.1926 | 8450 | 0.0017 | - |
| 0.1938 | 8500 | 0.0647 | - |
| 0.1949 | 8550 | 0.0013 | - |
| 0.1961 | 8600 | 0.0014 | - |
| 0.1972 | 8650 | 0.1705 | - |
| 0.1983 | 8700 | 0.0036 | - |
| 0.1995 | 8750 | 0.0014 | - |
| 0.2006 | 8800 | 0.0021 | - |
| 0.2018 | 8850 | 0.0019 | - |
| 0.2029 | 8900 | 0.0018 | - |
| 0.2040 | 8950 | 0.0018 | - |
| 0.2052 | 9000 | 0.001 | - |
| 0.2063 | 9050 | 0.0012 | - |
| 0.2075 | 9100 | 0.0013 | - |
| 0.2086 | 9150 | 0.0014 | - |
| 0.2097 | 9200 | 0.0609 | - |
| 0.2109 | 9250 | 0.0026 | - |
| 0.2120 | 9300 | 0.0012 | - |
| 0.2132 | 9350 | 0.0023 | - |
| 0.2143 | 9400 | 0.0043 | - |
| 0.2154 | 9450 | 0.0511 | - |
| 0.2166 | 9500 | 0.0012 | - |
| 0.2177 | 9550 | 0.002 | - |
| 0.2189 | 9600 | 0.0016 | - |
| 0.2200 | 9650 | 0.0124 | - |
| 0.2211 | 9700 | 0.0046 | - |
| 0.2223 | 9750 | 0.0012 | - |
| 0.2234 | 9800 | 0.0014 | - |
| 0.2246 | 9850 | 0.0016 | - |
| 0.2257 | 9900 | 0.0596 | - |
| 0.2268 | 9950 | 0.0013 | - |
| 0.2280 | 10000 | 0.0021 | - |
| 0.2291 | 10050 | 0.0012 | - |
| 0.2303 | 10100 | 0.057 | - |
| 0.2314 | 10150 | 0.0028 | - |
| 0.2325 | 10200 | 0.0014 | - |
| 0.2337 | 10250 | 0.0014 | - |
| 0.2348 | 10300 | 0.0019 | - |
| 0.2360 | 10350 | 0.0014 | - |
| 0.2371 | 10400 | 0.0015 | - |
| 0.2382 | 10450 | 0.0569 | - |
| 0.2394 | 10500 | 0.0012 | - |
| 0.2405 | 10550 | 0.0023 | - |
| 0.2417 | 10600 | 0.0013 | - |
| 0.2428 | 10650 | 0.0011 | - |
| 0.2439 | 10700 | 0.0191 | - |
| 0.2451 | 10750 | 0.0015 | - |
| 0.2462 | 10800 | 0.0022 | - |
| 0.2474 | 10850 | 0.0547 | - |
| 0.2485 | 10900 | 0.003 | - |
| 0.2496 | 10950 | 0.0013 | - |
| 0.2508 | 11000 | 0.0018 | - |
| 0.2519 | 11050 | 0.0016 | - |
| 0.2531 | 11100 | 0.0013 | - |
| 0.2542 | 11150 | 0.0019 | - |
| 0.2553 | 11200 | 0.0011 | - |
| 0.2565 | 11250 | 0.0555 | - |
| 0.2576 | 11300 | 0.0012 | - |
| 0.2588 | 11350 | 0.0016 | - |
| 0.2599 | 11400 | 0.004 | - |
| 0.2610 | 11450 | 0.0014 | - |
| 0.2622 | 11500 | 0.0016 | - |
| 0.2633 | 11550 | 0.0037 | - |
| 0.2645 | 11600 | 0.0014 | - |
| 0.2656 | 11650 | 0.0252 | - |
| 0.2667 | 11700 | 0.0011 | - |
| 0.2679 | 11750 | 0.0013 | - |
| 0.2690 | 11800 | 0.0552 | - |
| 0.2702 | 11850 | 0.0019 | - |
| 0.2713 | 11900 | 0.0009 | - |
| 0.2724 | 11950 | 0.0015 | - |
| 0.2736 | 12000 | 0.0362 | - |
| 0.2747 | 12050 | 0.001 | - |
| 0.2759 | 12100 | 0.0022 | - |
| 0.2770 | 12150 | 0.0013 | - |
| 0.2781 | 12200 | 0.0013 | - |
| 0.2793 | 12250 | 0.001 | - |
| 0.2804 | 12300 | 0.0027 | - |
| 0.2816 | 12350 | 0.0013 | - |
| 0.2827 | 12400 | 0.0014 | - |
| 0.2838 | 12450 | 0.001 | - |
| 0.2850 | 12500 | 0.0014 | - |
| 0.2861 | 12550 | 0.0014 | - |
| 0.2873 | 12600 | 0.0407 | - |
| 0.2884 | 12650 | 0.0009 | - |
| 0.2895 | 12700 | 0.0014 | - |
| 0.2907 | 12750 | 0.001 | - |
| 0.2918 | 12800 | 0.0011 | - |
| 0.2930 | 12850 | 0.0012 | - |
| 0.2941 | 12900 | 0.0011 | - |
| 0.2952 | 12950 | 0.0016 | - |
| 0.2964 | 13000 | 0.0012 | - |
| 0.2975 | 13050 | 0.001 | - |
| 0.2987 | 13100 | 0.0026 | - |
| 0.2998 | 13150 | 0.0015 | - |
| 0.3009 | 13200 | 0.0022 | - |
| 0.3021 | 13250 | 0.0007 | - |
| 0.3032 | 13300 | 0.001 | - |
| 0.3044 | 13350 | 0.0012 | - |
| 0.3055 | 13400 | 0.0019 | - |
| 0.3066 | 13450 | 0.0016 | - |
| 0.3078 | 13500 | 0.0938 | - |
| 0.3089 | 13550 | 0.0009 | - |
| 0.3101 | 13600 | 0.0016 | - |
| 0.3112 | 13650 | 0.0014 | - |
| 0.3123 | 13700 | 0.032 | - |
| 0.3135 | 13750 | 0.0013 | - |
| 0.3146 | 13800 | 0.0219 | - |
| 0.3158 | 13850 | 0.0012 | - |
| 0.3169 | 13900 | 0.0012 | - |
| 0.3180 | 13950 | 0.0214 | - |
| 0.3192 | 14000 | 0.001 | - |
| 0.3203 | 14050 | 0.0033 | - |
| 0.3215 | 14100 | 0.0009 | - |
| 0.3226 | 14150 | 0.001 | - |
| 0.3237 | 14200 | 0.001 | - |
| 0.3249 | 14250 | 0.0014 | - |
| 0.3260 | 14300 | 0.0075 | - |
| 0.3272 | 14350 | 0.0015 | - |
| 0.3283 | 14400 | 0.0018 | - |
| 0.3294 | 14450 | 0.0011 | - |
| 0.3306 | 14500 | 0.0008 | - |
| 0.3317 | 14550 | 0.0381 | - |
| 0.3329 | 14600 | 0.0007 | - |
| 0.3340 | 14650 | 0.0009 | - |
| 0.3351 | 14700 | 0.001 | - |
| 0.3363 | 14750 | 0.0011 | - |
| 0.3374 | 14800 | 0.0304 | - |
| 0.3386 | 14850 | 0.0008 | - |
| 0.3397 | 14900 | 0.0007 | - |
| 0.3408 | 14950 | 0.0013 | - |
| 0.3420 | 15000 | 0.0135 | - |
| 0.3431 | 15050 | 0.001 | - |
| 0.3443 | 15100 | 0.0007 | - |
| 0.3454 | 15150 | 0.0008 | - |
| 0.3465 | 15200 | 0.0018 | - |
| 0.3477 | 15250 | 0.0009 | - |
| 0.3488 | 15300 | 0.0013 | - |
| 0.3500 | 15350 | 0.0018 | - |
| 0.3511 | 15400 | 0.0014 | - |
| 0.3522 | 15450 | 0.0051 | - |
| 0.3534 | 15500 | 0.0009 | - |
| 0.3545 | 15550 | 0.0007 | - |
| 0.3557 | 15600 | 0.0006 | - |
| 0.3568 | 15650 | 0.001 | - |
| 0.3579 | 15700 | 0.001 | - |
| 0.3591 | 15750 | 0.0015 | - |
| 0.3602 | 15800 | 0.0006 | - |
| 0.3614 | 15850 | 0.0005 | - |
| 0.3625 | 15900 | 0.0009 | - |
| 0.3636 | 15950 | 0.0052 | - |
| 0.3648 | 16000 | 0.0006 | - |
| 0.3659 | 16050 | 0.0013 | - |
| 0.3671 | 16100 | 0.001 | - |
| 0.3682 | 16150 | 0.0007 | - |
| 0.3693 | 16200 | 0.001 | - |
| 0.3705 | 16250 | 0.0008 | - |
| 0.3716 | 16300 | 0.0006 | - |
| 0.3728 | 16350 | 0.0026 | - |
| 0.3739 | 16400 | 0.0012 | - |
| 0.3750 | 16450 | 0.0008 | - |
| 0.3762 | 16500 | 0.0008 | - |
| 0.3773 | 16550 | 0.001 | - |
| 0.3785 | 16600 | 0.0289 | - |
| 0.3796 | 16650 | 0.0012 | - |
| 0.3807 | 16700 | 0.0007 | - |
| 0.3819 | 16750 | 0.0009 | - |
| 0.3830 | 16800 | 0.0006 | - |
| 0.3842 | 16850 | 0.0007 | - |
| 0.3853 | 16900 | 0.0008 | - |
| 0.3864 | 16950 | 0.0007 | - |
| 0.3876 | 17000 | 0.0011 | - |
| 0.3887 | 17050 | 0.0032 | - |
| 0.3899 | 17100 | 0.0009 | - |
| 0.3910 | 17150 | 0.0007 | - |
| 0.3921 | 17200 | 0.0008 | - |
| 0.3933 | 17250 | 0.0008 | - |
| 0.3944 | 17300 | 0.0007 | - |
| 0.3955 | 17350 | 0.0012 | - |
| 0.3967 | 17400 | 0.0044 | - |
| 0.3978 | 17450 | 0.0006 | - |
| 0.3990 | 17500 | 0.0006 | - |
| 0.4001 | 17550 | 0.0006 | - |
| 0.4012 | 17600 | 0.002 | - |
| 0.4024 | 17650 | 0.0007 | - |
| 0.4035 | 17700 | 0.0005 | - |
| 0.4047 | 17750 | 0.0005 | - |
| 0.4058 | 17800 | 0.0005 | - |
| 0.4069 | 17850 | 0.0013 | - |
| 0.4081 | 17900 | 0.0004 | - |
| 0.4092 | 17950 | 0.0005 | - |
| 0.4104 | 18000 | 0.0007 | - |
| 0.4115 | 18050 | 0.0007 | - |
| 0.4126 | 18100 | 0.0007 | - |
| 0.4138 | 18150 | 0.0006 | - |
| 0.4149 | 18200 | 0.0004 | - |
| 0.4161 | 18250 | 0.0005 | - |
| 0.4172 | 18300 | 0.0307 | - |
| 0.4183 | 18350 | 0.001 | - |
| 0.4195 | 18400 | 0.0012 | - |
| 0.4206 | 18450 | 0.0007 | - |
| 0.4218 | 18500 | 0.0007 | - |
| 0.4229 | 18550 | 0.001 | - |
| 0.4240 | 18600 | 0.0006 | - |
| 0.4252 | 18650 | 0.0195 | - |
| 0.4263 | 18700 | 0.0583 | - |
| 0.4275 | 18750 | 0.0005 | - |
| 0.4286 | 18800 | 0.0011 | - |
| 0.4297 | 18850 | 0.0006 | - |
| 0.4309 | 18900 | 0.0007 | - |
| 0.4320 | 18950 | 0.0005 | - |
| 0.4332 | 19000 | 0.0005 | - |
| 0.4343 | 19050 | 0.0007 | - |
| 0.4354 | 19100 | 0.0008 | - |
| 0.4366 | 19150 | 0.0006 | - |
| 0.4377 | 19200 | 0.0007 | - |
| 0.4389 | 19250 | 0.0005 | - |
| 0.4400 | 19300 | 0.0004 | - |
| 0.4411 | 19350 | 0.0005 | - |
| 0.4423 | 19400 | 0.0006 | - |
| 0.4434 | 19450 | 0.0006 | - |
| 0.4446 | 19500 | 0.0005 | - |
| 0.4457 | 19550 | 0.0006 | - |
| 0.4468 | 19600 | 0.0005 | - |
| 0.4480 | 19650 | 0.0013 | - |
| 0.4491 | 19700 | 0.0006 | - |
| 0.4503 | 19750 | 0.0006 | - |
| 0.4514 | 19800 | 0.0095 | - |
| 0.4525 | 19850 | 0.0066 | - |
| 0.4537 | 19900 | 0.0005 | - |
| 0.4548 | 19950 | 0.0008 | - |
| 0.4560 | 20000 | 0.0006 | - |
| 0.4571 | 20050 | 0.0005 | - |
| 0.4582 | 20100 | 0.0004 | - |
| 0.4594 | 20150 | 0.0151 | - |
| 0.4605 | 20200 | 0.0004 | - |
| 0.4617 | 20250 | 0.001 | - |
| 0.4628 | 20300 | 0.0005 | - |
| 0.4639 | 20350 | 0.0007 | - |
| 0.4651 | 20400 | 0.0239 | - |
| 0.4662 | 20450 | 0.0009 | - |
| 0.4674 | 20500 | 0.0005 | - |
| 0.4685 | 20550 | 0.0008 | - |
| 0.4696 | 20600 | 0.0005 | - |
| 0.4708 | 20650 | 0.0006 | - |
| 0.4719 | 20700 | 0.0004 | - |
| 0.4731 | 20750 | 0.0005 | - |
| 0.4742 | 20800 | 0.0049 | - |
| 0.4753 | 20850 | 0.0007 | - |
| 0.4765 | 20900 | 0.0005 | - |
| 0.4776 | 20950 | 0.0018 | - |
| 0.4788 | 21000 | 0.0006 | - |
| 0.4799 | 21050 | 0.0008 | - |
| 0.4810 | 21100 | 0.0008 | - |
| 0.4822 | 21150 | 0.0225 | - |
| 0.4833 | 21200 | 0.0011 | - |
| 0.4845 | 21250 | 0.0005 | - |
| 0.4856 | 21300 | 0.0006 | - |
| 0.4867 | 21350 | 0.0004 | - |
| 0.4879 | 21400 | 0.0008 | - |
| 0.4890 | 21450 | 0.001 | - |
| 0.4902 | 21500 | 0.0004 | - |
| 0.4913 | 21550 | 0.0008 | - |
| 0.4924 | 21600 | 0.0008 | - |
| 0.4936 | 21650 | 0.0006 | - |
| 0.4947 | 21700 | 0.0008 | - |
| 0.4959 | 21750 | 0.0004 | - |
| 0.4970 | 21800 | 0.011 | - |
| 0.4981 | 21850 | 0.0007 | - |
| 0.4993 | 21900 | 0.0004 | - |
| 0.5004 | 21950 | 0.031 | - |
| 0.5016 | 22000 | 0.0102 | - |
| 0.5027 | 22050 | 0.0009 | - |
| 0.5038 | 22100 | 0.0089 | - |
| 0.5050 | 22150 | 0.0007 | - |
| 0.5061 | 22200 | 0.0006 | - |
| 0.5073 | 22250 | 0.0004 | - |
| 0.5084 | 22300 | 0.0004 | - |
| 0.5095 | 22350 | 0.0007 | - |
| 0.5107 | 22400 | 0.0004 | - |
| 0.5118 | 22450 | 0.0004 | - |
| 0.5130 | 22500 | 0.006 | - |
| 0.5141 | 22550 | 0.0008 | - |
| 0.5152 | 22600 | 0.0007 | - |
| 0.5164 | 22650 | 0.0007 | - |
| 0.5175 | 22700 | 0.0007 | - |
| 0.5187 | 22750 | 0.0003 | - |
| 0.5198 | 22800 | 0.0005 | - |
| 0.5209 | 22850 | 0.0006 | - |
| 0.5221 | 22900 | 0.0005 | - |
| 0.5232 | 22950 | 0.0324 | - |
| 0.5244 | 23000 | 0.017 | - |
| 0.5255 | 23050 | 0.0126 | - |
| 0.5266 | 23100 | 0.0005 | - |
| 0.5278 | 23150 | 0.0151 | - |
| 0.5289 | 23200 | 0.0005 | - |
| 0.5301 | 23250 | 0.0003 | - |
| 0.5312 | 23300 | 0.0004 | - |
| 0.5323 | 23350 | 0.011 | - |
| 0.5335 | 23400 | 0.0003 | - |
| 0.5346 | 23450 | 0.0006 | - |
| 0.5358 | 23500 | 0.0006 | - |
| 0.5369 | 23550 | 0.0007 | - |
| 0.5380 | 23600 | 0.0031 | - |
| 0.5392 | 23650 | 0.0005 | - |
| 0.5403 | 23700 | 0.0003 | - |
| 0.5415 | 23750 | 0.0003 | - |
| 0.5426 | 23800 | 0.0341 | - |
| 0.5437 | 23850 | 0.0004 | - |
| 0.5449 | 23900 | 0.0008 | - |
| 0.5460 | 23950 | 0.0008 | - |
| 0.5472 | 24000 | 0.0005 | - |
| 0.5483 | 24050 | 0.0003 | - |
| 0.5494 | 24100 | 0.0006 | - |
| 0.5506 | 24150 | 0.0007 | - |
| 0.5517 | 24200 | 0.001 | - |
| 0.5529 | 24250 | 0.0004 | - |
| 0.5540 | 24300 | 0.0004 | - |
| 0.5551 | 24350 | 0.0005 | - |
| 0.5563 | 24400 | 0.0004 | - |
| 0.5574 | 24450 | 0.0003 | - |
| 0.5586 | 24500 | 0.0007 | - |
| 0.5597 | 24550 | 0.0004 | - |
| 0.5608 | 24600 | 0.0005 | - |
| 0.5620 | 24650 | 0.0004 | - |
| 0.5631 | 24700 | 0.0004 | - |
| 0.5643 | 24750 | 0.0005 | - |
| 0.5654 | 24800 | 0.0008 | - |
| 0.5665 | 24850 | 0.0006 | - |
| 0.5677 | 24900 | 0.0006 | - |
| 0.5688 | 24950 | 0.0003 | - |
| 0.5700 | 25000 | 0.0005 | - |
| 0.5711 | 25050 | 0.0007 | - |
| 0.5722 | 25100 | 0.0004 | - |
| 0.5734 | 25150 | 0.0004 | - |
| 0.5745 | 25200 | 0.0303 | - |
| 0.5757 | 25250 | 0.0223 | - |
| 0.5768 | 25300 | 0.0008 | - |
| 0.5779 | 25350 | 0.0254 | - |
| 0.5791 | 25400 | 0.0006 | - |
| 0.5802 | 25450 | 0.0004 | - |
| 0.5814 | 25500 | 0.0003 | - |
| 0.5825 | 25550 | 0.0007 | - |
| 0.5836 | 25600 | 0.0425 | - |
| 0.5848 | 25650 | 0.0354 | - |
| 0.5859 | 25700 | 0.0006 | - |
| 0.5871 | 25750 | 0.0146 | - |
| 0.5882 | 25800 | 0.0006 | - |
| 0.5893 | 25850 | 0.0005 | - |
| 0.5905 | 25900 | 0.0004 | - |
| 0.5916 | 25950 | 0.0004 | - |
| 0.5928 | 26000 | 0.0007 | - |
| 0.5939 | 26050 | 0.0008 | - |
| 0.5950 | 26100 | 0.001 | - |
| 0.5962 | 26150 | 0.0003 | - |
| 0.5973 | 26200 | 0.0006 | - |
| 0.5985 | 26250 | 0.0003 | - |
| 0.5996 | 26300 | 0.0006 | - |
| 0.6007 | 26350 | 0.0007 | - |
| 0.6019 | 26400 | 0.0007 | - |
| 0.6030 | 26450 | 0.0287 | - |
| 0.6042 | 26500 | 0.0003 | - |
| 0.6053 | 26550 | 0.0323 | - |
| 0.6064 | 26600 | 0.0007 | - |
| 0.6076 | 26650 | 0.0002 | - |
| 0.6087 | 26700 | 0.0004 | - |
| 0.6099 | 26750 | 0.0003 | - |
| 0.6110 | 26800 | 0.0041 | - |
| 0.6121 | 26850 | 0.0005 | - |
| 0.6133 | 26900 | 0.0003 | - |
| 0.6144 | 26950 | 0.0003 | - |
| 0.6156 | 27000 | 0.0003 | - |
| 0.6167 | 27050 | 0.0007 | - |
| 0.6178 | 27100 | 0.0003 | - |
| 0.6190 | 27150 | 0.0003 | - |
| 0.6201 | 27200 | 0.0008 | - |
| 0.6213 | 27250 | 0.0004 | - |
| 0.6224 | 27300 | 0.0003 | - |
| 0.6235 | 27350 | 0.0003 | - |
| 0.6247 | 27400 | 0.0007 | - |
| 0.6258 | 27450 | 0.0006 | - |
| 0.6270 | 27500 | 0.0008 | - |
| 0.6281 | 27550 | 0.0004 | - |
| 0.6292 | 27600 | 0.0004 | - |
| 0.6304 | 27650 | 0.0006 | - |
| 0.6315 | 27700 | 0.0004 | - |
| 0.6327 | 27750 | 0.0003 | - |
| 0.6338 | 27800 | 0.0008 | - |
| 0.6349 | 27850 | 0.0005 | - |
| 0.6361 | 27900 | 0.0121 | - |
| 0.6372 | 27950 | 0.0006 | - |
| 0.6384 | 28000 | 0.0004 | - |
| 0.6395 | 28050 | 0.001 | - |
| 0.6406 | 28100 | 0.0002 | - |
| 0.6418 | 28150 | 0.0006 | - |
| 0.6429 | 28200 | 0.0004 | - |
| 0.6441 | 28250 | 0.0005 | - |
| 0.6452 | 28300 | 0.0004 | - |
| 0.6463 | 28350 | 0.0006 | - |
| 0.6475 | 28400 | 0.001 | - |
| 0.6486 | 28450 | 0.0004 | - |
| 0.6498 | 28500 | 0.0337 | - |
| 0.6509 | 28550 | 0.0009 | - |
| 0.6520 | 28600 | 0.0003 | - |
| 0.6532 | 28650 | 0.0003 | - |
| 0.6543 | 28700 | 0.0005 | - |
| 0.6554 | 28750 | 0.0003 | - |
| 0.6566 | 28800 | 0.0008 | - |
| 0.6577 | 28850 | 0.0002 | - |
| 0.6589 | 28900 | 0.0038 | - |
| 0.6600 | 28950 | 0.0007 | - |
| 0.6611 | 29000 | 0.0003 | - |
| 0.6623 | 29050 | 0.0003 | - |
| 0.6634 | 29100 | 0.0003 | - |
| 0.6646 | 29150 | 0.0003 | - |
| 0.6657 | 29200 | 0.0422 | - |
| 0.6668 | 29250 | 0.0004 | - |
| 0.6680 | 29300 | 0.0002 | - |
| 0.6691 | 29350 | 0.0006 | - |
| 0.6703 | 29400 | 0.0006 | - |
| 0.6714 | 29450 | 0.0004 | - |
| 0.6725 | 29500 | 0.0004 | - |
| 0.6737 | 29550 | 0.0003 | - |
| 0.6748 | 29600 | 0.0004 | - |
| 0.6760 | 29650 | 0.0003 | - |
| 0.6771 | 29700 | 0.0008 | - |
| 0.6782 | 29750 | 0.0003 | - |
| 0.6794 | 29800 | 0.0005 | - |
| 0.6805 | 29850 | 0.0007 | - |
| 0.6817 | 29900 | 0.0004 | - |
| 0.6828 | 29950 | 0.0003 | - |
| 0.6839 | 30000 | 0.0002 | - |
| 0.6851 | 30050 | 0.0004 | - |
| 0.6862 | 30100 | 0.0005 | - |
| 0.6874 | 30150 | 0.0007 | - |
| 0.6885 | 30200 | 0.0005 | - |
| 0.6896 | 30250 | 0.0002 | - |
| 0.6908 | 30300 | 0.0004 | - |
| 0.6919 | 30350 | 0.0007 | - |
| 0.6931 | 30400 | 0.0012 | - |
| 0.6942 | 30450 | 0.0006 | - |
| 0.6953 | 30500 | 0.0006 | - |
| 0.6965 | 30550 | 0.0004 | - |
| 0.6976 | 30600 | 0.0004 | - |
| 0.6988 | 30650 | 0.0003 | - |
| 0.6999 | 30700 | 0.0005 | - |
| 0.7010 | 30750 | 0.0007 | - |
| 0.7022 | 30800 | 0.0003 | - |
| 0.7033 | 30850 | 0.0005 | - |
| 0.7045 | 30900 | 0.0003 | - |
| 0.7056 | 30950 | 0.0002 | - |
| 0.7067 | 31000 | 0.0002 | - |
| 0.7079 | 31050 | 0.0005 | - |
| 0.7090 | 31100 | 0.0003 | - |
| 0.7102 | 31150 | 0.0002 | - |
| 0.7113 | 31200 | 0.0006 | - |
| 0.7124 | 31250 | 0.0004 | - |
| 0.7136 | 31300 | 0.0003 | - |
| 0.7147 | 31350 | 0.0003 | - |
| 0.7159 | 31400 | 0.0002 | - |
| 0.7170 | 31450 | 0.0003 | - |
| 0.7181 | 31500 | 0.0002 | - |
| 0.7193 | 31550 | 0.0004 | - |
| 0.7204 | 31600 | 0.0006 | - |
| 0.7216 | 31650 | 0.0007 | - |
| 0.7227 | 31700 | 0.0004 | - |
| 0.7238 | 31750 | 0.0003 | - |
| 0.7250 | 31800 | 0.0002 | - |
| 0.7261 | 31850 | 0.0004 | - |
| 0.7273 | 31900 | 0.0006 | - |
| 0.7284 | 31950 | 0.0004 | - |
| 0.7295 | 32000 | 0.0005 | - |
| 0.7307 | 32050 | 0.0011 | - |
| 0.7318 | 32100 | 0.0003 | - |
| 0.7330 | 32150 | 0.0004 | - |
| 0.7341 | 32200 | 0.0551 | - |
| 0.7352 | 32250 | 0.0006 | - |
| 0.7364 | 32300 | 0.0004 | - |
| 0.7375 | 32350 | 0.0005 | - |
| 0.7387 | 32400 | 0.0004 | - |
| 0.7398 | 32450 | 0.0007 | - |
| 0.7409 | 32500 | 0.0003 | - |
| 0.7421 | 32550 | 0.0007 | - |
| 0.7432 | 32600 | 0.0003 | - |
| 0.7444 | 32650 | 0.0007 | - |
| 0.7455 | 32700 | 0.0006 | - |
| 0.7466 | 32750 | 0.0006 | - |
| 0.7478 | 32800 | 0.0003 | - |
| 0.7489 | 32850 | 0.0005 | - |
| 0.7501 | 32900 | 0.0004 | - |
| 0.7512 | 32950 | 0.0007 | - |
| 0.7523 | 33000 | 0.0002 | - |
| 0.7535 | 33050 | 0.0008 | - |
| 0.7546 | 33100 | 0.0004 | - |
| 0.7558 | 33150 | 0.0002 | - |
| 0.7569 | 33200 | 0.0006 | - |
| 0.7580 | 33250 | 0.0046 | - |
| 0.7592 | 33300 | 0.0005 | - |
| 0.7603 | 33350 | 0.0003 | - |
| 0.7615 | 33400 | 0.0125 | - |
| 0.7626 | 33450 | 0.0006 | - |
| 0.7637 | 33500 | 0.0063 | - |
| 0.7649 | 33550 | 0.0008 | - |
| 0.7660 | 33600 | 0.0004 | - |
| 0.7672 | 33650 | 0.0037 | - |
| 0.7683 | 33700 | 0.0005 | - |
| 0.7694 | 33750 | 0.0006 | - |
| 0.7706 | 33800 | 0.0006 | - |
| 0.7717 | 33850 | 0.012 | - |
| 0.7729 | 33900 | 0.0005 | - |
| 0.7740 | 33950 | 0.0005 | - |
| 0.7751 | 34000 | 0.0005 | - |
| 0.7763 | 34050 | 0.0003 | - |
| 0.7774 | 34100 | 0.0004 | - |
| 0.7786 | 34150 | 0.0003 | - |
| 0.7797 | 34200 | 0.0003 | - |
| 0.7808 | 34250 | 0.0088 | - |
| 0.7820 | 34300 | 0.0004 | - |
| 0.7831 | 34350 | 0.0002 | - |
| 0.7843 | 34400 | 0.0004 | - |
| 0.7854 | 34450 | 0.0082 | - |
| 0.7865 | 34500 | 0.0005 | - |
| 0.7877 | 34550 | 0.0005 | - |
| 0.7888 | 34600 | 0.0004 | - |
| 0.7900 | 34650 | 0.0003 | - |
| 0.7911 | 34700 | 0.0006 | - |
| 0.7922 | 34750 | 0.0006 | - |
| 0.7934 | 34800 | 0.0002 | - |
| 0.7945 | 34850 | 0.0003 | - |
| 0.7957 | 34900 | 0.0005 | - |
| 0.7968 | 34950 | 0.0003 | - |
| 0.7979 | 35000 | 0.0004 | - |
| 0.7991 | 35050 | 0.0003 | - |
| 0.8002 | 35100 | 0.0002 | - |
| 0.8014 | 35150 | 0.0094 | - |
| 0.8025 | 35200 | 0.0004 | - |
| 0.8036 | 35250 | 0.0004 | - |
| 0.8048 | 35300 | 0.0245 | - |
| 0.8059 | 35350 | 0.0006 | - |
| 0.8071 | 35400 | 0.0004 | - |
| 0.8082 | 35450 | 0.0004 | - |
| 0.8093 | 35500 | 0.0003 | - |
| 0.8105 | 35550 | 0.0007 | - |
| 0.8116 | 35600 | 0.0266 | - |
| 0.8128 | 35650 | 0.0005 | - |
| 0.8139 | 35700 | 0.0003 | - |
| 0.8150 | 35750 | 0.0092 | - |
| 0.8162 | 35800 | 0.0004 | - |
| 0.8173 | 35850 | 0.0002 | - |
| 0.8185 | 35900 | 0.0004 | - |
| 0.8196 | 35950 | 0.0003 | - |
| 0.8207 | 36000 | 0.0002 | - |
| 0.8219 | 36050 | 0.0003 | - |
| 0.8230 | 36100 | 0.0002 | - |
| 0.8242 | 36150 | 0.0006 | - |
| 0.8253 | 36200 | 0.0003 | - |
| 0.8264 | 36250 | 0.0002 | - |
| 0.8276 | 36300 | 0.0002 | - |
| 0.8287 | 36350 | 0.0002 | - |
| 0.8299 | 36400 | 0.0002 | - |
| 0.8310 | 36450 | 0.0004 | - |
| 0.8321 | 36500 | 0.001 | - |
| 0.8333 | 36550 | 0.0134 | - |
| 0.8344 | 36600 | 0.0007 | - |
| 0.8356 | 36650 | 0.0005 | - |
| 0.8367 | 36700 | 0.0004 | - |
| 0.8378 | 36750 | 0.0003 | - |
| 0.8390 | 36800 | 0.007 | - |
| 0.8401 | 36850 | 0.0002 | - |
| 0.8413 | 36900 | 0.0005 | - |
| 0.8424 | 36950 | 0.0002 | - |
| 0.8435 | 37000 | 0.0002 | - |
| 0.8447 | 37050 | 0.0003 | - |
| 0.8458 | 37100 | 0.0002 | - |
| 0.8470 | 37150 | 0.003 | - |
| 0.8481 | 37200 | 0.0003 | - |
| 0.8492 | 37250 | 0.0002 | - |
| 0.8504 | 37300 | 0.0011 | - |
| 0.8515 | 37350 | 0.0015 | - |
| 0.8527 | 37400 | 0.0002 | - |
| 0.8538 | 37450 | 0.0004 | - |
| 0.8549 | 37500 | 0.0005 | - |
| 0.8561 | 37550 | 0.0004 | - |
| 0.8572 | 37600 | 0.0085 | - |
| 0.8584 | 37650 | 0.0002 | - |
| 0.8595 | 37700 | 0.0003 | - |
| 0.8606 | 37750 | 0.0002 | - |
| 0.8618 | 37800 | 0.0002 | - |
| 0.8629 | 37850 | 0.0042 | - |
| 0.8641 | 37900 | 0.0006 | - |
| 0.8652 | 37950 | 0.0133 | - |
| 0.8663 | 38000 | 0.0003 | - |
| 0.8675 | 38050 | 0.0003 | - |
| 0.8686 | 38100 | 0.0003 | - |
| 0.8698 | 38150 | 0.0002 | - |
| 0.8709 | 38200 | 0.0359 | - |
| 0.8720 | 38250 | 0.001 | - |
| 0.8732 | 38300 | 0.0004 | - |
| 0.8743 | 38350 | 0.0002 | - |
| 0.8755 | 38400 | 0.0004 | - |
| 0.8766 | 38450 | 0.0005 | - |
| 0.8777 | 38500 | 0.0005 | - |
| 0.8789 | 38550 | 0.0003 | - |
| 0.8800 | 38600 | 0.0078 | - |
| 0.8812 | 38650 | 0.0002 | - |
| 0.8823 | 38700 | 0.0006 | - |
| 0.8834 | 38750 | 0.0002 | - |
| 0.8846 | 38800 | 0.0005 | - |
| 0.8857 | 38850 | 0.0355 | - |
| 0.8869 | 38900 | 0.0006 | - |
| 0.8880 | 38950 | 0.0003 | - |
| 0.8891 | 39000 | 0.0003 | - |
| 0.8903 | 39050 | 0.0002 | - |
| 0.8914 | 39100 | 0.0004 | - |
| 0.8926 | 39150 | 0.0002 | - |
| 0.8937 | 39200 | 0.0011 | - |
| 0.8948 | 39250 | 0.0003 | - |
| 0.8960 | 39300 | 0.0305 | - |
| 0.8971 | 39350 | 0.0002 | - |
| 0.8983 | 39400 | 0.0069 | - |
| 0.8994 | 39450 | 0.0002 | - |
| 0.9005 | 39500 | 0.0004 | - |
| 0.9017 | 39550 | 0.0003 | - |
| 0.9028 | 39600 | 0.0002 | - |
| 0.9040 | 39650 | 0.0002 | - |
| 0.9051 | 39700 | 0.0007 | - |
| 0.9062 | 39750 | 0.0002 | - |
| 0.9074 | 39800 | 0.0004 | - |
| 0.9085 | 39850 | 0.0008 | - |
| 0.9097 | 39900 | 0.0002 | - |
| 0.9108 | 39950 | 0.0004 | - |
| 0.9119 | 40000 | 0.0156 | - |
| 0.9131 | 40050 | 0.0007 | - |
| 0.9142 | 40100 | 0.0003 | - |
| 0.9154 | 40150 | 0.0006 | - |
| 0.9165 | 40200 | 0.0074 | - |
| 0.9176 | 40250 | 0.0075 | - |
| 0.9188 | 40300 | 0.0002 | - |
| 0.9199 | 40350 | 0.0006 | - |
| 0.9210 | 40400 | 0.0004 | - |
| 0.9222 | 40450 | 0.0004 | - |
| 0.9233 | 40500 | 0.0002 | - |
| 0.9245 | 40550 | 0.0008 | - |
| 0.9256 | 40600 | 0.0002 | - |
| 0.9267 | 40650 | 0.0003 | - |
| 0.9279 | 40700 | 0.0005 | - |
| 0.9290 | 40750 | 0.0104 | - |
| 0.9302 | 40800 | 0.0002 | - |
| 0.9313 | 40850 | 0.0003 | - |
| 0.9324 | 40900 | 0.0005 | - |
| 0.9336 | 40950 | 0.0003 | - |
| 0.9347 | 41000 | 0.0002 | - |
| 0.9359 | 41050 | 0.0002 | - |
| 0.9370 | 41100 | 0.0004 | - |
| 0.9381 | 41150 | 0.0285 | - |
| 0.9393 | 41200 | 0.0067 | - |
| 0.9404 | 41250 | 0.0004 | - |
| 0.9416 | 41300 | 0.0004 | - |
| 0.9427 | 41350 | 0.0002 | - |
| 0.9438 | 41400 | 0.0006 | - |
| 0.9450 | 41450 | 0.0003 | - |
| 0.9461 | 41500 | 0.0008 | - |
| 0.9473 | 41550 | 0.0004 | - |
| 0.9484 | 41600 | 0.0003 | - |
| 0.9495 | 41650 | 0.0005 | - |
| 0.9507 | 41700 | 0.0005 | - |
| 0.9518 | 41750 | 0.0002 | - |
| 0.9530 | 41800 | 0.0004 | - |
| 0.9541 | 41850 | 0.0003 | - |
| 0.9552 | 41900 | 0.0006 | - |
| 0.9564 | 41950 | 0.0006 | - |
| 0.9575 | 42000 | 0.0002 | - |
| 0.9587 | 42050 | 0.0002 | - |
| 0.9598 | 42100 | 0.0002 | - |
| 0.9609 | 42150 | 0.0068 | - |
| 0.9621 | 42200 | 0.007 | - |
| 0.9632 | 42250 | 0.0265 | - |
| 0.9644 | 42300 | 0.0004 | - |
| 0.9655 | 42350 | 0.0002 | - |
| 0.9666 | 42400 | 0.0005 | - |
| 0.9678 | 42450 | 0.0004 | - |
| 0.9689 | 42500 | 0.0063 | - |
| 0.9701 | 42550 | 0.0004 | - |
| 0.9712 | 42600 | 0.0002 | - |
| 0.9723 | 42650 | 0.0002 | - |
| 0.9735 | 42700 | 0.0003 | - |
| 0.9746 | 42750 | 0.0007 | - |
| 0.9758 | 42800 | 0.0004 | - |
| 0.9769 | 42850 | 0.0082 | - |
| 0.9780 | 42900 | 0.0004 | - |
| 0.9792 | 42950 | 0.031 | - |
| 0.9803 | 43000 | 0.0004 | - |
| 0.9815 | 43050 | 0.0047 | - |
| 0.9826 | 43100 | 0.0003 | - |
| 0.9837 | 43150 | 0.0003 | - |
| 0.9849 | 43200 | 0.0005 | - |
| 0.9860 | 43250 | 0.0003 | - |
| 0.9872 | 43300 | 0.0002 | - |
| 0.9883 | 43350 | 0.0005 | - |
| 0.9894 | 43400 | 0.0003 | - |
| 0.9906 | 43450 | 0.0007 | - |
| 0.9917 | 43500 | 0.0003 | - |
| 0.9929 | 43550 | 0.0003 | - |
| 0.9940 | 43600 | 0.0006 | - |
| 0.9951 | 43650 | 0.001 | - |
| 0.9963 | 43700 | 0.0006 | - |
| 0.9974 | 43750 | 0.0002 | - |
| 0.9986 | 43800 | 0.0003 | - |
| 0.9997 | 43850 | 0.0005 | - |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- spaCy: 3.7.2
- Transformers: 4.39.3
- PyTorch: 2.1.2
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "cointegrated/rubert-tiny2", "widget": [{"text": "\u043f\u043b\u044e\u0441\u043e\u0432 -:\u0415\u0449\u0435 \u0438\u0437 \u043f\u043b\u044e\u0441\u043e\u0432 - \u043f\u0440\u0438 \u0437\u0430\u043a\u0430\u0437\u0435 \u0431\u0430\u043d\u043a\u0435\u0442\u0430 \u0435\u0441\u0442\u044c \u0441\u043f\u0435\u0446\u0438\u0430\u043b\u044c\u043d\u043e\u0435 \u043f\u0440\u0435\u0434\u043b\u043e\u0436\u0435\u043d\u0438\u0435 \u043f\u043e \u0430\u043b\u043a\u043e\u0433\u043e\u043b\u044e ( \u043c\u043e\u0436\u043d\u043e \u043f\u0440\u0438\u043e\u0431\u0440\u0435\u0442\u0438 \u0432\u0438\u043d\u043e , \u0448\u0430\u043c\u043f\u0430\u043d\u0441\u043a\u043e\u0435 \u0438 \u0432\u043e\u0434\u043a\u0443 \u043f\u043e \u0446\u0435\u043d\u0430\u043c \u043f\u0440\u043e\u0438\u0437\u0432\u043e\u0434\u0438\u0442\u0435\u043b\u044f ) ."}, {"text": "\u0442\u0435\u043b\u044f\u0442\u0438\u043d\u044b:\u0417\u0430\u043a\u0430\u0437\u0430\u043b\u0430 \u044f \u0441\u0430\u043b\u0430\u0442 , \u0431\u043e\u043b\u044c\u0448\u0443\u044e \u043f\u043e\u0440\u0446\u0438\u044e , \u043a\u0430\u043a \u043d\u0438 \u0441\u0442\u0440\u0430\u043d\u043d\u043e \u0435\u0433\u043e \u043f\u0440\u0438\u043d\u0435\u0441\u043b\u0438 \u0432 \u0431\u043e\u043b\u044c\u0448\u043e\u043c \u043a\u043e\u043b\u0438\u0447\u0435\u0441\u0442\u0432\u0435 , \u0438\u0437 \u0433\u043e\u0440\u044f\u0447\u0435\u0433\u043e \u0437\u0430\u043a\u0430\u0437\u0430\u043b\u0430 \u0441\u0442\u0435\u0439\u043a \u0438\u0437 \u0442\u0435\u043b\u044f\u0442\u0438\u043d\u044b , \u043c\u044f\u0441\u043e \u0431\u044b\u043b\u043e \u043c\u044f\u0433\u043a\u0438\u043c \u0438 \u0441\u043e\u0447\u043d\u044b\u043c , \u0438 \u043a\u043e\u043d\u0435\u0447\u043d\u043e \u0436\u0435 \u043c\u043e\u0435 \u0441\u0430\u043c\u043e\u0435 \u043b\u044e\u0431\u0438\u043c\u043e\u0435 \u044d\u0442\u043e \u0434\u0435\u0441\u0435\u0440\u0442 , \u0437\u0430\u043a\u0430\u0437\u0430\u043b\u0430 \u0442\u0438\u0440\u0430\u043c\u0438\u0441\u0443 , \u0438 \u043f\u0440\u0430\u0432\u0438\u043b\u044c\u043d\u043e \u0441\u0434\u0435\u043b\u0430\u043b\u0430 , \u043e\u0447\u0435\u043d\u044c \u0432\u043a\u0443\u0441\u043d\u044b\u0439 \u0434\u0435\u0441\u0435\u0440\u0442 ."}, {"text": "\u0431\u0435\u043a\u043e\u043d\u0430:\u0421\u0430\u043b\u0430\u0442 \u0446\u0435\u0437\u0430\u0440\u044c \u0432\u0441\u044f \u0442\u0430\u0440\u0435\u043b\u043a\u0430 \u044d\u0442\u043e \u043b\u0438\u0441\u0442\u044c\u044f \u0441\u0430\u043b\u0430\u0442\u0430 , \u043d\u0435\u043c\u043d\u043e\u0433\u043e \u0431\u0435\u043a\u043e\u043d\u0430 \u043f\u043e \u043a\u0440\u0443\u0433\u0443 \u0438 \u0432\u0441\u0435 \u044d\u0442\u043e \u0432 \u0441\u043e\u0443\u0441\u0435 , \u0441\u0432\u0435\u0440\u0445\u0443 \u0441\u044b\u0440 ( \u0446\u0435\u0437\u0430\u0440\u044c \u0433\u043e\u0442\u043e\u0432\u0438\u0442\u0441\u044f \u0441 \u043a\u0443\u0440\u0438\u0446\u0435\u0439 ) ."}, {"text": "\u0440\u0435\u0441\u0442\u043e\u0440\u0430\u043d:\u041f\u043e \u043c\u043e\u0435\u0439 \u0440\u0435\u043a\u043e\u043c\u0435\u043d\u0434\u0430\u0446\u0438\u0438 \u044d\u0442\u043e\u0442 \u0440\u0435\u0441\u0442\u043e\u0440\u0430\u043d \u043f\u043e\u0441\u0435\u0442\u0438\u043b\u0438 \u043d\u0435\u0441\u043a\u043e\u043b\u044c\u043a\u043e \u043f\u0430\u0440 \u043c\u043e\u0438\u0445 \u0434\u0440\u0443\u0437\u0435\u0439 \u0438 \u0442\u0430\u043a\u0436\u0435 
\u043e\u0441\u0442\u0430\u043b\u0438\u0441\u044c \u0434\u043e\u0432\u043e\u043b\u044c\u043d\u044b ."}, {"text": "\u0431\u043b\u044e\u0434\u0430:\u0414\u043b\u044f \u043c\u0435\u043d\u044f \u0436\u0435 \u043c\u0438\u043d\u0443\u0441 \u0431\u044b\u043b \u0432 \u0442\u043e\u043c , \u0447\u0442\u043e \u0441\u0430\u043c\u0438 \u0431\u043b\u044e\u0434\u0430 \u0441\u043b\u0438\u0448\u043a\u043e\u043c \u0441\u043f\u0435\u0446\u0438\u0444\u0438\u0447\u0435\u0441\u043a\u0438\u0435 ."}], "pipeline_tag": "text-classification", "inference": false} | isolation-forest/setfit-absa-aspect | null | [
"setfit",
"safetensors",
"bert",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:cointegrated/rubert-tiny2",
"region:us"
] | null | 2024-04-16T14:27:26+00:00 | [
"2209.11055"
] | [] | TAGS
#setfit #safetensors #bert #absa #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-cointegrated/rubert-tiny2 #region-us
| SetFit Aspect Model with cointegrated/rubert-tiny2
==================================================
This is a SetFit model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses cointegrated/rubert-tiny2 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a Sentence Transformer with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use this SetFit model to filter these possible aspect span candidates.
3. Use a SetFit model to classify the filtered aspect span candidates.
Model Details
-------------
### Model Description
* Model Type: SetFit
* Sentence Transformer body: cointegrated/rubert-tiny2
* Classification head: a LogisticRegression instance
* spaCy Model: ru\_core\_news\_lg
* SetFitABSA Aspect Model: isolation-forest/setfit-absa-aspect
* SetFitABSA Polarity Model: isolation-forest/setfit-absa-polarity
* Maximum Sequence Length: 2048 tokens
* Number of Classes: 2 classes
### Model Sources
* Repository: SetFit on GitHub
* Paper: Efficient Few-Shot Learning Without Prompts
* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts
### Model Labels
Uses
----
### Direct Use for Inference
First install the SetFit library:
Then you can load this model and run inference.
Training Details
----------------
### Training Set Metrics
### Training Hyperparameters
* batch\_size: (16, 2)
* num\_epochs: (1, 16)
* max\_steps: -1
* sampling\_strategy: oversampling
* body\_learning\_rate: (2e-05, 1e-05)
* head\_learning\_rate: 0.01
* loss: CosineSimilarityLoss
* distance\_metric: cosine\_distance
* margin: 0.25
* end\_to\_end: False
* use\_amp: False
* warmup\_proportion: 0.1
* seed: 42
* eval\_max\_steps: -1
* load\_best\_model\_at\_end: False
### Training Results
### Framework Versions
* Python: 3.10.13
* SetFit: 1.0.3
* Sentence Transformers: 2.7.0
* spaCy: 3.7.2
* Transformers: 4.39.3
* PyTorch: 2.1.2
* Datasets: 2.18.0
* Tokenizers: 0.15.2
### BibTeX
| [
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: cointegrated/rubert-tiny2\n* Classification head: a LogisticRegression instance\n* spaCy Model: ru\\_core\\_news\\_lg\n* SetFitABSA Aspect Model: isolation-forest/setfit-absa-aspect\n* SetFitABSA Polarity Model: isolation-forest/setfit-absa-polarity\n* Maximum Sequence Length: 2048 tokens\n* Number of Classes: 2 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 2)\n* num\\_epochs: (1, 16)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.13\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* spaCy: 3.7.2\n* Transformers: 4.39.3\n* PyTorch: 2.1.2\n* Datasets: 2.18.0\n* Tokenizers: 0.15.2",
"### BibTeX"
] | [
"TAGS\n#setfit #safetensors #bert #absa #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-cointegrated/rubert-tiny2 #region-us \n",
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: cointegrated/rubert-tiny2\n* Classification head: a LogisticRegression instance\n* spaCy Model: ru\\_core\\_news\\_lg\n* SetFitABSA Aspect Model: isolation-forest/setfit-absa-aspect\n* SetFitABSA Polarity Model: isolation-forest/setfit-absa-polarity\n* Maximum Sequence Length: 2048 tokens\n* Number of Classes: 2 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 2)\n* num\\_epochs: (1, 16)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.13\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* spaCy: 3.7.2\n* Transformers: 4.39.3\n* PyTorch: 2.1.2\n* Datasets: 2.18.0\n* Tokenizers: 0.15.2",
"### BibTeX"
] |
text-classification | setfit |
# SetFit Polarity Model with cointegrated/rubert-tiny2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. **Use this SetFit model to classify the filtered aspect span candidates.**
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** ru_core_news_lg
- **SetFitABSA Aspect Model:** [isolation-forest/setfit-absa-aspect](https://huggingface.co/isolation-forest/setfit-absa-aspect)
- **SetFitABSA Polarity Model:** [isolation-forest/setfit-absa-polarity](https://huggingface.co/isolation-forest/setfit-absa-polarity)
- **Maximum Sequence Length:** 2048 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Positive | <ul><li>'И порции " достойные ":И порции " достойные " .'</li><li>'Салаты вообще оказались вкуснейшими:Салаты вообще оказались вкуснейшими .'</li><li>'порадовала , большая пивная тарелка , действительно оказалась:Кухня порадовала , большая пивная тарелка , действительно оказалась большой и вкусной !'</li></ul> |
| Negative | <ul><li>'Потом официантка как будто пропала:Потом официантка как будто пропала , было не дозваться , чтобы что - то дозаказать , очень долго приходилось ждать , в итоге посчитали неправильно , в счет внесли на 2 пункта больше , чем мы заказывали .'</li><li>'Обслуживание не впечатлило .:Обслуживание не впечатлило .'</li><li>'приятно удивлена " китайским интерьером " - диванчики:Была приятно удивлена " китайским интерьером " - диванчики как в бистро , скатерти на столах по типу а - ля столовая , европейские светильники / люстры , в общем в плане интерьера китайского никакого абсолютно !'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"isolation-forest/setfit-absa-aspect",
"isolation-forest/setfit-absa-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
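For each input sentence, the call returns the extracted aspect spans together with their predicted polarities — typically a list of `{"span": ..., "polarity": ...}` dictionaries per sentence, though the exact output format may vary across SetFit versions.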
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 28.4766 | 92 |
| Label | Training Sample Count |
|:---------|:----------------------|
| Negative | 128 |
| Positive | 128 |
### Training Hyperparameters
- batch_size: (16, 2)
- num_epochs: (1, 16)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
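These arguments correspond to the fields of SetFit's `TrainingArguments`. As a minimal training sketch — where `train_dataset` is a hypothetical placeholder for an ABSA dataset with the `text`, `span`, `label`, and `ordinal` columns the trainer expects — a run with these settings might look like:

```python
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

# Initialize an ABSA model from the same backbone and spaCy pipeline as this card.
model = AbsaModel.from_pretrained(
    "cointegrated/rubert-tiny2",
    spacy_model="ru_core_news_lg",
)

args = TrainingArguments(
    batch_size=(16, 2),                 # (embedding fine-tuning, classifier head)
    num_epochs=(1, 16),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    seed=42,
)

trainer = AbsaTrainer(model, args=args, train_dataset=train_dataset)  # train_dataset: placeholder
trainer.train()
```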
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0005 | 1 | 0.2196 | - |
| 0.0242 | 50 | 0.2339 | - |
| 0.0484 | 100 | 0.2258 | - |
| 0.0727 | 150 | 0.246 | - |
| 0.0969 | 200 | 0.1963 | - |
| 0.1211 | 250 | 0.18 | - |
| 0.1453 | 300 | 0.1176 | - |
| 0.1696 | 350 | 0.0588 | - |
| 0.1938 | 400 | 0.0482 | - |
| 0.2180 | 450 | 0.1131 | - |
| 0.2422 | 500 | 0.0134 | - |
| 0.2665 | 550 | 0.0415 | - |
| 0.2907 | 600 | 0.0144 | - |
| 0.3149 | 650 | 0.012 | - |
| 0.3391 | 700 | 0.0091 | - |
| 0.3634 | 750 | 0.0055 | - |
| 0.3876 | 800 | 0.0054 | - |
| 0.4118 | 850 | 0.0055 | - |
| 0.4360 | 900 | 0.0072 | - |
| 0.4603 | 950 | 0.0094 | - |
| 0.4845 | 1000 | 0.0054 | - |
| 0.5087 | 1050 | 0.0045 | - |
| 0.5329 | 1100 | 0.003 | - |
| 0.5572 | 1150 | 0.0067 | - |
| 0.5814 | 1200 | 0.0041 | - |
| 0.6056 | 1250 | 0.0048 | - |
| 0.6298 | 1300 | 0.0053 | - |
| 0.6541 | 1350 | 0.0048 | - |
| 0.6783 | 1400 | 0.0038 | - |
| 0.7025 | 1450 | 0.0037 | - |
| 0.7267 | 1500 | 0.0031 | - |
| 0.7510 | 1550 | 0.0038 | - |
| 0.7752 | 1600 | 0.0032 | - |
| 0.7994 | 1650 | 0.0039 | - |
| 0.8236 | 1700 | 0.0032 | - |
| 0.8479 | 1750 | 0.0023 | - |
| 0.8721 | 1800 | 0.0029 | - |
| 0.8963 | 1850 | 0.0041 | - |
| 0.9205 | 1900 | 0.0026 | - |
| 0.9448 | 1950 | 0.0027 | - |
| 0.9690 | 2000 | 0.0035 | - |
| 0.9932 | 2050 | 0.003 | - |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- spaCy: 3.7.2
- Transformers: 4.39.3
- PyTorch: 2.1.2
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "cointegrated/rubert-tiny2", "widget": [{"text": "\u0430 \u043b \u0430 \u043f\u0430\u043b\u0442\u0443\u0441\u0430 \u0437\u0430\u043f\u0435\u0447\u0435\u043d\u043d\u043e\u0433\u043e \u2013 \u0445 \u043e:\u041f \u043e \u043f \u0440 \u043e \u0431 \u043e \u0432 \u0430 \u043b \u0430 \u043f\u0430\u043b\u0442\u0443\u0441\u0430 \u0437\u0430\u043f\u0435\u0447\u0435\u043d\u043d\u043e\u0433\u043e \u2013 \u0445 \u043e \u0440 \u043e \u0448 , \u0434 \u0430 \u0438 \u043a \u0440 \u0430 \u0441 \u0438 \u0432 \u043e \u0441 \u043c \u043e \u0442 \u0440 \u0438 \u0442 \u0441 \u044f \u043d \u0430 \u0442 \u0430 \u0440 \u0435 \u043b \u043a \u0435 ."}, {"text": "\u0441 \u043a\u0443\u0440\u0438\u0446\u0435\u0439 , \u043b\u043e\u0441\u043e\u0441\u044c \u0441\u043e \u0448\u043f\u0438\u043d\u0430\u0442\u043d\u044b\u043c \u0441\u043e\u0443\u0441\u043e\u043c , \u0447\u0430\u0439 \u043e\u0431\u043b\u0435\u043f\u0438\u0445\u043e\u0432\u044b\u0439:\u041f\u0440\u0438 \u043a\u0430\u0436\u0434\u043e\u043c \u043d\u043e\u0432\u043e\u043c \u043f\u043e\u0441\u0435\u0449\u0435\u043d\u0438\u0438 \u044f \u0441\u0442\u0430\u0440\u0430\u044e\u0441\u044c \u043f\u0440\u043e\u0431\u043e\u0432\u0430\u0442\u044c \u043d\u043e\u0432\u044b\u0435 \u0431\u043b\u044e\u0434\u0430 \u0438\u0437 \u043c\u0435\u043d\u044e , \u043e\u0441\u043e\u0431\u0435\u043d\u043d\u043e \u043c\u043d\u0435 \u043f\u043e\u043d\u0440\u0430\u0432\u0438\u043b\u0438\u0441\u044c : \u0446\u0435\u0437\u0430\u0440\u044c \u0441 \u043a\u0443\u0440\u0438\u0446\u0435\u0439 , \u043b\u043e\u0441\u043e\u0441\u044c \u0441\u043e \u0448\u043f\u0438\u043d\u0430\u0442\u043d\u044b\u043c \u0441\u043e\u0443\u0441\u043e\u043c , \u0447\u0430\u0439 \u043e\u0431\u043b\u0435\u043f\u0438\u0445\u043e\u0432\u044b\u0439 \u0438 \u0442\u0438\u0440\u0430\u043c\u0438\u0441\u0443 \u043e\u0442 \u0448\u0435\u0444 \u043f\u043e\u0432\u0430\u0440\u0430 ."}, {"text": ", \u043d\u043e \u043a\u0430\u0447\u0435\u0441\u0442\u0432\u043e \u0435\u0434\u044b \u0435\u0435 \u043d\u0435 \u0443\u043a\u0440\u0430\u0441\u0438\u043b\u043e:\u0421\u0432\u0430\u0434\u044c\u0431\u0430 , \u043a\u043e\u043d\u0435\u0447\u043d\u043e , \u043f\u0440\u043e\u0448\u043b\u0430 \u0432\u0435\u0441\u0435\u043b\u043e , \u043d\u043e \u043a\u0430\u0447\u0435\u0441\u0442\u0432\u043e \u0435\u0434\u044b \u0435\u0435 \u043d\u0435 \u0443\u043a\u0440\u0430\u0441\u0438\u043b\u043e ."}, {"text": "\u043d\u0430\u0439\u0442\u0438 \u0443\u044e\u0442\u043d\u043e\u0435 \u043d\u0435\u0434\u043e\u0440\u043e\u0433\u043e\u0435 \u043c\u0435\u0441\u0442\u0435\u0447\u043a\u043e \u0432 \u0440\u0430\u0439\u043e\u043d\u0435 \u043c\u0435\u0442\u0440\u043e:\u0414\u0443\u043c\u0430\u044e , \u0435\u0441\u043b\u0438 \u0441\u0442\u043e\u0438\u0442 \u0437\u0430\u0434\u0430\u0447\u0430 \u043d\u0430\u0439\u0442\u0438 \u0443\u044e\u0442\u043d\u043e\u0435 \u043d\u0435\u0434\u043e\u0440\u043e\u0433\u043e\u0435 \u043c\u0435\u0441\u0442\u0435\u0447\u043a\u043e \u0432 \u0440\u0430\u0439\u043e\u043d\u0435 \u043c\u0435\u0442\u0440\u043e \u043c\u043e\u0441\u043a\u043e\u0432\u0441\u043a\u0430\u044f , \u0442\u043e \u044d\u0442\u043e \u043d\u0430\u0432\u0435\u0440\u043d\u043e \u043e\u0434\u0438\u043d \u0438\u0437 \u043b\u0443\u0447\u0448\u0438\u0445 \u0432\u0430\u0440\u0438\u0430\u043d\u0442\u043e\u0432 ."}, {"text": "\u043e\u043d\u0438 \u043d\u0430\u0447\u0430\u043b\u0438 
\u0440\u0430\u0437\u043d\u043e\u043e\u0431\u0440\u0430\u0437\u0438\u0442\u044c \u043a\u0443\u0445\u043d\u044e \u043c\u044f\u0441\u043d\u044b\u043c\u0438 \u0431\u043b\u044e\u0434\u0430\u043c\u0438 ,:\u0425\u043e\u0447\u0435\u0442\u0441\u044f , \u0447\u0442\u043e\u0431\u044b \u043c\u043e\u0439 \u043e\u0442\u0437\u044b\u0432 \u0434\u043e\u0448\u0435\u043b \u0434\u043e \u0430\u0434\u043c\u0438\u043d\u0438\u0441\u0442\u0440\u0430\u0446\u0438\u0438 , \u0438 \u043e\u043d\u0438 \u043d\u0430\u0447\u0430\u043b\u0438 \u0440\u0430\u0437\u043d\u043e\u043e\u0431\u0440\u0430\u0437\u0438\u0442\u044c \u043a\u0443\u0445\u043d\u044e \u043c\u044f\u0441\u043d\u044b\u043c\u0438 \u0431\u043b\u044e\u0434\u0430\u043c\u0438 , \u0433\u0430\u0440\u043d\u0438\u0440\u0430\u043c\u0438 , \u0438\u043d\u0442\u0435\u0440\u0435\u0441\u043d\u044b\u043c\u0438 \u0441\u0430\u043b\u0430\u0442\u0430\u043c\u0438 \u0438 \u0441\u0443\u043f\u0430\u043c\u0438 ."}], "pipeline_tag": "text-classification", "inference": false} | isolation-forest/setfit-absa-polarity | null | [
"setfit",
"safetensors",
"bert",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:cointegrated/rubert-tiny2",
"region:us"
] | null | 2024-04-16T14:27:35+00:00 | [
"2209.11055"
] | [] | TAGS
#setfit #safetensors #bert #absa #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-cointegrated/rubert-tiny2 #region-us
| SetFit Polarity Model with cointegrated/rubert-tiny2
====================================================
This is a SetFit model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses cointegrated/rubert-tiny2 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a Sentence Transformer with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. Use this SetFit model to classify the filtered aspect span candidates.
Model Details
-------------
### Model Description
* Model Type: SetFit
* Sentence Transformer body: cointegrated/rubert-tiny2
* Classification head: a LogisticRegression instance
* spaCy Model: ru\_core\_news\_lg
* SetFitABSA Aspect Model: isolation-forest/setfit-absa-aspect
* SetFitABSA Polarity Model: isolation-forest/setfit-absa-polarity
* Maximum Sequence Length: 2048 tokens
* Number of Classes: 2 classes
### Model Sources
* Repository: SetFit on GitHub
* Paper: Efficient Few-Shot Learning Without Prompts
* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts
### Model Labels
Uses
----
### Direct Use for Inference
First install the SetFit library:
Then you can load this model and run inference.
Training Details
----------------
### Training Set Metrics
### Training Hyperparameters
* batch\_size: (16, 2)
* num\_epochs: (1, 16)
* max\_steps: -1
* sampling\_strategy: oversampling
* body\_learning\_rate: (2e-05, 1e-05)
* head\_learning\_rate: 0.01
* loss: CosineSimilarityLoss
* distance\_metric: cosine\_distance
* margin: 0.25
* end\_to\_end: False
* use\_amp: False
* warmup\_proportion: 0.1
* seed: 42
* eval\_max\_steps: -1
* load\_best\_model\_at\_end: False
### Training Results
### Framework Versions
* Python: 3.10.13
* SetFit: 1.0.3
* Sentence Transformers: 2.7.0
* spaCy: 3.7.2
* Transformers: 4.39.3
* PyTorch: 2.1.2
* Datasets: 2.18.0
* Tokenizers: 0.15.2
### BibTeX
| [
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: cointegrated/rubert-tiny2\n* Classification head: a LogisticRegression instance\n* spaCy Model: ru\\_core\\_news\\_lg\n* SetFitABSA Aspect Model: isolation-forest/setfit-absa-aspect\n* SetFitABSA Polarity Model: isolation-forest/setfit-absa-polarity\n* Maximum Sequence Length: 2048 tokens\n* Number of Classes: 2 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 2)\n* num\\_epochs: (1, 16)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.13\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* spaCy: 3.7.2\n* Transformers: 4.39.3\n* PyTorch: 2.1.2\n* Datasets: 2.18.0\n* Tokenizers: 0.15.2",
"### BibTeX"
] | [
"TAGS\n#setfit #safetensors #bert #absa #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-cointegrated/rubert-tiny2 #region-us \n",
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: cointegrated/rubert-tiny2\n* Classification head: a LogisticRegression instance\n* spaCy Model: ru\\_core\\_news\\_lg\n* SetFitABSA Aspect Model: isolation-forest/setfit-absa-aspect\n* SetFitABSA Polarity Model: isolation-forest/setfit-absa-polarity\n* Maximum Sequence Length: 2048 tokens\n* Number of Classes: 2 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 2)\n* num\\_epochs: (1, 16)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.13\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* spaCy: 3.7.2\n* Transformers: 4.39.3\n* PyTorch: 2.1.2\n* Datasets: 2.18.0\n* Tokenizers: 0.15.2",
"### BibTeX"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "sft"]} | kai-oh/mistral-7b-ift-best-hf | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:27:47+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cackerman/rewrites_gemma7_4bit_ft_full | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:28:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-29-layer](https://huggingface.co/Citaman/command-r-29-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: Citaman/command-r-29-layer
    layer_range: [0, 28]
  - model: Citaman/command-r-29-layer
    layer_range: [1, 29]
merge_method: slerp
base_model: Citaman/command-r-29-layer
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.5
dtype: bfloat16
```
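As a quick sanity check, the merged checkpoint loads like any other `transformers` causal LM. A minimal, hedged sketch (the dtype mirrors the merge config above; `device_map="auto"` assumes `accelerate` is installed):

```python
# Minimal loading sketch for the merged model; settings are illustrative, not from the card author.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Citaman/command-r-28-layer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```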
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-29-layer"]} | Citaman/command-r-28-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-29-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:32:49+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-29-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-29-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-29-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-29-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-29-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results2.3epoci
This model is a fine-tuned version of [RoBerta-base](https://huggingface.co/RoBerta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6171
- Accuracy: 0.7133
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
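For reference, these settings correspond to a standard `Trainer` setup. A hedged sketch (the `output_dir` is a placeholder; argument names follow the Hugging Face `TrainingArguments` API):

```python
# Hypothetical TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results2.3epoci",   # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=3,
)
```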
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "RoBerta-base", "model-index": [{"name": "results2.3epoci", "results": []}]} | dianamihalache27/results2.3epoci | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:RoBerta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:33:12+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-RoBerta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# results2.3epoci
This model is a fine-tuned version of RoBerta-base on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6171
- Accuracy: 0.7133
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# results2.3epoci\n\nThis model is a fine-tuned version of RoBerta-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6171\n- Accuracy: 0.7133\n- F1: 0.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-RoBerta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# results2.3epoci\n\nThis model is a fine-tuned version of RoBerta-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6171\n- Accuracy: 0.7133\n- F1: 0.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7365
- F1 Score: 0.7275
- Accuracy: 0.7275
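Since this repository holds a PEFT adapter rather than a full model, it is loaded on top of the base checkpoint. A minimal sketch (the number of labels and any `trust_remote_code` requirement of the base model are assumptions):

```python
# Sketch: attach the PEFT adapter to its base model for inference.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3-seqsight_16384_512_56M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)  # binary task assumed
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```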
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5596 | 33.33 | 200 | 0.6108 | 0.7327 | 0.7328 |
| 0.3615 | 66.67 | 400 | 0.7353 | 0.7194 | 0.7194 |
| 0.2668 | 100.0 | 600 | 0.8690 | 0.7239 | 0.7241 |
| 0.1986 | 133.33 | 800 | 1.0061 | 0.7159 | 0.7174 |
| 0.1631 | 166.67 | 1000 | 1.0679 | 0.7347 | 0.7348 |
| 0.1336 | 200.0 | 1200 | 1.1490 | 0.7374 | 0.7375 |
| 0.1127 | 233.33 | 1400 | 1.2494 | 0.7277 | 0.7281 |
| 0.0995 | 266.67 | 1600 | 1.2434 | 0.7335 | 0.7335 |
| 0.0876 | 300.0 | 1800 | 1.2625 | 0.7372 | 0.7375 |
| 0.0773 | 333.33 | 2000 | 1.3646 | 0.7320 | 0.7321 |
| 0.0703 | 366.67 | 2200 | 1.3362 | 0.7338 | 0.7341 |
| 0.0645 | 400.0 | 2400 | 1.2613 | 0.7402 | 0.7401 |
| 0.0577 | 433.33 | 2600 | 1.3117 | 0.7351 | 0.7355 |
| 0.0538 | 466.67 | 2800 | 1.4113 | 0.7345 | 0.7348 |
| 0.0507 | 500.0 | 3000 | 1.4039 | 0.7442 | 0.7442 |
| 0.0454 | 533.33 | 3200 | 1.4591 | 0.7422 | 0.7422 |
| 0.0424 | 566.67 | 3400 | 1.4512 | 0.7448 | 0.7448 |
| 0.0401 | 600.0 | 3600 | 1.4655 | 0.7441 | 0.7442 |
| 0.0374 | 633.33 | 3800 | 1.5890 | 0.7412 | 0.7415 |
| 0.036 | 666.67 | 4000 | 1.5406 | 0.7422 | 0.7422 |
| 0.0332 | 700.0 | 4200 | 1.5786 | 0.7434 | 0.7435 |
| 0.0315 | 733.33 | 4400 | 1.5369 | 0.7453 | 0.7455 |
| 0.0297 | 766.67 | 4600 | 1.5281 | 0.7441 | 0.7442 |
| 0.0285 | 800.0 | 4800 | 1.4851 | 0.7508 | 0.7508 |
| 0.0278 | 833.33 | 5000 | 1.5988 | 0.7488 | 0.7488 |
| 0.026 | 866.67 | 5200 | 1.4406 | 0.7529 | 0.7528 |
| 0.0253 | 900.0 | 5400 | 1.6166 | 0.7500 | 0.7502 |
| 0.0244 | 933.33 | 5600 | 1.6700 | 0.7480 | 0.7482 |
| 0.023 | 966.67 | 5800 | 1.6632 | 0.7420 | 0.7428 |
| 0.0218 | 1000.0 | 6000 | 1.6396 | 0.7484 | 0.7488 |
| 0.0208 | 1033.33 | 6200 | 1.6999 | 0.7504 | 0.7508 |
| 0.0202 | 1066.67 | 6400 | 1.6575 | 0.7515 | 0.7515 |
| 0.0198 | 1100.0 | 6600 | 1.5728 | 0.7500 | 0.7502 |
| 0.0192 | 1133.33 | 6800 | 1.6903 | 0.7464 | 0.7468 |
| 0.0186 | 1166.67 | 7000 | 1.6586 | 0.7412 | 0.7422 |
| 0.0172 | 1200.0 | 7200 | 1.6625 | 0.7512 | 0.7515 |
| 0.0177 | 1233.33 | 7400 | 1.6543 | 0.7506 | 0.7508 |
| 0.0167 | 1266.67 | 7600 | 1.5729 | 0.7515 | 0.7515 |
| 0.0165 | 1300.0 | 7800 | 1.6838 | 0.7541 | 0.7542 |
| 0.0156 | 1333.33 | 8000 | 1.6712 | 0.7494 | 0.7495 |
| 0.0155 | 1366.67 | 8200 | 1.7200 | 0.7486 | 0.7488 |
| 0.0153 | 1400.0 | 8400 | 1.6690 | 0.7473 | 0.7475 |
| 0.0145 | 1433.33 | 8600 | 1.7480 | 0.7491 | 0.7495 |
| 0.0142 | 1466.67 | 8800 | 1.6859 | 0.7521 | 0.7522 |
| 0.0143 | 1500.0 | 9000 | 1.7143 | 0.7527 | 0.7528 |
| 0.0136 | 1533.33 | 9200 | 1.7057 | 0.7555 | 0.7555 |
| 0.0132 | 1566.67 | 9400 | 1.7074 | 0.7519 | 0.7522 |
| 0.0127 | 1600.0 | 9600 | 1.7604 | 0.7473 | 0.7475 |
| 0.0133 | 1633.33 | 9800 | 1.6914 | 0.7534 | 0.7535 |
| 0.0127 | 1666.67 | 10000 | 1.7151 | 0.7548 | 0.7548 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-16T14:33:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3-seqsight\_16384\_512\_56M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7365
* F1 Score: 0.7275
* Accuracy: 0.7275
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-squadcovid
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9162
- eval_runtime: 146.4405
- eval_samples_per_second: 73.682
- eval_steps_per_second: 2.308
- epoch: 1.27
- step: 6000
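For extractive QA checkpoints like this one, the `question-answering` pipeline is the simplest entry point. A minimal sketch (the question/context pair is a made-up example):

```python
# Quick inference sketch with the question-answering pipeline.
from transformers import pipeline

qa = pipeline("question-answering", model="Rahul13/roberta-finetuned-squadcovid")
result = qa(
    question="What does the model predict?",  # illustrative question
    context="This fine-tuned RoBERTa model predicts answer spans within a given passage.",
)
print(result["answer"], result["score"])
```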
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "base_model": "deepset/roberta-base-squad2", "model-index": [{"name": "roberta-finetuned-squadcovid", "results": []}]} | Rahul13/roberta-finetuned-squadcovid | null | [
"transformers",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:38:04+00:00 | [] | [] | TAGS
#transformers #safetensors #roberta #question-answering #generated_from_trainer #base_model-deepset/roberta-base-squad2 #license-cc-by-4.0 #endpoints_compatible #region-us
|
# roberta-finetuned-squadcovid
This model is a fine-tuned version of deepset/roberta-base-squad2 on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9162
- eval_runtime: 146.4405
- eval_samples_per_second: 73.682
- eval_steps_per_second: 2.308
- epoch: 1.27
- step: 6000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# roberta-finetuned-squadcovid\n\nThis model is a fine-tuned version of deepset/roberta-base-squad2 on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.9162\n- eval_runtime: 146.4405\n- eval_samples_per_second: 73.682\n- eval_steps_per_second: 2.308\n- epoch: 1.27\n- step: 6000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #roberta #question-answering #generated_from_trainer #base_model-deepset/roberta-base-squad2 #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"# roberta-finetuned-squadcovid\n\nThis model is a fine-tuned version of deepset/roberta-base-squad2 on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.9162\n- eval_runtime: 146.4405\n- eval_samples_per_second: 73.682\n- eval_steps_per_second: 2.308\n- epoch: 1.27\n- step: 6000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
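In the absence of an official snippet, here is a hedged loading sketch inferred from the repository metadata (base model `mistralai/Mistral-7B-Instruct-v0.2`, adapter `Nour0707/mistral_tr2_text_to_sql`); the prompt framing is an assumption:

```python
# Hypothetical PEFT loading sketch inferred from repo metadata; not provided by the card author.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Nour0707/mistral_tr2_text_to_sql"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Translate to SQL: list all customers who placed an order in 2023."  # assumed task framing
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```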
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"} | Nour0707/mistral_tr2_text_to_sql | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-04-16T14:38:20+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-generation | transformers | 
# Arithmo-Wizard-2-7B
Arithmo-Wizard-2-7B is a merge of the following models using [Mergekit](https://github.com/arcee-ai/mergekit):
* [lucyknada/microsoft_WizardLM-2-7B](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B)
* [upaya07/Arithmo2-Mistral-7B](https://huggingface.co/upaya07/Arithmo2-Mistral-7B)
## 🧩 Configuration
```yaml
name: Arithmo-Wizard-2-7B
base_model:
  model:
    path: lucyknada/microsoft_WizardLM-2-7B
dtype: float16
merge_method: dare_linear
parameters:
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: lucyknada/microsoft_WizardLM-2-7B
  - layer_range: [0, 32]
    model:
      model:
        path: upaya07/Arithmo2-Mistral-7B
    parameters:
      weight: 0.5
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "saucam/Arithmo-Wizard-2-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Since the base model uses the Vicuna prompt format, the merged model also works well with that format:
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "saucam/Arithmo-Wizard-2-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
def format_prompt(prompt: str) -> str:
    text = f"""
### Human: {prompt}
### Assistant:
"""
    return text.strip()
tokenizer = AutoTokenizer.from_pretrained(model)
# prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
prompt = format_prompt("Question: There are total 10 children. I have to give 1 apple to first child, 2 apples to second child, 3 apples to third child, and so on. How many apples do I need?")
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Sample Runs
```
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading checkpoint shards: 100%|███████████████████████████████████████████████████| 2/2 [00:12<00:00, 6.38s/it]
### Human: Question: There are total 10 children. I have to give 1 apple to first child, 2 apples to second child, 3 apples to third child, and so on. How many apples do I need?
### Assistant:
To find the total number of apples needed, we can use the formula for the sum of an arithmetic series. The formula is:
Sum = (n/2) * (2a + (n-1)d)
where n is the number of terms, a is the first term, and d is the common difference.
In this case, n = 10, a = 1, and d = 1 (since each child gets one more apple than the previous child).
Let's plug in the values into the formula:
Sum = (10/2) * (2*1 + (10-1)*1)
Sum = 5 * (2 + 9)
Sum = 5 * 11
Sum = 55
Therefore, you need 55 apples in total.
### Human: 55 apples. Thanks!
### Assistant: You're welcome!
```
## Evaluation Results
https://github.com/saucam/model_evals/tree/main/saucam/Arithmo-Wizard-2-7B | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lucyknada/microsoft_WizardLM-2-7B", "upaya07/Arithmo2-Mistral-7B"], "base_model": ["lucyknada/microsoft_WizardLM-2-7B"]} | saucam/Arithmo-Wizard-2-7B | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lucyknada/microsoft_WizardLM-2-7B",
"upaya07/Arithmo2-Mistral-7B",
"conversational",
"base_model:lucyknada/microsoft_WizardLM-2-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:39:46+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lucyknada/microsoft_WizardLM-2-7B #upaya07/Arithmo2-Mistral-7B #conversational #base_model-lucyknada/microsoft_WizardLM-2-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # wav2vec2-xlsr-53-CV-demo-google-colab-Ezra_William_Prod14
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3505
- Wer: 0.3279
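A minimal transcription sketch via the ASR pipeline (the audio path is a placeholder; 16 kHz mono audio is assumed, as is standard for XLSR models):

```python
# Sketch: transcribe a local audio file with this fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="EzraWilliam/wav2vec2-xlsr-53-CV-demo-google-colab-Ezra_William_Prod14",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```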
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9451 | 1.0 | 278 | 2.9182 | 1.0 |
| 2.87 | 2.0 | 556 | 2.7116 | 1.0 |
| 1.1102 | 3.0 | 834 | 0.6030 | 0.5907 |
| 0.6952 | 4.0 | 1112 | 0.4691 | 0.4755 |
| 0.5976 | 5.0 | 1390 | 0.4316 | 0.4263 |
| 0.4842 | 6.0 | 1668 | 0.3887 | 0.3842 |
| 0.4444 | 7.0 | 1946 | 0.3722 | 0.3670 |
| 0.4221 | 8.0 | 2224 | 0.3721 | 0.3538 |
| 0.3929 | 9.0 | 2502 | 0.3527 | 0.3463 |
| 0.3611 | 10.0 | 2780 | 0.3538 | 0.3386 |
| 0.3669 | 11.0 | 3058 | 0.3513 | 0.3303 |
| 0.3517 | 12.0 | 3336 | 0.3505 | 0.3279 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_13_0"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-large-xlsr-53", "model-index": [{"name": "wav2vec2-xlsr-53-CV-demo-google-colab-Ezra_William_Prod14", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "common_voice_13_0", "type": "common_voice_13_0", "config": "id", "split": "test", "args": "id"}, "metrics": [{"type": "wer", "value": 0.32789454277286134, "name": "Wer"}]}]}]} | EzraWilliam/wav2vec2-xlsr-53-CV-demo-google-colab-Ezra_William_Prod14 | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"base_model:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:39:50+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_13_0 #base_model-facebook/wav2vec2-large-xlsr-53 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| wav2vec2-xlsr-53-CV-demo-google-colab-Ezra\_William\_Prod14
===========================================================
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common\_voice\_13\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3505
* Wer: 0.3279
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 12
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_13_0 #base_model-facebook/wav2vec2-large-xlsr-53 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# PatchTST model pre-trained on ETTh1 dataset
<!-- Provide a quick summary of what the model is/does. -->
[`PatchTST`](https://huggingface.co/docs/transformers/model_doc/patchtst) is a transformer-based model for time series modeling tasks, including forecasting, regression, and classification. This repository contains a pre-trained `PatchTST` model encompassing all seven channels of the `ETTh1` dataset.
This particular pre-trained model produces a Mean Squared Error (MSE) of 0.3881 on the `test` split of the `ETTh1` dataset when forecasting 96 hours into the future with a historical data window of 512 hours.
For training and evaluating a `PatchTST` model, you can refer to this [demo notebook](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tst_getting_started.ipynb).
## Model Details
### Model Description
The `PatchTST` model was proposed in [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/abs/2211.14730) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam.
At a high level, the model vectorizes time series into patches of a given size and encodes the resulting sequence of vectors via a Transformer, which then outputs the prediction-length forecast via an appropriate head.
The model is based on two key components: (i) segmentation of time series into subseries-level patches, which serve as input tokens to the Transformer; and (ii) channel-independence, where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. The patching design has a three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are reduced quadratically for the same look-back window; and the model can attend to a longer history. The channel-independent patch time series Transformer (PatchTST) improves long-term forecasting accuracy significantly compared with state-of-the-art Transformer-based models.
In addition, PatchTST has a modular design to seamlessly support masked time series pre-training as well as direct time series forecasting, classification, and regression.
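The patching step itself is easy to picture as a strided window over each channel. A toy sketch (patch length 16 and stride 8 are illustrative values, not this checkpoint's actual configuration):

```python
# Toy illustration of patching one univariate series; sizes are illustrative.
import torch

series = torch.randn(512)          # one channel, 512 time steps
patches = series.unfold(0, 16, 8)  # (num_patches, patch_length) = (63, 16)
print(patches.shape)
```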
<img src="patchtst_architecture.png" alt="Architecture" width="600" />
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [PatchTST Hugging Face](https://huggingface.co/docs/transformers/model_doc/patchtst)
- **Paper:** [PatchTST ICLR 2023 paper](https://dl.acm.org/doi/abs/10.1145/3580305.3599533)
- **Demo:** [Get started with PatchTST](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tst_getting_started.ipynb)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This pre-trained model can be employed for fine-tuning or evaluation using any Electrical Transformer dataset that has the same channels as the `ETTh1` dataset, specifically: `HUFL, HULL, MUFL, MULL, LUFL, LULL, OT`. The model is designed to predict the next 96 hours based on the input values from the preceding 512 hours. It is crucial to normalize the data. For a more comprehensive understanding of data pre-processing, please consult the paper or the demo.
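Normalization can be applied per channel before the data reaches the model. A hedged scikit-learn sketch (the CSV path is a placeholder, and the train-split boundary of 8640 hourly rows is an assumption based on the common ETTh1 convention):

```python
# Per-channel standardization sketch for the seven ETTh1 columns.
import pandas as pd
from sklearn.preprocessing import StandardScaler

channels = ["HUFL", "HULL", "MUFL", "MULL", "LUFL", "LULL", "OT"]
df = pd.read_csv("ETTh1.csv")  # placeholder path to the dataset

scaler = StandardScaler().fit(df.loc[:8639, channels])  # fit on the train split only (assumed 12 months)
df[channels] = scaler.transform(df[channels])
```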
## How to Get Started with the Model
Use the code below to get started with the model.
[Demo](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tst_getting_started.ipynb)
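For a quick forward pass outside the notebook, something like the following should work (input shapes follow the card: a 512-step history with seven channels; the output attribute name follows the `transformers` PatchTST documentation, so treat this as a sketch and adjust to the checkpoint's actual `num_input_channels`):

```python
# Sketch: one forecasting forward pass with dummy, pre-normalized inputs.
import torch
from transformers import PatchTSTForPrediction

model = PatchTSTForPrediction.from_pretrained("chungimungi/PatchTST-2-input-channels")

past_values = torch.randn(1, 512, 7)  # (batch, context_length, channels) per the card text
with torch.no_grad():
    outputs = model(past_values=past_values)
forecast = outputs.prediction_outputs  # expected shape: (batch, 96, channels)
print(forecast.shape)
```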
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[`ETTh1`/train split](https://github.com/zhouhaoyi/ETDataset/blob/main/ETT-small/ETTh1.csv).
Train/validation/test splits are shown in the [demo](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tst_getting_started.ipynb).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training Results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4306 | 1.0 | 1005 | 0.7268 |
| 0.3641 | 2.0 | 2010 | 0.7456 |
| 0.348 | 3.0 | 3015 | 0.7161 |
| 0.3379 | 4.0 | 4020 | 0.7428 |
| 0.3284 | 5.0 | 5025 | 0.7681 |
| 0.321 | 6.0 | 6030 | 0.7842 |
| 0.314 | 7.0 | 7035 | 0.7991 |
| 0.3088 | 8.0 | 8040 | 0.8021 |
| 0.3053 | 9.0 | 9045 | 0.8199 |
| 0.3019 | 10.0 | 10050 | 0.8173 |
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data
[`ETTh1`/test split](https://github.com/zhouhaoyi/ETDataset/blob/main/ETT-small/ETTh1.csv).
Train/validation/test splits are shown in the [demo](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tst_getting_started.ipynb).
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
Mean Squared Error (MSE).
### Results
It achieves a MSE of 0.3881 on the evaluation dataset.
#### Hardware
1 NVIDIA A100 GPU
#### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.14.1
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{nie2023time,
title={A Time Series is Worth 64 Words: Long-term Forecasting with Transformers},
author={Yuqi Nie and Nam H. Nguyen and Phanwadee Sinthong and Jayant Kalagnanam},
year={2023},
eprint={2211.14730},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
**APA:**
```
Nie, Y., Nguyen, N., Sinthong, P., & Kalagnanam, J. (2023). A Time Series is Worth 64 Words: Long-term Forecasting with Transformers. arXiv preprint arXiv:2211.14730.
``` | {"license": "cdla-permissive-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "patchtst_etth1_forecast", "results": []}]} | chungimungi/PatchTST-2-input-channels | null | [
"transformers",
"safetensors",
"patchtst",
"generated_from_trainer",
"arxiv:2211.14730",
"license:cdla-permissive-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:40:54+00:00 | [
"2211.14730"
] | [] | TAGS
#transformers #safetensors #patchtst #generated_from_trainer #arxiv-2211.14730 #license-cdla-permissive-2.0 #endpoints_compatible #region-us
| PatchTST model pre-trained on ETTh1 dataset
===========================================
'PatchTST' is a transformer-based model for time series modeling tasks, including forecasting, regression, and classification. This repository contains a pre-trained 'PatchTST' model encompassing all seven channels of the 'ETTh1' dataset.
This particular pre-trained model produces a Mean Squared Error (MSE) of 0.3881 on the 'test' split of the 'ETTh1' dataset when forecasting 96 hours into the future with a historical data window of 512 hours.
For training and evaluating a 'PatchTST' model, you can refer to this demo notebook.
Model Details
-------------
### Model Description
The 'PatchTST' model was proposed in A Time Series is Worth 64 Words: Long-term Forecasting with Transformers by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.
At a high level the model vectorizes time series into patches of a given size and encodes the resulting sequence of vectors via a Transformer that then outputs the prediction length forecast via an appropriate head.
The model is based on two key components: (i) segmentation of time series into subseries-level patches which are served as input tokens to Transformer; (ii) channel-independence where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. The patching design naturally has three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced given the same look-back window; and the model can attend longer history. Our channel-independent patch time series Transformer (PatchTST) can improve the long-term forecasting accuracy significantly when compared with that of SOTA Transformer-based models.
In addition, PatchTST has a modular design to seamlessly support masked time series pre-training as well as direct time series forecasting, classification, and regression.

### Model Sources
* Repository: PatchTST Hugging Face
* Paper: PatchTST ICLR 2023 paper
* Demo: Get started with PatchTST
Uses
----
This pre-trained model can be employed for fine-tuning or evaluation using any Electrical Transformer dataset that has the same channels as the 'ETTh1' dataset, specifically: 'HUFL, HULL, MUFL, MULL, LUFL, LULL, OT'. The model is designed to predict the next 96 hours based on the input values from the preceding 512 hours. It is crucial to normalize the data. For a more comprehensive understanding of data pre-processing, please consult the paper or the demo.
How to Get Started with the Model
---------------------------------
Use the code below to get started with the model.
Demo
Training Details
----------------
### Training Data
'ETTh1'/train split.
Train/validation/test splits are shown in the demo.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training Results
Evaluation
----------
### Testing Data
'ETTh1'/test split.
Train/validation/test splits are shown in the demo.
### Metrics
Mean Squared Error (MSE).
### Results
It achieves a MSE of 0.3881 on the evaluation dataset.
#### Hardware
1 NVIDIA A100 GPU
#### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.0.1
* Datasets 2.14.4
* Tokenizers 0.14.1
BibTeX:
APA:
| [
"### Model Description\n\n\nThe 'PatchTST' model was proposed in A Time Series is Worth 64 Words: Long-term Forecasting with Transformers by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.\n\n\nAt a high level the model vectorizes time series into patches of a given size and encodes the resulting sequence of vectors via a Transformer that then outputs the prediction length forecast via an appropriate head.\n\n\nThe model is based on two key components: (i) segmentation of time series into subseries-level patches which are served as input tokens to Transformer; (ii) channel-independence where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. The patching design naturally has three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced given the same look-back window; and the model can attend longer history. Our channel-independent patch time series Transformer (PatchTST) can improve the long-term forecasting accuracy significantly when compared with that of SOTA Transformer-based models.\n\n\nIn addition, PatchTST has a modular design to seamlessly support masked time series pre-training as well as direct time series forecasting, classification, and regression.\n\n\n",
"### Model Sources\n\n\n* Repository: PatchTST Hugging Face\n* Paper: PatchTST ICLR 2023 paper\n* Demo: Get started with PatchTST\n\n\nUses\n----\n\n\nThis pre-trained model can be employed for fine-tuning or evaluation using any Electrical Transformer dataset that has the same channels as the 'ETTh1' dataset, specifically: 'HUFL, HULL, MUFL, MULL, LUFL, LULL, OT'. The model is designed to predict the next 96 hours based on the input values from the preceding 512 hours. It is crucial to normalize the data. For a more comprehensive understanding of data pre-processing, please consult the paper or the demo.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nUse the code below to get started with the model.\n\n\nDemo\n\n\nTraining Details\n----------------",
"### Training Data\n\n\n'ETTh1'/train split.\nTrain/validation/test splits are shown in the demo.",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training Results\n\n\n\nEvaluation\n----------",
"### Testing Data\n\n\n'ETTh1'/test split.\nTrain/validation/test splits are shown in the demo.",
"### Metrics\n\n\nMean Squared Error (MSE).",
"### Results\n\n\nIt achieves a MSE of 0.3881 on the evaluation dataset.",
"#### Hardware\n\n\n1 NVIDIA A100 GPU",
"#### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.0.1\n* Datasets 2.14.4\n* Tokenizers 0.14.1\n\n\nBibTeX:\n\n\nAPA:"
] | [
"TAGS\n#transformers #safetensors #patchtst #generated_from_trainer #arxiv-2211.14730 #license-cdla-permissive-2.0 #endpoints_compatible #region-us \n",
"### Model Description\n\n\nThe 'PatchTST' model was proposed in A Time Series is Worth 64 Words: Long-term Forecasting with Transformers by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.\n\n\nAt a high level the model vectorizes time series into patches of a given size and encodes the resulting sequence of vectors via a Transformer that then outputs the prediction length forecast via an appropriate head.\n\n\nThe model is based on two key components: (i) segmentation of time series into subseries-level patches which are served as input tokens to Transformer; (ii) channel-independence where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. The patching design naturally has three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced given the same look-back window; and the model can attend longer history. Our channel-independent patch time series Transformer (PatchTST) can improve the long-term forecasting accuracy significantly when compared with that of SOTA Transformer-based models.\n\n\nIn addition, PatchTST has a modular design to seamlessly support masked time series pre-training as well as direct time series forecasting, classification, and regression.\n\n\n",
"### Model Sources\n\n\n* Repository: PatchTST Hugging Face\n* Paper: PatchTST ICLR 2023 paper\n* Demo: Get started with PatchTST\n\n\nUses\n----\n\n\nThis pre-trained model can be employed for fine-tuning or evaluation using any Electrical Transformer dataset that has the same channels as the 'ETTh1' dataset, specifically: 'HUFL, HULL, MUFL, MULL, LUFL, LULL, OT'. The model is designed to predict the next 96 hours based on the input values from the preceding 512 hours. It is crucial to normalize the data. For a more comprehensive understanding of data pre-processing, please consult the paper or the demo.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nUse the code below to get started with the model.\n\n\nDemo\n\n\nTraining Details\n----------------",
"### Training Data\n\n\n'ETTh1'/train split.\nTrain/validation/test splits are shown in the demo.",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training Results\n\n\n\nEvaluation\n----------",
"### Testing Data\n\n\n'ETTh1'/test split.\nTrain/validation/test splits are shown in the demo.",
"### Metrics\n\n\nMean Squared Error (MSE).",
"### Results\n\n\nIt achieves a MSE of 0.3881 on the evaluation dataset.",
"#### Hardware\n\n\n1 NVIDIA A100 GPU",
"#### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.0.1\n* Datasets 2.14.4\n* Tokenizers 0.14.1\n\n\nBibTeX:\n\n\nAPA:"
] |
fill-mask | transformers | We introduce BERTurk-Legal, a transformer-based language model for retrieving prior legal cases. BERTurk-Legal is pre-trained on a dataset from the Turkish legal domain. This dataset does not contain any labels related to the prior court case retrieval task; instead, masked language modeling is used to train BERTurk-Legal in a self-supervised manner. With zero-shot classification, BERTurk-Legal provides state-of-the-art results on a dataset of legal cases from the Court of Cassation of Turkey. The results of the experiments show the necessity of developing language models specific to the Turkish legal domain. Details of BERTurk-Legal can be found in the paper cited in the Citation section below.
The test dataset can be accessed from the following link: https://github.com/koc-lab/yargitay_retrieval_dataset
The model can be loaded and used to create document embeddings as follows; the resulting document embeddings can then be used for retrieval.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

bert_model = "KocLab-Bilkent/BERTurk-Legal"
model = AutoModelForSequenceClassification.from_pretrained(bert_model, output_hidden_states=True)
tokenizer = AutoTokenizer.from_pretrained(bert_model)

tokens = tokenizer("Örnek metin", return_tensors="pt")  # a dummy text is provided as input
output = model(**tokens)  # the encoding must be unpacked into keyword arguments
docEmbeddings = output.hidden_states[-1]  # last-layer token embeddings: (1, seq_len, hidden_size)
```
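
`docEmbeddings` above contains one vector per token; for retrieval you typically need a single vector per document. A common choice (an assumption on our part, not something specified by the authors) is attention-masked mean pooling:

```python
import torch

# Mean-pool the last-layer token embeddings into one document vector,
# ignoring padding positions via the attention mask.
with torch.no_grad():
    mask = tokens["attention_mask"].unsqueeze(-1).float()             # (1, seq_len, 1)
    doc_vector = (docEmbeddings * mask).sum(dim=1) / mask.sum(dim=1)  # (1, hidden_size)
```

Cosine similarity between such vectors can then rank candidate cases against a query case.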
## Citation
If you use the model, please cite the following conference paper.
```
@inproceedings{ozturk23berturkLegal,
author={\"{O}zt\"{u}rk, Ceyhun E. and \"{O}z\c{c}elik, {\c{S}}. Bar{\i}\c{s} and Aykut Ko\c{c}},
booktitle={2023 31st Signal Processing and Communications Applications Conference (SIU)},
title={{A Transformer-Based Prior Legal Case Retrieval Method}},
year={2023},
volume={},
number={},
pages={1-4}
}
``` | {"language": ["tr"], "license": "mit", "tags": ["legal"], "metrics": ["f1", "precision", "recall"]} | KocLab-Bilkent/BERTurk-Legal | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"legal",
"tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:41:21+00:00 | [] | [
"tr"
] | TAGS
#transformers #pytorch #bert #fill-mask #legal #tr #license-mit #autotrain_compatible #endpoints_compatible #region-us
| We introduce BERTurk-Legal, a transformer-based language model for retrieving prior legal cases. BERTurk-Legal is pre-trained on a dataset from the Turkish legal domain. This dataset does not contain any labels related to the prior court case retrieval task. Masked language modeling is used to train BERTurk-Legal in a self-supervised manner. With zero-shot classification, BERTurk-Legal provides state-of-the-art results on the dataset consisting of legal cases of the Court of Cassation of Turkey. The results of the experiments show the necessity of developing language models specific to the Turkish law domain. Details of BERTurk-Legal can be found in the paper mentioned in the Citation section below.
Test dataset can be accessed from the following link: URL
The model can be loaded and used to create document embeddings as follows. Then, the document embeddings can be utilized for retrieval.
If you use the model, please cite the following conference paper.
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #legal #tr #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Dipan19/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6681
- Validation Loss: 1.9723
- Epoch: 2
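
The card does not include inference code; a minimal sketch for extractive QA with this checkpoint (the repo id is taken from this card, and `framework="tf"` is an assumption since only TensorFlow weights are listed in the framework versions):

```python
from transformers import pipeline

# Only TF weights are reported on this card, so request the TF backend explicitly.
qa = pipeline("question-answering", model="Dipan19/my_awesome_qa_model", framework="tf")

result = qa(
    question="What was the final train loss?",
    context="The model reached a train loss of 1.6681 after three epochs.",
)
print(result["answer"], result["score"])
```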
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5407 | 2.3964 | 0 |
| 1.9417 | 1.9723 | 1 |
| 1.6681 | 1.9723 | 2 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "Dipan19/my_awesome_qa_model", "results": []}]} | Dipan19/my_awesome_qa_model | null | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:41:26+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
| Dipan19/my\_awesome\_qa\_model
==============================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 1.6681
* Validation Loss: 1.9723
* Epoch: 2
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 500, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.38.2
* TensorFlow 2.15.0
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # gymnasium also works with minor API changes

# load_from_hub is the course helper sketched below this snippet
model = load_from_hub(repo_id="YulaySuyundukov/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
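
`load_from_hub` is not part of a published package; it is the small helper used in the Hugging Face Deep RL course notebooks. A sketch of it under that assumption (the pickle is expected to hold a dict with keys like `qtable` and `env_id`):

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-table dict from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```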
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]} | YulaySuyundukov/Taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-16T14:41:50+00:00 | [] | [] | TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing1 Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3 .
## Usage
| [
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] | [
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [amazingvince/Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B) as a base.
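
For context, DARE (the first paper linked above) keeps each fine-tuning delta with probability equal to the configured `density` and rescales the survivors so the merge is unbiased in expectation, while TIES resolves sign conflicts between the merged deltas. With `density: 0.53` as in the config below, each retained delta is scaled by 1/0.53:

$$\hat{\delta}_i = \frac{m_i\,\delta_i}{p},\qquad m_i \sim \mathrm{Bernoulli}(p),\qquad \mathbb{E}[\hat{\delta}_i] = \delta_i,\quad p = 0.53$$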
### Models Merged
The following models were included in the merge:
* [MaziyarPanahi/Calme-7B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.9)
* [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp)
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MaziyarPanahi/Calme-7B-Instruct-v0.9
parameters:
density: 0.53
weight: 0.33333333
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
parameters:
density: 0.53
weight: 0.33333333
- model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
parameters:
density: 0.53
weight: 0.33333333
merge_method: dare_ties
base_model: amazingvince/Not-WizardLM-2-7B
parameters:
normalize: false
int8_mask: true
dtype: float16
```
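
To reproduce a merge like this locally, mergekit's CLI takes the YAML above plus an output path (assuming the YAML is saved as `config.yaml`; flags may vary across mergekit versions):

```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model  # add --cuda to run the merge on GPU
```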
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["MaziyarPanahi/Calme-7B-Instruct-v0.9", "amazingvince/Not-WizardLM-2-7B", "Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp", "cognitivecomputations/dolphin-2.8-mistral-7b-v02"]} | mergekit-community/mergekit-dare_ties-ymiqjtz | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.9",
"base_model:amazingvince/Not-WizardLM-2-7B",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:42:02+00:00 | [
"2311.03099",
"2306.01708"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2311.03099 #arxiv-2306.01708 #base_model-MaziyarPanahi/Calme-7B-Instruct-v0.9 #base_model-amazingvince/Not-WizardLM-2-7B #base_model-Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the DARE TIES merge method using amazingvince/Not-WizardLM-2-7B as a base.
### Models Merged
The following models were included in the merge:
* MaziyarPanahi/Calme-7B-Instruct-v0.9
* Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
* cognitivecomputations/dolphin-2.8-mistral-7b-v02
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using amazingvince/Not-WizardLM-2-7B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* MaziyarPanahi/Calme-7B-Instruct-v0.9\n* Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2311.03099 #arxiv-2306.01708 #base_model-MaziyarPanahi/Calme-7B-Instruct-v0.9 #base_model-amazingvince/Not-WizardLM-2-7B #base_model-Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using amazingvince/Not-WizardLM-2-7B as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* MaziyarPanahi/Calme-7B-Instruct-v0.9\n* Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Grayx/sad_pepe_31 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T14:42:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | redmojo7/gemma-2b-it-finetune-palo-alto-network-auto-1000 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:43:09+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_16384_512_56M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6760
- F1 Score: 0.5840
- Accuracy: 0.5871
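
No inference snippet is provided; a hedged sketch for attaching this adapter to its base model (the binary classification head and the `trust_remote_code` flag are assumptions about the base repo, not facts from this card):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_16384_512_56M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # binary histone-mark label assumed
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned PEFT weights
```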
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6799 | 14.29 | 200 | 0.6625 | 0.6136 | 0.6170 |
| 0.6028 | 28.57 | 400 | 0.6985 | 0.6013 | 0.6021 |
| 0.5548 | 42.86 | 600 | 0.7295 | 0.5944 | 0.5950 |
| 0.5176 | 57.14 | 800 | 0.7481 | 0.5936 | 0.5953 |
| 0.4818 | 71.43 | 1000 | 0.8237 | 0.5855 | 0.5850 |
| 0.4539 | 85.71 | 1200 | 0.8276 | 0.5905 | 0.5909 |
| 0.4307 | 100.0 | 1400 | 0.8661 | 0.5784 | 0.5786 |
| 0.4094 | 114.29 | 1600 | 0.8821 | 0.5820 | 0.5856 |
| 0.389 | 128.57 | 1800 | 0.9149 | 0.5856 | 0.5880 |
| 0.3706 | 142.86 | 2000 | 0.9267 | 0.5896 | 0.5891 |
| 0.3538 | 157.14 | 2200 | 0.9702 | 0.5902 | 0.5900 |
| 0.3393 | 171.43 | 2400 | 0.9930 | 0.5838 | 0.5836 |
| 0.3214 | 185.71 | 2600 | 1.0478 | 0.5836 | 0.5839 |
| 0.308 | 200.0 | 2800 | 1.0154 | 0.5806 | 0.5836 |
| 0.2977 | 214.29 | 3000 | 1.0711 | 0.5832 | 0.5830 |
| 0.2838 | 228.57 | 3200 | 1.0215 | 0.5928 | 0.5924 |
| 0.2734 | 242.86 | 3400 | 1.0757 | 0.5843 | 0.5842 |
| 0.2634 | 257.14 | 3600 | 1.0865 | 0.5801 | 0.5812 |
| 0.2507 | 271.43 | 3800 | 1.0849 | 0.5844 | 0.5853 |
| 0.2431 | 285.71 | 4000 | 1.0880 | 0.5834 | 0.5830 |
| 0.234 | 300.0 | 4200 | 1.1235 | 0.5857 | 0.5853 |
| 0.2281 | 314.29 | 4400 | 1.1215 | 0.5840 | 0.5842 |
| 0.2198 | 328.57 | 4600 | 1.1462 | 0.5819 | 0.5842 |
| 0.211 | 342.86 | 4800 | 1.1778 | 0.5828 | 0.5836 |
| 0.2062 | 357.14 | 5000 | 1.1501 | 0.5819 | 0.5836 |
| 0.1981 | 371.43 | 5200 | 1.1743 | 0.5865 | 0.5862 |
| 0.1924 | 385.71 | 5400 | 1.1945 | 0.5940 | 0.5944 |
| 0.1873 | 400.0 | 5600 | 1.1806 | 0.5925 | 0.5921 |
| 0.1817 | 414.29 | 5800 | 1.2352 | 0.5914 | 0.5909 |
| 0.1773 | 428.57 | 6000 | 1.2155 | 0.5941 | 0.5941 |
| 0.1719 | 442.86 | 6200 | 1.2319 | 0.5910 | 0.5909 |
| 0.1688 | 457.14 | 6400 | 1.2294 | 0.5890 | 0.5886 |
| 0.1634 | 471.43 | 6600 | 1.2611 | 0.5878 | 0.5880 |
| 0.1612 | 485.71 | 6800 | 1.2509 | 0.5903 | 0.5903 |
| 0.1572 | 500.0 | 7000 | 1.2765 | 0.5894 | 0.5897 |
| 0.1544 | 514.29 | 7200 | 1.2683 | 0.5908 | 0.5906 |
| 0.1518 | 528.57 | 7400 | 1.2918 | 0.5915 | 0.5915 |
| 0.1486 | 542.86 | 7600 | 1.3000 | 0.5891 | 0.5889 |
| 0.1456 | 557.14 | 7800 | 1.2692 | 0.5886 | 0.5886 |
| 0.1421 | 571.43 | 8000 | 1.3048 | 0.5884 | 0.5886 |
| 0.1397 | 585.71 | 8200 | 1.3104 | 0.5902 | 0.5906 |
| 0.1378 | 600.0 | 8400 | 1.3135 | 0.5892 | 0.5889 |
| 0.1377 | 614.29 | 8600 | 1.3035 | 0.5851 | 0.5848 |
| 0.1367 | 628.57 | 8800 | 1.2910 | 0.5882 | 0.5880 |
| 0.1338 | 642.86 | 9000 | 1.3204 | 0.5876 | 0.5880 |
| 0.1322 | 657.14 | 9200 | 1.3312 | 0.5888 | 0.5889 |
| 0.1304 | 671.43 | 9400 | 1.3203 | 0.5899 | 0.5903 |
| 0.1291 | 685.71 | 9600 | 1.3370 | 0.5895 | 0.5897 |
| 0.1277 | 700.0 | 9800 | 1.3259 | 0.5899 | 0.5900 |
| 0.1288 | 714.29 | 10000 | 1.3274 | 0.5893 | 0.5894 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_16384_512_56M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_16384_512_56M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-16T14:43:32+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H4ac-seqsight\_16384\_512\_56M-L32\_all
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6760
* F1 Score: 0.5840
* Accuracy: 0.5871
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-2024-04-16-16-45-Cc29D
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the generator dataset.
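
A minimal usage sketch (assuming `AutoPeftModelForCausalLM` can resolve the base model from the adapter config, which is the usual PEFT behaviour):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("frenkd/phi-1_5-2024-04-16-16-45-Cc29D")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```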
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "microsoft/phi-1_5", "model-index": [{"name": "phi-1_5-2024-04-16-16-45-Cc29D", "results": []}]} | frenkd/phi-1_5-2024-04-16-16-45-Cc29D | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2024-04-16T14:45:15+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-1_5 #license-mit #region-us
|
# phi-1_5-2024-04-16-16-45-Cc29D
This model is a fine-tuned version of microsoft/phi-1_5 on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# phi-1_5-2024-04-16-16-45-Cc29D\n\nThis model is a fine-tuned version of microsoft/phi-1_5 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-1_5 #license-mit #region-us \n",
"# phi-1_5-2024-04-16-16-45-Cc29D\n\nThis model is a fine-tuned version of microsoft/phi-1_5 on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | null |
# Japanese-WizardLM2-ChatV-7B-GGUF
GGUF conversion of "Japanese-WizardLM2-ChatV-7B"
This model, Japanese-WizardLM2-ChatV-7B, is based on "chatntq-ja-7b-v1.0": a ChatVector, created by subtracting "Mistral-7B-v0.1" from "WizardLM-2-7b", was added at a factor of 1.0.
We aimed to add the high performance of WizardLM-2 to the Japanese language capability of ChatNTQ.
このモデル、Japanese-WizardLM2-ChatV-7Bは、”chatntq-ja-7b-v1.0”をベースに、"WizardLM-2-7b"から"Mistral-7B-v0.1"を差し引いて作ったChatVectorを1.0倍で足しました。
ChatNTQの日本語能力にWizardLM-2の性能の高さが加わる事を狙いました。
### Performance
<table>
<tr>
<th>Model</th>
<th><a href="https://huggingface.co/CohereForAI/c4ai-command-r-plus">c4ai-command-r-plus(Cohere API)</a></th>
<th><a href="https://huggingface.co/TFMC/Japanese-Starling-ChatV-7B-GGUF">JA-Starling-ChatV-7B-GGUF(fp16)</th>
<th>JA-WizardLM2-ChatV-7B-GGUF (This model)(fp16)</th>
<th><a href="https://huggingface.co/TFMC/ChatNTQ-JA-7b-v1.0-GGUF">ChatNTQ-JA-7b-v1.0-GGUF(Q8_0)</a></th>
</tr>
<tr>
<td>Parameters</td>
<td>104B</td>
<td>7B(Mistral)</td>
<td>7B(Mistral)</td>
<td>7B(Mistral)</td>
</tr>
<tr>
<td>ELYZAtasks100<br>average score</td>
<td>4.04</td>
<td>3.77</td>
<td>3.40</td>
<td>2.74</td>
</tr>
</table>
This score was evaluated by Cohere API command-r-plus using the "<a href="https://huggingface.co/datasets/elyza/ELYZA-tasks-100">ELYZA-tasks-100</a>" benchmark for instruction-tuned Japanese models.
このスコアはinstruction-tuningを行った日本語モデルのベンチマーク「ELYZA-tasks-100」を使い、Cohere APIのcommand-r-plusにより評価させたものです。
### Prompt Template
- Llama-2-Chat
<pre><code>[INST] <<SYS>>\nあなたは役に立つアシスタントです。\n<</SYS>>\n\n{prompt} [/INST]</code></pre>
| {"language": ["ja"], "license": "apache-2.0", "tags": ["Mistral"], "pipeline_tag": "text-generation"} | umiyuki/Japanese-WizardLM2-ChatV-7B-GGUF | null | [
"gguf",
"Mistral",
"text-generation",
"ja",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T14:45:30+00:00 | [] | [
"ja"
] | TAGS
#gguf #Mistral #text-generation #ja #license-apache-2.0 #region-us
| Japanese-WizardLM2-ChatV-7B-GGUF
================================
GGUF conversion of "Japanese-WizardLM2-ChatV-7B"
This model, Japanese-WizardLM2-ChatV-7B, is based on "chatntq-ja-7b-v1.0": a ChatVector, created by subtracting "Mistral-7B-v0.1" from "WizardLM-2-7b", was added at a factor of 1.0.
We aimed to add the high performance of WizardLM-2 to the Japanese language capability of ChatNTQ.
このモデル、Japanese-WizardLM2-ChatV-7Bは、”chatntq-ja-7b-v1.0”をベースに、"WizardLM-2-7b"から"Mistral-7B-v0.1"を差し引いて作ったChatVectorを1.0倍で足しました。
ChatNTQの日本語能力にWizardLM-2の性能の高さが加わる事を狙いました。
### Performance
This score was evaluated by Cohere API command-r-plus using the "ELYZA-tasks-100" benchmark for instruction-tuned Japanese models.
このスコアはinstruction-tuningを行った日本語モデルのベンチマーク「ELYZA-tasks-100」を使い、Cohere APIのcommand-r-plusにより評価させたものです。
### Prompt Template
* Llama-2-Chat
```
[INST] <<SYS>>\nあなたは役に立つアシスタントです。\n<</SYS>>\n\n{prompt} [/INST]
```
| [
"### Performance\n\n\n\nThis score was evaluated by Cohere API command-r-plus using the \"<a href=\"URL a Japanese model with instruction-tuning.\n\n\nこのスコアはinstruction-tuningを行った日本語モデルのベンチマーク「ELYZA-tasks-100」を使い、Cohere APIのcommand-r-plusにより評価させたものです。",
"### Prompt Template\n\n\n* Llama-2-Chat\n\n\n\n```\n[INST] <<SYS>>\\nあなたは役に立つアシスタントです。\\n<</SYS>>\\n\\n{prompt} [/INST]\n```"
] | [
"TAGS\n#gguf #Mistral #text-generation #ja #license-apache-2.0 #region-us \n",
"### Performance\n\n\n\nThis score was evaluated by Cohere API command-r-plus using the \"<a href=\"URL a Japanese model with instruction-tuning.\n\n\nこのスコアはinstruction-tuningを行った日本語モデルのベンチマーク「ELYZA-tasks-100」を使い、Cohere APIのcommand-r-plusにより評価させたものです。",
"### Prompt Template\n\n\n* Llama-2-Chat\n\n\n\n```\n[INST] <<SYS>>\\nあなたは役に立つアシスタントです。\\n<</SYS>>\\n\\n{prompt} [/INST]\n```"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
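
SLERP (spherical linear interpolation) blends two flattened weight tensors along the great circle between them rather than averaging them linearly; with $\Omega$ the angle between tensors $p$ and $q$:

$$\mathrm{slerp}(p,q;t)=\frac{\sin\big((1-t)\Omega\big)}{\sin\Omega}\,p+\frac{\sin(t\Omega)}{\sin\Omega}\,q,\qquad \cos\Omega=\frac{\langle p,q\rangle}{\lVert p\rVert\,\lVert q\rVert}$$

Note that the offset layer ranges in the configuration below ([0, 27] against [1, 28]) blend each layer with its successor, which is how the 28-layer source is reduced to 27 layers.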
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-28-layer](https://huggingface.co/Citaman/command-r-28-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-28-layer
layer_range: [0, 27]
- model: Citaman/command-r-28-layer
layer_range: [1, 28]
merge_method: slerp
base_model: Citaman/command-r-28-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-28-layer"]} | Citaman/command-r-27-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-28-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T14:47:40+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-28-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-28-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-28-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-28-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-28-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | mlx |
# mlx-community/nanoLLaVA
This model was converted to MLX format from [`qnguyen3/nanoLLaVA`](https://huggingface.co/qnguyen3/nanoLLaVA) using mlx-vlm version **0.0.3**.
Refer to the [original model card](https://huggingface.co/qnguyen3/nanoLLaVA) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/nanoLLaVA \
--prompt "what are these?" --image "http://images.cocodataset.org/val2017/000000039769.jpg" \
--max-tokens 100 --temp 0.0
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["llava", "multimodal", "qwen", "mlx"]} | mlx-community/nanoLLaVA | null | [
"mlx",
"safetensors",
"llava-qwen2",
"llava",
"multimodal",
"qwen",
"custom_code",
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T14:47:42+00:00 | [] | [
"en"
] | TAGS
#mlx #safetensors #llava-qwen2 #llava #multimodal #qwen #custom_code #en #license-apache-2.0 #region-us
|
# mlx-community/nanoLLaVA
This model was converted to MLX format from ['qnguyen3/nanoLLaVA']() using mlx-vlm version 0.0.3.
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mlx-community/nanoLLaVA\nThis model was converted to MLX format from ['qnguyen3/nanoLLaVA']() using mlx-vllm version 0.0.3.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #llava-qwen2 #llava #multimodal #qwen #custom_code #en #license-apache-2.0 #region-us \n",
"# mlx-community/nanoLLaVA\nThis model was converted to MLX format from ['qnguyen3/nanoLLaVA']() using mlx-vllm version 0.0.3.\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |