| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
hansanguw/HSCho_test
|
hansanguw
| 2023-07-17T01:26:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:26:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
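For reference, a minimal sketch of how this config could be expressed with `transformers.BitsAndBytesConfig` when reloading a base model for this adapter; the base-model id below is a hypothetical placeholder, since the card does not name it.
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the listed settings; the bnb_4bit_* fields above are defaults
# and stay inactive because load_in_4bit is False.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # hypothetical placeholder; not stated in this card
    quantization_config=bnb_config,
    device_map="auto",
)
```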
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e7_s6789_v3
|
KingKazma
| 2023-07-17T01:12:08Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:12:07Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e6_s6789_v3
|
KingKazma
| 2023-07-17T01:05:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:05:08Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e5_s6789_v3
|
KingKazma
| 2023-07-17T00:58:09Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:58:08Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
timjwhite/poca-SoccerTwos
|
timjwhite
| 2023-07-17T00:56:31Z | 66 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-17T00:45:50Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: timjwhite/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
arashaomrani/Email
|
arashaomrani
| 2023-07-17T00:46:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:45:57Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e3_s6789_v3
|
KingKazma
| 2023-07-17T00:44:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:44:11Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e2_s6789_v3
|
KingKazma
| 2023-07-17T00:37:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:37:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e1_s6789_v3
|
KingKazma
| 2023-07-17T00:30:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:30:13Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
acasany/rare-puppers
|
acasany
| 2023-07-17T00:27:57Z | 197 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-17T00:27:47Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8876404762268066
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### husky

#### samoyed

#### shiba inu

|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e0_s6789_v3
|
KingKazma
| 2023-07-17T00:23:15Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:23:14Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
lucostiguy11/dreambooth_if
|
lucostiguy11
| 2023-07-17T00:21:21Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"if",
"if-diffusers",
"text-to-image",
"dreambooth",
"base_model:DeepFloyd/IF-I-XL-v1.0",
"base_model:finetune:DeepFloyd/IF-I-XL-v1.0",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:IFPipeline",
"region:us"
] |
text-to-image
| 2023-07-16T23:29:26Z |
---
license: creativeml-openrail-m
base_model: DeepFloyd/IF-I-XL-v1.0
instance_prompt: a photo of sks dog
tags:
- if
- if-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - lucostiguy11/dreambooth_if
This is a DreamBooth model derived from DeepFloyd/IF-I-XL-v1.0. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.




DreamBooth for the text encoder was enabled: False.
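A minimal inference sketch, assuming the repo loads directly as a stage-I IF pipeline via `diffusers.DiffusionPipeline`; the full DeepFloyd IF cascade normally adds super-resolution stages, which are omitted here.
```python
import torch
from diffusers import DiffusionPipeline

# Load the DreamBooth-tuned stage-I pipeline from this repo.
pipe = DiffusionPipeline.from_pretrained(
    "lucostiguy11/dreambooth_if", torch_dtype=torch.float16
).to("cuda")

# Prompt in the style of the instance prompt the weights were trained on.
image = pipe("A photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```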
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e-1_s6789_v3
|
KingKazma
| 2023-07-17T00:16:16Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:16:15Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e7_s6789_v3
|
KingKazma
| 2023-07-17T00:09:01Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:09:00Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
dsmonk/xgen-7b-tuned-alpaca
|
dsmonk
| 2023-07-17T00:04:40Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:Salesforce/xgen-7b-8k-base",
"base_model:finetune:Salesforce/xgen-7b-8k-base",
"license:apache-2.0",
"region:us"
] | null | 2023-07-16T21:52:46Z |
---
license: apache-2.0
base_model: Salesforce/xgen-7b-8k-base
tags:
- generated_from_trainer
model-index:
- name: xgen-7b-tuned-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xgen-7b-tuned-alpaca
This model is a fine-tuned version of [Salesforce/xgen-7b-8k-base](https://huggingface.co/Salesforce/xgen-7b-8k-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
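A minimal sketch of `transformers.TrainingArguments` matching the hyperparameters listed above, assuming the standard `Trainer` API; model and dataset setup are omitted.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xgen-7b-tuned-alpaca",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,          # Adam betas/epsilon are the Trainer defaults,
    adam_beta2=0.999,        # shown explicitly to mirror the list above
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```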
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ByteExplorer/Reinforce-CartPole-8
|
ByteExplorer
| 2023-07-17T00:04:03Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T00:03:54Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-8
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e6_s6789_v3
|
KingKazma
| 2023-07-17T00:01:27Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:01:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e7_s55555_v3
|
KingKazma
| 2023-07-16T23:55:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:55:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e6_s55555_v3
|
KingKazma
| 2023-07-16T23:48:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:48:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e3_s6789_v3
|
KingKazma
| 2023-07-16T23:38:46Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:38:44Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
abgoswam/bloom_marketmail_32
|
abgoswam
| 2023-07-16T23:34:10Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:34:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e2_s55555_v3
|
KingKazma
| 2023-07-16T23:20:02Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:20:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e-1_s6789_v3
|
KingKazma
| 2023-07-16T23:08:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T23:08:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
ailabturkiye/wtcn
|
ailabturkiye
| 2023-07-16T23:06:15Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T23:04:16Z |
---
license: openrail
language:
- tr
tags:
- music
---
|
NasimB/aochildes-guten-log-rarity-all-no-cut
|
NasimB
| 2023-07-16T22:59:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T20:50:33Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: aochildes-guten-log-rarity-all-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aochildes-guten-log-rarity-all-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7164 | 0.29 | 500 | 5.6323 |
| 5.3447 | 0.59 | 1000 | 5.2052 |
| 5.0011 | 0.88 | 1500 | 4.9552 |
| 4.7272 | 1.17 | 2000 | 4.8144 |
| 4.5727 | 1.47 | 2500 | 4.6937 |
| 4.4591 | 1.76 | 3000 | 4.5928 |
| 4.3272 | 2.05 | 3500 | 4.5232 |
| 4.1423 | 2.35 | 4000 | 4.4760 |
| 4.1152 | 2.64 | 4500 | 4.4205 |
| 4.0725 | 2.93 | 5000 | 4.3703 |
| 3.8638 | 3.23 | 5500 | 4.3718 |
| 3.8167 | 3.52 | 6000 | 4.3411 |
| 3.7993 | 3.81 | 6500 | 4.3167 |
| 3.6795 | 4.11 | 7000 | 4.3235 |
| 3.5285 | 4.4 | 7500 | 4.3099 |
| 3.5218 | 4.69 | 8000 | 4.3012 |
| 3.5096 | 4.99 | 8500 | 4.2923 |
| 3.3413 | 5.28 | 9000 | 4.3116 |
| 3.3298 | 5.57 | 9500 | 4.3113 |
| 3.3314 | 5.87 | 10000 | 4.3111 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e-1_s55555_v3
|
KingKazma
| 2023-07-16T22:58:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:58:56Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e7_s108_v3
|
KingKazma
| 2023-07-16T22:35:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:34:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Chickenfish/Jennie
|
Chickenfish
| 2023-07-16T22:30:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T01:54:48Z |
---
license: creativeml-openrail-m
---
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e6_s108_v3
|
KingKazma
| 2023-07-16T22:28:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:28:00Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e5_s108_v3
|
KingKazma
| 2023-07-16T22:20:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:20:58Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
jeremyvictor/t5-v1_1-large-fce-e8-b16
|
jeremyvictor
| 2023-07-16T22:19:24Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-16T15:25:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-v1_1-large-fce-e8-b16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-v1_1-large-fce-e8-b16
This model is a fine-tuned version of [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3349
- Rouge1: 86.6648
- Rouge2: 79.4505
- Rougel: 86.1654
- Rougelsum: 86.1549
- Gen Len: 14.9105
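A minimal inference sketch using the standard `transformers` seq2seq API; the input sentence is made up, and the exact prompt format expected by this checkpoint is not documented in the card (the "fce" in the name suggests grammatical error correction, but that is an assumption).
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jeremyvictor/t5-v1_1-large-fce-e8-b16")
model = AutoModelForSeq2SeqLM.from_pretrained("jeremyvictor/t5-v1_1-large-fce-e8-b16")

# Hypothetical input; the card does not document the expected input format.
inputs = tokenizer("She go to school every day.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```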
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.325 | 0.06 | 100 | 0.7775 | 76.9422 | 69.1942 | 76.3689 | 76.3852 | 14.7545 |
| 0.9422 | 0.11 | 200 | 0.4327 | 85.6522 | 77.1791 | 85.0843 | 85.0849 | 15.0315 |
| 0.535 | 0.17 | 300 | 0.4081 | 85.8265 | 77.0897 | 85.2547 | 85.2421 | 14.8745 |
| 0.5003 | 0.23 | 400 | 0.4104 | 85.847 | 77.3884 | 85.3678 | 85.3536 | 14.8257 |
| 0.4734 | 0.28 | 500 | 0.3830 | 86.3501 | 78.2006 | 85.824 | 85.8541 | 14.8613 |
| 0.4439 | 0.34 | 600 | 0.3652 | 86.5106 | 78.4301 | 85.9794 | 85.9871 | 14.8644 |
| 0.4399 | 0.4 | 700 | 0.3656 | 86.3955 | 78.2086 | 85.8592 | 85.8785 | 14.8562 |
| 0.4259 | 0.45 | 800 | 0.3925 | 85.6654 | 77.0925 | 85.1468 | 85.1547 | 14.9142 |
| 0.4092 | 0.51 | 900 | 0.3720 | 86.317 | 78.3141 | 85.8151 | 85.7907 | 14.8859 |
| 0.4143 | 0.56 | 1000 | 0.3761 | 86.5432 | 78.4572 | 85.9424 | 85.9234 | 14.8763 |
| 0.4184 | 0.62 | 1100 | 0.3487 | 86.4053 | 78.5526 | 85.8508 | 85.8745 | 14.8909 |
| 0.4025 | 0.68 | 1200 | 0.3556 | 86.2418 | 78.2845 | 85.7291 | 85.7379 | 14.8603 |
| 0.4014 | 0.73 | 1300 | 0.3657 | 86.6544 | 78.9722 | 86.1314 | 86.1446 | 14.8257 |
| 0.379 | 0.79 | 1400 | 0.3512 | 86.6622 | 79.1939 | 86.1521 | 86.1383 | 14.8955 |
| 0.3898 | 0.85 | 1500 | 0.3517 | 86.1483 | 78.4144 | 85.5986 | 85.6256 | 14.8955 |
| 0.373 | 0.9 | 1600 | 0.3565 | 86.6775 | 79.0902 | 86.1475 | 86.156 | 14.8946 |
| 0.3685 | 0.96 | 1700 | 0.3500 | 86.8048 | 79.2231 | 86.2842 | 86.2602 | 14.8658 |
| 0.3353 | 1.02 | 1800 | 0.3547 | 86.7966 | 79.1526 | 86.2624 | 86.2769 | 14.8895 |
| 0.2323 | 1.07 | 1900 | 0.3529 | 86.6715 | 79.0832 | 86.1451 | 86.143 | 14.9119 |
| 0.2458 | 1.13 | 2000 | 0.3699 | 86.9553 | 79.3124 | 86.3906 | 86.4162 | 14.8987 |
| 0.2349 | 1.19 | 2100 | 0.3640 | 86.4161 | 78.4111 | 85.8783 | 85.8807 | 14.9420 |
| 0.2358 | 1.24 | 2200 | 0.3598 | 86.7842 | 79.1199 | 86.2164 | 86.2259 | 14.8932 |
| 0.2229 | 1.3 | 2300 | 0.3610 | 86.7032 | 79.0013 | 86.168 | 86.1807 | 14.8827 |
| 0.2502 | 1.35 | 2400 | 0.3527 | 86.5423 | 78.9113 | 86.0423 | 86.0465 | 14.8946 |
| 0.2466 | 1.41 | 2500 | 0.3575 | 86.512 | 78.7998 | 85.9795 | 85.9899 | 14.9142 |
| 0.2457 | 1.47 | 2600 | 0.3463 | 86.5376 | 78.7642 | 86.0019 | 85.993 | 14.8964 |
| 0.2429 | 1.52 | 2700 | 0.3480 | 86.5911 | 78.9802 | 86.0235 | 86.0303 | 14.9169 |
| 0.2657 | 1.58 | 2800 | 0.3423 | 86.6139 | 79.1659 | 86.0999 | 86.1034 | 14.8905 |
| 0.2542 | 1.64 | 2900 | 0.3439 | 86.4731 | 78.8656 | 86.0285 | 86.0336 | 14.8955 |
| 0.2529 | 1.69 | 3000 | 0.3491 | 86.7686 | 79.2799 | 86.2783 | 86.2663 | 14.8891 |
| 0.2475 | 1.75 | 3100 | 0.3460 | 86.0511 | 77.837 | 85.5557 | 85.56 | 14.8868 |
| 0.2472 | 1.81 | 3200 | 0.3375 | 86.6711 | 79.1718 | 86.1627 | 86.1402 | 14.8809 |
| 0.2432 | 1.86 | 3300 | 0.3349 | 86.6648 | 79.4505 | 86.1654 | 86.1549 | 14.9105 |
| 0.2467 | 1.92 | 3400 | 0.3383 | 86.867 | 79.7251 | 86.3823 | 86.3811 | 14.9014 |
| 0.2416 | 1.98 | 3500 | 0.3404 | 86.8577 | 79.4128 | 86.3474 | 86.3386 | 14.8909 |
| 0.1816 | 2.03 | 3600 | 0.3590 | 86.7414 | 79.4138 | 86.2395 | 86.2415 | 14.9283 |
| 0.1344 | 2.09 | 3700 | 0.3806 | 86.9318 | 79.5175 | 86.4098 | 86.4209 | 14.9238 |
| 0.134 | 2.14 | 3800 | 0.3704 | 86.733 | 79.2709 | 86.2066 | 86.2083 | 14.9379 |
| 0.1301 | 2.2 | 3900 | 0.3788 | 86.7622 | 79.4039 | 86.2608 | 86.2514 | 14.9133 |
| 0.1417 | 2.26 | 4000 | 0.3658 | 87.0002 | 79.8067 | 86.4663 | 86.4604 | 14.9105 |
| 0.1256 | 2.31 | 4100 | 0.3728 | 86.6691 | 79.3081 | 86.1154 | 86.1184 | 14.9119 |
| 0.1393 | 2.37 | 4200 | 0.3666 | 86.7525 | 79.3901 | 86.223 | 86.2348 | 14.9046 |
| 0.1542 | 2.43 | 4300 | 0.3740 | 86.6779 | 79.5336 | 86.1667 | 86.1716 | 14.9283 |
| 0.133 | 2.48 | 4400 | 0.3790 | 86.7692 | 79.6713 | 86.2335 | 86.2394 | 14.9457 |
| 0.1389 | 2.54 | 4500 | 0.3717 | 86.4853 | 79.3114 | 85.9253 | 85.9128 | 14.9434 |
| 0.1489 | 2.6 | 4600 | 0.3724 | 86.2107 | 78.63 | 85.6539 | 85.6792 | 14.9311 |
| 0.1522 | 2.65 | 4700 | 0.3647 | 86.8659 | 79.8 | 86.3545 | 86.3676 | 14.9160 |
| 0.1439 | 2.71 | 4800 | 0.3672 | 86.0554 | 78.1382 | 85.5587 | 85.5362 | 14.9297 |
| 0.1406 | 2.77 | 4900 | 0.3637 | 86.4054 | 78.9406 | 85.8958 | 85.9036 | 14.9069 |
| 0.1522 | 2.82 | 5000 | 0.3715 | 86.7402 | 79.6515 | 86.2414 | 86.2416 | 14.9201 |
| 0.1577 | 2.88 | 5100 | 0.3531 | 86.5905 | 79.2319 | 86.0746 | 86.0661 | 14.9174 |
| 0.1427 | 2.93 | 5200 | 0.3693 | 86.4955 | 79.0202 | 86.0034 | 85.9923 | 14.9014 |
| 0.1489 | 2.99 | 5300 | 0.3671 | 86.6285 | 79.2982 | 86.1429 | 86.1239 | 14.9366 |
| 0.0874 | 3.05 | 5400 | 0.4117 | 86.7939 | 79.6444 | 86.2987 | 86.292 | 14.9311 |
| 0.0824 | 3.1 | 5500 | 0.4056 | 86.7504 | 79.5265 | 86.2525 | 86.2509 | 14.9069 |
| 0.0815 | 3.16 | 5600 | 0.4064 | 86.9102 | 79.8072 | 86.4 | 86.3798 | 14.9188 |
| 0.0761 | 3.22 | 5700 | 0.4061 | 86.7759 | 79.4944 | 86.2642 | 86.2638 | 14.9156 |
| 0.0858 | 3.27 | 5800 | 0.4104 | 86.9783 | 79.7005 | 86.4405 | 86.4279 | 14.9206 |
| 0.0774 | 3.33 | 5900 | 0.4043 | 86.7749 | 79.4813 | 86.2355 | 86.2441 | 14.9010 |
| 0.0841 | 3.39 | 6000 | 0.4033 | 86.915 | 79.7145 | 86.3878 | 86.3809 | 14.9060 |
| 0.0885 | 3.44 | 6100 | 0.4066 | 86.761 | 79.3294 | 86.202 | 86.2041 | 14.8973 |
| 0.0794 | 3.5 | 6200 | 0.3987 | 86.699 | 79.2133 | 86.1431 | 86.1571 | 14.9083 |
| 0.0845 | 3.56 | 6300 | 0.4225 | 86.8629 | 79.4052 | 86.3102 | 86.32 | 14.9169 |
| 0.0869 | 3.61 | 6400 | 0.4033 | 86.8748 | 79.5928 | 86.3421 | 86.3564 | 14.8987 |
| 0.0791 | 3.67 | 6500 | 0.4055 | 86.9491 | 79.6876 | 86.4205 | 86.4281 | 14.9115 |
| 0.0849 | 3.72 | 6600 | 0.4068 | 86.7855 | 79.4848 | 86.2791 | 86.2945 | 14.9192 |
| 0.0865 | 3.78 | 6700 | 0.4069 | 86.7864 | 79.5128 | 86.2844 | 86.3027 | 14.9092 |
| 0.086 | 3.84 | 6800 | 0.3989 | 86.9556 | 79.6203 | 86.4463 | 86.4673 | 14.9083 |
| 0.0811 | 3.89 | 6900 | 0.3913 | 86.9815 | 79.7108 | 86.4913 | 86.4905 | 14.9073 |
| 0.0812 | 3.95 | 7000 | 0.4022 | 86.819 | 79.5024 | 86.313 | 86.336 | 14.9261 |
| 0.087 | 4.01 | 7100 | 0.4238 | 87.0628 | 79.8276 | 86.5385 | 86.5444 | 14.9133 |
| 0.0484 | 4.06 | 7200 | 0.4301 | 87.0455 | 79.7775 | 86.5274 | 86.5298 | 14.9023 |
| 0.0481 | 4.12 | 7300 | 0.4715 | 87.0629 | 79.9823 | 86.5676 | 86.5615 | 14.9073 |
| 0.0522 | 4.18 | 7400 | 0.4379 | 86.983 | 79.7011 | 86.4659 | 86.4906 | 14.9174 |
| 0.0463 | 4.23 | 7500 | 0.4574 | 87.047 | 79.6937 | 86.5243 | 86.5252 | 14.9133 |
| 0.0559 | 4.29 | 7600 | 0.4275 | 86.8511 | 79.4707 | 86.3482 | 86.3463 | 14.9270 |
| 0.0484 | 4.35 | 7700 | 0.4426 | 86.8238 | 79.4779 | 86.3242 | 86.3224 | 14.9178 |
| 0.0468 | 4.4 | 7800 | 0.4565 | 86.9331 | 79.7622 | 86.4253 | 86.433 | 14.9174 |
| 0.0501 | 4.46 | 7900 | 0.4506 | 86.884 | 79.7917 | 86.4025 | 86.4082 | 14.9160 |
| 0.0538 | 4.51 | 8000 | 0.4290 | 86.95 | 79.7812 | 86.4425 | 86.4387 | 14.9092 |
| 0.0499 | 4.57 | 8100 | 0.4366 | 87.1034 | 80.0115 | 86.6029 | 86.6075 | 14.9137 |
| 0.051 | 4.63 | 8200 | 0.4472 | 86.8904 | 79.6413 | 86.4313 | 86.4236 | 14.9078 |
| 0.0546 | 4.68 | 8300 | 0.4299 | 86.8704 | 79.6621 | 86.3474 | 86.3699 | 14.9055 |
| 0.049 | 4.74 | 8400 | 0.4601 | 87.0006 | 79.7754 | 86.4831 | 86.484 | 14.9073 |
| 0.0474 | 4.8 | 8500 | 0.4481 | 86.9629 | 79.7888 | 86.452 | 86.4605 | 14.9069 |
| 0.0509 | 4.85 | 8600 | 0.4329 | 86.9177 | 79.6544 | 86.4178 | 86.4215 | 14.9124 |
| 0.0521 | 4.91 | 8700 | 0.4323 | 86.8574 | 79.6029 | 86.3347 | 86.3477 | 14.9169 |
| 0.0458 | 4.97 | 8800 | 0.4563 | 87.0021 | 79.754 | 86.4522 | 86.4517 | 14.9105 |
| 0.0411 | 5.02 | 8900 | 0.4707 | 86.884 | 79.6339 | 86.3403 | 86.3413 | 14.9178 |
| 0.0283 | 5.08 | 9000 | 0.4809 | 86.9403 | 79.8934 | 86.4149 | 86.4145 | 14.9183 |
| 0.029 | 5.14 | 9100 | 0.4799 | 86.8942 | 79.7148 | 86.3502 | 86.3571 | 14.9064 |
| 0.0268 | 5.19 | 9200 | 0.4910 | 86.9841 | 79.8403 | 86.4605 | 86.4683 | 14.9233 |
| 0.0294 | 5.25 | 9300 | 0.4838 | 86.9494 | 79.9215 | 86.4508 | 86.4474 | 14.9151 |
| 0.028 | 5.3 | 9400 | 0.5042 | 87.1362 | 80.0747 | 86.6251 | 86.6238 | 14.9169 |
| 0.0291 | 5.36 | 9500 | 0.4997 | 87.0858 | 80.036 | 86.5966 | 86.5908 | 14.9087 |
| 0.0291 | 5.42 | 9600 | 0.4983 | 87.0756 | 79.9726 | 86.5872 | 86.5865 | 14.9037 |
| 0.0282 | 5.47 | 9700 | 0.5073 | 87.0901 | 79.8924 | 86.5942 | 86.595 | 14.8982 |
| 0.0299 | 5.53 | 9800 | 0.4945 | 87.145 | 79.9289 | 86.6143 | 86.6206 | 14.8987 |
| 0.0278 | 5.59 | 9900 | 0.5187 | 86.9691 | 79.7553 | 86.4589 | 86.4624 | 14.9051 |
| 0.0237 | 5.64 | 10000 | 0.5246 | 86.9827 | 79.7671 | 86.4783 | 86.4701 | 14.9119 |
| 0.03 | 5.7 | 10100 | 0.4944 | 87.0292 | 79.8105 | 86.4909 | 86.5016 | 14.9119 |
| 0.0289 | 5.76 | 10200 | 0.5131 | 87.0028 | 79.8731 | 86.5042 | 86.5187 | 14.9137 |
| 0.0296 | 5.81 | 10300 | 0.4963 | 87.1329 | 79.9334 | 86.6172 | 86.6194 | 14.9128 |
| 0.0287 | 5.87 | 10400 | 0.4893 | 87.0761 | 79.9902 | 86.5448 | 86.5427 | 14.9174 |
| 0.029 | 5.93 | 10500 | 0.4880 | 87.0082 | 79.8738 | 86.4987 | 86.4864 | 14.9105 |
| 0.0281 | 5.98 | 10600 | 0.4928 | 87.0415 | 79.8243 | 86.5291 | 86.5279 | 14.9206 |
| 0.0236 | 6.04 | 10700 | 0.5026 | 86.9936 | 79.8109 | 86.4741 | 86.4771 | 14.9165 |
| 0.0172 | 6.09 | 10800 | 0.5242 | 87.0859 | 80.0264 | 86.5787 | 86.5684 | 14.9178 |
| 0.0157 | 6.15 | 10900 | 0.5386 | 87.0647 | 80.1227 | 86.5723 | 86.5658 | 14.9197 |
| 0.0175 | 6.21 | 11000 | 0.5222 | 87.034 | 80.051 | 86.525 | 86.5177 | 14.9160 |
| 0.0155 | 6.26 | 11100 | 0.5445 | 87.0634 | 79.9564 | 86.5556 | 86.5507 | 14.9101 |
| 0.0147 | 6.32 | 11200 | 0.5602 | 87.0164 | 79.9748 | 86.505 | 86.4928 | 14.9105 |
| 0.0156 | 6.38 | 11300 | 0.5587 | 87.1387 | 79.9561 | 86.6298 | 86.6329 | 14.9137 |
| 0.0157 | 6.43 | 11400 | 0.5655 | 87.1027 | 80.1466 | 86.6023 | 86.5983 | 14.9201 |
| 0.0139 | 6.49 | 11500 | 0.5773 | 87.1318 | 80.1543 | 86.5965 | 86.6127 | 14.9251 |
| 0.0152 | 6.55 | 11600 | 0.5748 | 87.2417 | 80.2155 | 86.7204 | 86.7277 | 14.9128 |
| 0.0169 | 6.6 | 11700 | 0.5558 | 87.2049 | 80.1632 | 86.7078 | 86.7198 | 14.9042 |
| 0.0158 | 6.66 | 11800 | 0.5452 | 87.0358 | 79.9864 | 86.5181 | 86.5149 | 14.9151 |
| 0.0169 | 6.72 | 11900 | 0.5411 | 87.0557 | 79.9435 | 86.5372 | 86.5375 | 14.9087 |
| 0.0127 | 6.77 | 12000 | 0.5564 | 87.0617 | 80.0711 | 86.5398 | 86.5645 | 14.9051 |
| 0.0158 | 6.83 | 12100 | 0.5545 | 87.0269 | 80.0081 | 86.4936 | 86.5004 | 14.9247 |
| 0.0142 | 6.88 | 12200 | 0.5520 | 87.1107 | 80.1457 | 86.5775 | 86.5851 | 14.9192 |
| 0.0142 | 6.94 | 12300 | 0.5590 | 87.152 | 80.1378 | 86.604 | 86.6048 | 14.9178 |
| 0.0146 | 7.0 | 12400 | 0.5633 | 87.1416 | 80.1493 | 86.6109 | 86.6128 | 14.9178 |
| 0.0087 | 7.05 | 12500 | 0.5928 | 87.1881 | 80.1549 | 86.6642 | 86.6747 | 14.9133 |
| 0.0094 | 7.11 | 12600 | 0.5998 | 87.2084 | 80.2571 | 86.7023 | 86.6967 | 14.9042 |
| 0.0082 | 7.17 | 12700 | 0.6086 | 87.1567 | 80.204 | 86.6479 | 86.6462 | 14.9147 |
| 0.0096 | 7.22 | 12800 | 0.6106 | 87.173 | 80.1732 | 86.658 | 86.6586 | 14.9156 |
| 0.0084 | 7.28 | 12900 | 0.6318 | 87.1298 | 80.1264 | 86.6351 | 86.638 | 14.9174 |
| 0.0079 | 7.34 | 13000 | 0.6363 | 87.1628 | 80.1184 | 86.6548 | 86.6486 | 14.9174 |
| 0.0091 | 7.39 | 13100 | 0.6313 | 87.241 | 80.2331 | 86.7437 | 86.7435 | 14.9156 |
| 0.0088 | 7.45 | 13200 | 0.6376 | 87.1652 | 80.1422 | 86.661 | 86.6599 | 14.9142 |
| 0.0091 | 7.51 | 13300 | 0.6364 | 87.1554 | 80.1285 | 86.6576 | 86.6553 | 14.9147 |
| 0.0081 | 7.56 | 13400 | 0.6372 | 87.2418 | 80.192 | 86.7178 | 86.7199 | 14.9188 |
| 0.0103 | 7.62 | 13500 | 0.6369 | 87.1754 | 80.1347 | 86.666 | 86.666 | 14.9133 |
| 0.0094 | 7.67 | 13600 | 0.6382 | 87.1611 | 80.1066 | 86.6541 | 86.6488 | 14.9142 |
| 0.0081 | 7.73 | 13700 | 0.6371 | 87.1836 | 80.0865 | 86.6575 | 86.6538 | 14.9151 |
| 0.0076 | 7.79 | 13800 | 0.6377 | 87.1652 | 80.0572 | 86.6498 | 86.6569 | 14.9142 |
| 0.0092 | 7.84 | 13900 | 0.6354 | 87.1638 | 80.0867 | 86.6563 | 86.6536 | 14.9142 |
| 0.0076 | 7.9 | 14000 | 0.6346 | 87.1814 | 80.1212 | 86.6698 | 86.6683 | 14.9137 |
| 0.0063 | 7.96 | 14100 | 0.6373 | 87.1913 | 80.1322 | 86.6793 | 86.6765 | 14.9128 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.11.0a0+b6df043
- Datasets 2.12.0
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e4_s108_v3
|
KingKazma
| 2023-07-16T22:13:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T22:13:57Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
mgeller/opt-6.7b-lora
|
mgeller
| 2023-07-16T22:06:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-12T22:58:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
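A minimal sketch of attaching this LoRA adapter with PEFT; the base model `facebook/opt-6.7b` is inferred from the repo name and is an assumption, since the card does not state it.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed base model (inferred from the repo name, not stated in the card).
base = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")

# Attach the LoRA adapter weights from this repo.
model = PeftModel.from_pretrained(base, "mgeller/opt-6.7b-lora")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```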
|
nbroad/setfit-sci-wiki-large
|
nbroad
| 2023-07-16T21:58:13Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-16T21:57:15Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# nbroad/setfit-sci-wiki-large
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
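A minimal training sketch of that two-step recipe using the 2023-era `SetFitTrainer` API; the base Sentence Transformer, dataset, and hyperparameters below are illustrative, not the ones used for this model.
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Tiny made-up dataset, purely to illustrate the recipe.
train_ds = Dataset.from_dict({
    "text": ["an article about photosynthesis", "a page about a pop singer"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the ST body
    num_iterations=20,                # contrastive pairs generated per sample
)
trainer.train()  # runs step 1, then step 2 fits the classification head
```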
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("nbroad/setfit-sci-wiki-large")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e1_s108_v3
|
KingKazma
| 2023-07-16T21:52:56Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T21:52:55Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
SinanAkkoyun/orca_mini_3b_gptq_badtest
|
SinanAkkoyun
| 2023-07-16T21:49:31Z | 5 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T21:27:48Z |
This is a very bad attempt at quantizing to 4-bit with group size 128 using the Alpaca dataset (in orca-style prompt format):
```sh
python quantize_alpaca.py --pretrained_model_dir orca_mini_3b/ --bits 4 --group_size 128 --quantized_model_dir orca_mini_3b_gptq/ --save_and_reload
```
Download the cleaned dataset first: https://github.com/gururise/AlpacaDataCleaned
|
LarryAIDraw/roxy-08
|
LarryAIDraw
| 2023-07-16T21:46:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T21:42:37Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/109272/roxy-oror-mushoku-tensei
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e0_s108_v3
|
KingKazma
| 2023-07-16T21:45:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T21:45:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
LarryAIDraw/Predator
|
LarryAIDraw
| 2023-07-16T21:45:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T21:42:05Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/109356/predator-or-granblue-fantasy
|
quangnguyennn/pokemon-lora
|
quangnguyennn
| 2023-07-16T21:41:33Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-16T12:51:01Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - quangnguyennn/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e-1_s108_v3
|
KingKazma
| 2023-07-16T21:38:46Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T21:38:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e9_s6789_v3
|
KingKazma
| 2023-07-16T21:14:32Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T01:36:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Debayan990/my-pet-cat-jxl
|
Debayan990
| 2023-07-16T21:13:51Z | 13 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-16T21:01:07Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-jxl Dreambooth model trained by Debayan990 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: BBIT47
Sample pictures of this concept:



|
MichaelS91/autotrain-hub_testing-75008139803
|
MichaelS91
| 2023-07-16T21:08:49Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"text-regression",
"en",
"dataset:MichaelS91/autotrain-data-hub_testing",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-16T21:05:50Z |
---
tags:
- autotrain
- text-regression
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- MichaelS91/autotrain-data-hub_testing
co2_eq_emissions:
emissions: 1.5911364056652006
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 75008139803
- CO2 Emissions (in grams): 1.5911
## Validation Metrics
- Loss: 1.889
- MSE: 1.889
- MAE: 1.094
- R2: 0.221
- RMSE: 1.374
- Explained Variance: 0.242
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/MichaelS91/autotrain-hub_testing-75008139803
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("MichaelS91/autotrain-hub_testing-75008139803", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("MichaelS91/autotrain-hub_testing-75008139803", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
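Since the problem type is single-column regression, the prediction is presumably the single value in `outputs.logits`; continuing from the snippet above:
```python
# One regression value per input row.
score = outputs.logits.squeeze().item()
print(score)
```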
|
hseokool/vicuna-7b-v1.3-230623-09
|
hseokool
| 2023-07-16T20:40:52Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-14T11:46:32Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
FightingFalcon/SonmezReyiz
|
FightingFalcon
| 2023-07-16T20:39:15Z | 0 | 0 | null |
[
"sönmez",
"sönmezreyiz",
"türkçe",
"turkish",
"tr",
"arxiv:1910.09700",
"license:openrail",
"region:us"
] | null | 2023-07-16T20:00:15Z |
---
license: openrail
language:
- tr
tags:
- sönmez
- sönmezreyiz
- türkçe
- turkish
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e3_s6789_v3
|
KingKazma
| 2023-07-16T20:32:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T00:01:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
rshrott/falcon-7b-instruct-ft-descriptions-adapters
|
rshrott
| 2023-07-16T20:20:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T20:15:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e1_s6789_v3
|
KingKazma
| 2023-07-16T20:18:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-14T23:29:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
bskang/test_demo_ver
|
bskang
| 2023-07-16T20:17:48Z | 34 | 0 |
peft
|
[
"peft",
"text-generation",
"en",
"region:us"
] |
text-generation
| 2023-07-16T20:15:26Z |
---
library_name: peft
language:
- en
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
anindya64/alpaca-bank-issue-summarization-20b-EthurAI
|
anindya64
| 2023-07-16T20:00:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T20:00:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
Meina/MeinaMix_V11
|
Meina
| 2023-07-16T19:53:46Z | 6,643 | 35 |
diffusers
|
[
"diffusers",
"safetensors",
"art",
"anime",
"stable diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-16T19:11:15Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
- anime
- stable diffusion
---
MeinaMix's objective is to be able to produce good art with little prompting.
For examples and prompts, please check out: https://civitai.com/models/7240/meinamix
I have a Discord server where you can post images that you generated, discuss prompts, and/or ask for help:
https://discord.gg/XC9nGZNDUd
If you like one of my models and want to support their updates, I've made a Ko-fi page: https://ko-fi.com/meina where you can buy me a coffee <3
And a Patreon page: https://www.patreon.com/MeinaMix where you can support me and get access to betas of my models!
You may also try this model using Sinkin.ai: https://sinkin.ai/m/vln8Nwr
MeinaMix and the other Meina models will ALWAYS be FREE.
Recommendations for use: enable Quantization in K samplers.
Hires.fix is needed for prompts where the character is far away in order to make decent images; it drastically improves the quality of faces and eyes!
Recommended parameters:
Sampler: Euler a: 40 to 60 steps.
Sampler: DPM++ SDE Karras: 20 to 30 steps.
Sampler: DPM++ 2M Karras: 20 to 40 steps.
CFG Scale: 7.
Resolutions: 512x768, 512x1024 for Portrait!
Resolutions: 768x512, 1024x512, 1536x512 for Landscape!
Hires.fix: R-ESRGAN 4x+Anime6b, with 10 steps at 0.3 up to 0.5 denoising.
Clip Skip: 2.
Negatives: ' (worst quality, low quality:1.4), (zombie, sketch, interlocked fingers, comic) '
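A minimal sketch of the recommended settings translated to diffusers, assuming the repo loads as a standard Stable Diffusion pipeline; DPM++ 2M Karras maps to `DPMSolverMultistepScheduler` with Karras sigmas, Hires.fix and Clip Skip are WebUI-side features omitted here, and the positive prompt is illustrative.
```python
import torch
from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M Karras, 20-40 steps, CFG 7, portrait resolution, recommended negatives.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
image = pipe(
    "1girl, masterpiece, best quality",  # illustrative prompt
    negative_prompt="(worst quality, low quality:1.4), (zombie, sketch, interlocked fingers, comic)",
    num_inference_steps=30,
    guidance_scale=7,
    width=512,
    height=768,
).images[0]
image.save("meinamix_sample.png")
```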
|
rshrott/falcon-7b-instruct-ft-adapters
|
rshrott
| 2023-07-16T19:48:46Z | 5 | 0 |
peft
|
[
"peft",
"pytorch",
"RefinedWebModel",
"custom_code",
"region:us"
] | null | 2023-07-16T13:37:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
|
Dlychan/Tokyolagi
|
Dlychan
| 2023-07-16T19:42:33Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T19:41:10Z |
---
license: creativeml-openrail-m
---
|
bskang/bskang8
|
bskang
| 2023-07-16T19:39:22Z | 0 | 0 | null |
[
"en",
"license:openrail",
"region:us"
] | null | 2023-07-16T12:18:21Z |
---
language:
- en
license: openrail
---
|
Araki/airoboros-33b-gpt4-1.4.1-PI-8192-GGML
|
Araki
| 2023-07-16T19:23:42Z | 0 | 2 | null |
[
"llama",
"ggml",
"text-generation",
"region:us"
] |
text-generation
| 2023-07-16T00:08:31Z |
---
pipeline_tag: text-generation
tags:
- llama
- ggml
---
**Quantization from:**
[bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16)
**Converted to the GGML format with:**
[llama.cpp master-6e7cca4 (JUL 15, 2023)](https://github.com/ggerganov/llama.cpp/releases/tag/master-6e7cca4)
**Tested with:**
[koboldcpp 1.35](https://github.com/LostRuins/koboldcpp/releases/tag/v1.35)
**Example usage:**
```
koboldcpp.exe airoboros-33b-gpt4-1.4.1-PI-8192-ggmlv3.Q2_K.bin --threads 6 --linearrope --contextsize 8192 --stream --smartcontext --unbantokens --noblas
```
|
anujsahani01/NeuralCodeBot_starchat
|
anujsahani01
| 2023-07-16T19:18:12Z | 0 | 0 | null |
[
"generated_from_trainer",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-07-15T11:21:28Z |
---
license: bigcode-openrail-m
tags:
- generated_from_trainer
model-index:
- name: NeuralCodeBot_starchat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NeuralCodeBot_starchat
This model is a fine-tuned version of [HuggingFaceH4/starchat-alpha](https://huggingface.co/HuggingFaceH4/starchat-alpha) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 5000
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YojitShinde/ppo-Pyramids
|
YojitShinde
| 2023-07-16T19:13:01Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-16T19:11:49Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: YojitShinde/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
ailabturkiye/umitozdag
|
ailabturkiye
| 2023-07-16T19:11:50Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T18:57:55Z |
---
license: openrail
language:
- tr
tags:
- music
---
Ümit Özdağ 200 Epochs
[](discord.gg/ailab)


# Ümit Özdağ - RVC V2 200 Epoch
**This is the voice model of Zafer Partisi chairman Ümit Özdağ,
trained with RVC V2 for 200 epochs.**
_The dataset and training were done by me._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the openrail license.__
## Credits
**Please give credit when sharing a cover made with this model on any platform.**
- Discord: Bif-Tek#0505

[](discord.gg/ailab)

|
jeremyvictor/t5-v1_1-base-fce-e8-b16
|
jeremyvictor
| 2023-07-16T18:47:47Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-16T15:27:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-v1_1-base-fce-e8-b16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-v1_1-base-fce-e8-b16
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3409
- Rouge1: 87.1583
- Rouge2: 79.8003
- Rougel: 86.6556
- Rougelsum: 86.6858
- Gen Len: 14.8987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.9063 | 0.06 | 100 | 0.8111 | 27.4937 | 22.9629 | 27.3015 | 27.2771 | 7.4286 |
| 0.7836 | 0.11 | 200 | 0.5104 | 85.4419 | 76.9583 | 84.8358 | 84.8509 | 15.0488 |
| 0.6368 | 0.17 | 300 | 0.4682 | 86.2542 | 77.5212 | 85.6688 | 85.6923 | 14.8298 |
| 0.5924 | 0.23 | 400 | 0.4734 | 86.4845 | 78.0506 | 85.9059 | 85.9008 | 14.8352 |
| 0.5694 | 0.28 | 500 | 0.4081 | 86.352 | 78.0709 | 85.8245 | 85.8281 | 14.8585 |
| 0.5335 | 0.34 | 600 | 0.4179 | 86.5893 | 78.4175 | 86.0693 | 86.0625 | 14.8745 |
| 0.5246 | 0.4 | 700 | 0.3990 | 86.4139 | 78.4306 | 85.9523 | 85.9443 | 14.8617 |
| 0.504 | 0.45 | 800 | 0.4233 | 86.7504 | 78.7906 | 86.2416 | 86.2447 | 14.8759 |
| 0.4818 | 0.51 | 900 | 0.4008 | 86.7978 | 78.8187 | 86.2413 | 86.2432 | 14.8699 |
| 0.4756 | 0.56 | 1000 | 0.4028 | 86.9123 | 79.0247 | 86.3563 | 86.3635 | 14.8640 |
| 0.4772 | 0.62 | 1100 | 0.3789 | 86.5028 | 78.5736 | 85.9794 | 85.9983 | 14.8717 |
| 0.4638 | 0.68 | 1200 | 0.3818 | 86.6276 | 78.7383 | 86.084 | 86.0903 | 14.9124 |
| 0.4614 | 0.73 | 1300 | 0.3839 | 86.8128 | 79.2001 | 86.3591 | 86.3519 | 14.8695 |
| 0.4326 | 0.79 | 1400 | 0.3751 | 86.9302 | 79.3511 | 86.4188 | 86.4311 | 14.9019 |
| 0.4485 | 0.85 | 1500 | 0.3654 | 86.6862 | 79.0433 | 86.1832 | 86.1872 | 14.9206 |
| 0.4187 | 0.9 | 1600 | 0.3823 | 86.9451 | 79.2758 | 86.4628 | 86.4724 | 14.8795 |
| 0.4218 | 0.96 | 1700 | 0.3696 | 86.9051 | 79.1393 | 86.3682 | 86.3627 | 14.9220 |
| 0.3812 | 1.02 | 1800 | 0.3699 | 87.0233 | 79.4507 | 86.513 | 86.5154 | 14.8873 |
| 0.3116 | 1.07 | 1900 | 0.3763 | 86.9293 | 79.2058 | 86.4356 | 86.4445 | 14.8918 |
| 0.3237 | 1.13 | 2000 | 0.3740 | 87.0449 | 79.4088 | 86.5157 | 86.5319 | 14.8918 |
| 0.3071 | 1.19 | 2100 | 0.3690 | 86.5698 | 78.4408 | 85.9993 | 86.0409 | 14.9069 |
| 0.3072 | 1.24 | 2200 | 0.3646 | 86.9336 | 79.334 | 86.4284 | 86.4303 | 14.8918 |
| 0.2953 | 1.3 | 2300 | 0.3750 | 86.7437 | 78.949 | 86.2131 | 86.202 | 14.8909 |
| 0.308 | 1.35 | 2400 | 0.3613 | 86.792 | 79.2179 | 86.2832 | 86.2934 | 14.8923 |
| 0.3132 | 1.41 | 2500 | 0.3528 | 86.7653 | 79.0525 | 86.2258 | 86.2357 | 14.9110 |
| 0.3141 | 1.47 | 2600 | 0.3494 | 86.8884 | 79.2484 | 86.3719 | 86.3622 | 14.9069 |
| 0.3095 | 1.52 | 2700 | 0.3539 | 87.0166 | 79.5218 | 86.5167 | 86.5248 | 14.8905 |
| 0.3274 | 1.58 | 2800 | 0.3599 | 87.2104 | 79.7277 | 86.7135 | 86.7127 | 14.8854 |
| 0.312 | 1.64 | 2900 | 0.3536 | 86.8926 | 79.2971 | 86.3699 | 86.3666 | 14.8886 |
| 0.3134 | 1.69 | 3000 | 0.3518 | 87.0884 | 79.5848 | 86.5877 | 86.6005 | 14.9028 |
| 0.3012 | 1.75 | 3100 | 0.3573 | 86.3559 | 78.1413 | 85.8416 | 85.8479 | 14.8763 |
| 0.311 | 1.81 | 3200 | 0.3467 | 86.9837 | 79.4983 | 86.4827 | 86.4981 | 14.8937 |
| 0.303 | 1.86 | 3300 | 0.3422 | 86.9232 | 79.3542 | 86.4098 | 86.4427 | 14.9032 |
| 0.304 | 1.92 | 3400 | 0.3409 | 87.1583 | 79.8003 | 86.6556 | 86.6858 | 14.8987 |
| 0.2934 | 1.98 | 3500 | 0.3485 | 87.0529 | 79.6491 | 86.5825 | 86.6003 | 14.9000 |
| 0.247 | 2.03 | 3600 | 0.3586 | 87.0147 | 79.6418 | 86.5126 | 86.5339 | 14.9042 |
| 0.193 | 2.09 | 3700 | 0.3667 | 86.9326 | 79.4481 | 86.4675 | 86.4709 | 14.9128 |
| 0.195 | 2.14 | 3800 | 0.3673 | 86.8892 | 79.3638 | 86.3717 | 86.3866 | 14.9210 |
| 0.19 | 2.2 | 3900 | 0.3670 | 86.8789 | 79.4677 | 86.3925 | 86.3892 | 14.9023 |
| 0.2033 | 2.26 | 4000 | 0.3600 | 86.9004 | 79.5211 | 86.4043 | 86.407 | 14.9042 |
| 0.1969 | 2.31 | 4100 | 0.3587 | 87.0403 | 79.7208 | 86.5257 | 86.5245 | 14.8978 |
| 0.2035 | 2.37 | 4200 | 0.3630 | 86.8793 | 79.4667 | 86.3931 | 86.3875 | 14.8895 |
| 0.2162 | 2.43 | 4300 | 0.3722 | 86.78 | 79.3367 | 86.2742 | 86.2812 | 14.9083 |
| 0.1984 | 2.48 | 4400 | 0.3573 | 86.7248 | 79.2577 | 86.218 | 86.2139 | 14.8918 |
| 0.2058 | 2.54 | 4500 | 0.3617 | 86.6452 | 79.1422 | 86.1701 | 86.1838 | 14.8909 |
| 0.2161 | 2.6 | 4600 | 0.3554 | 86.8574 | 79.5476 | 86.3982 | 86.4095 | 14.9283 |
| 0.215 | 2.65 | 4700 | 0.3583 | 86.8873 | 79.5265 | 86.4039 | 86.3996 | 14.8923 |
| 0.2048 | 2.71 | 4800 | 0.3535 | 86.8465 | 79.3852 | 86.3446 | 86.344 | 14.8978 |
| 0.2099 | 2.77 | 4900 | 0.3601 | 86.8952 | 79.4424 | 86.3888 | 86.387 | 14.8868 |
| 0.2149 | 2.82 | 5000 | 0.3603 | 86.7871 | 79.2397 | 86.297 | 86.3004 | 14.8850 |
| 0.2251 | 2.88 | 5100 | 0.3448 | 86.9477 | 79.6744 | 86.4984 | 86.4911 | 14.9133 |
| 0.2048 | 2.93 | 5200 | 0.3522 | 86.8843 | 79.37 | 86.3702 | 86.3668 | 14.8955 |
| 0.2099 | 2.99 | 5300 | 0.3459 | 86.7938 | 79.2104 | 86.3027 | 86.3169 | 14.9137 |
| 0.1377 | 3.05 | 5400 | 0.4000 | 86.9855 | 79.4184 | 86.438 | 86.4375 | 14.9110 |
| 0.1369 | 3.1 | 5500 | 0.3848 | 86.8338 | 79.2098 | 86.2885 | 86.3028 | 14.9019 |
| 0.1357 | 3.16 | 5600 | 0.3914 | 86.7061 | 79.2474 | 86.2247 | 86.2237 | 14.9105 |
| 0.1263 | 3.22 | 5700 | 0.3864 | 86.7128 | 79.1103 | 86.2121 | 86.2166 | 14.9137 |
| 0.135 | 3.27 | 5800 | 0.3929 | 86.8134 | 79.4572 | 86.3608 | 86.3683 | 14.9124 |
| 0.1361 | 3.33 | 5900 | 0.3828 | 86.9149 | 79.4756 | 86.4152 | 86.3959 | 14.8959 |
| 0.1286 | 3.39 | 6000 | 0.3849 | 86.8025 | 79.3645 | 86.3215 | 86.3204 | 14.8996 |
| 0.1335 | 3.44 | 6100 | 0.3793 | 86.7591 | 79.2887 | 86.2778 | 86.2765 | 14.9105 |
| 0.1278 | 3.5 | 6200 | 0.3938 | 86.8352 | 79.4161 | 86.3282 | 86.3376 | 14.9169 |
| 0.1346 | 3.56 | 6300 | 0.3943 | 86.9637 | 79.6404 | 86.4753 | 86.4718 | 14.8978 |
| 0.1421 | 3.61 | 6400 | 0.3799 | 86.8445 | 79.4133 | 86.3271 | 86.3206 | 14.9151 |
| 0.1398 | 3.67 | 6500 | 0.3923 | 86.9793 | 79.6847 | 86.4935 | 86.4889 | 14.9174 |
| 0.1359 | 3.72 | 6600 | 0.3912 | 86.9095 | 79.3593 | 86.4296 | 86.4506 | 14.8959 |
| 0.1444 | 3.78 | 6700 | 0.3741 | 86.8498 | 79.3141 | 86.3586 | 86.3681 | 14.8909 |
| 0.1351 | 3.84 | 6800 | 0.3840 | 87.223 | 79.825 | 86.7127 | 86.7371 | 14.8877 |
| 0.1325 | 3.89 | 6900 | 0.3816 | 87.148 | 79.8102 | 86.6405 | 86.6511 | 14.9133 |
| 0.1315 | 3.95 | 7000 | 0.3796 | 86.7778 | 79.3782 | 86.3057 | 86.2939 | 14.9005 |
| 0.1332 | 4.01 | 7100 | 0.3962 | 87.0238 | 79.6621 | 86.5384 | 86.5306 | 14.8996 |
| 0.0834 | 4.06 | 7200 | 0.4271 | 86.9999 | 79.7076 | 86.4981 | 86.5026 | 14.9014 |
| 0.088 | 4.12 | 7300 | 0.4176 | 86.9193 | 79.4698 | 86.4085 | 86.4171 | 14.9128 |
| 0.0897 | 4.18 | 7400 | 0.4109 | 86.9287 | 79.5866 | 86.4541 | 86.4474 | 14.9037 |
| 0.0908 | 4.23 | 7500 | 0.4109 | 87.1272 | 79.7632 | 86.6206 | 86.6176 | 14.9133 |
| 0.0895 | 4.29 | 7600 | 0.4114 | 87.0107 | 79.7349 | 86.4873 | 86.4754 | 14.9023 |
| 0.0856 | 4.35 | 7700 | 0.4242 | 87.0115 | 79.6387 | 86.4786 | 86.49 | 14.8982 |
| 0.0852 | 4.4 | 7800 | 0.4271 | 86.9943 | 79.6717 | 86.5126 | 86.5026 | 14.9019 |
| 0.0919 | 4.46 | 7900 | 0.4216 | 86.9903 | 79.67 | 86.512 | 86.5085 | 14.8937 |
| 0.0907 | 4.51 | 8000 | 0.4180 | 87.0323 | 79.7092 | 86.5391 | 86.5343 | 14.8978 |
| 0.0889 | 4.57 | 8100 | 0.4276 | 86.9813 | 79.6367 | 86.4697 | 86.4724 | 14.9115 |
| 0.0907 | 4.63 | 8200 | 0.4209 | 87.0149 | 79.5637 | 86.5028 | 86.5059 | 14.9092 |
| 0.0966 | 4.68 | 8300 | 0.4064 | 86.9685 | 79.4665 | 86.4393 | 86.4523 | 14.9010 |
| 0.088 | 4.74 | 8400 | 0.4234 | 86.9921 | 79.5729 | 86.4977 | 86.5067 | 14.8800 |
| 0.0897 | 4.8 | 8500 | 0.4117 | 87.0727 | 79.7094 | 86.5465 | 86.5482 | 14.9014 |
| 0.0924 | 4.85 | 8600 | 0.4056 | 86.8789 | 79.409 | 86.3689 | 86.3672 | 14.9083 |
| 0.0916 | 4.91 | 8700 | 0.4127 | 86.8645 | 79.4195 | 86.3814 | 86.3729 | 14.8982 |
| 0.0908 | 4.97 | 8800 | 0.4054 | 86.9146 | 79.4138 | 86.4022 | 86.399 | 14.9000 |
| 0.078 | 5.02 | 8900 | 0.4403 | 87.0178 | 79.6166 | 86.5112 | 86.505 | 14.9078 |
| 0.0583 | 5.08 | 9000 | 0.4400 | 86.9828 | 79.649 | 86.4913 | 86.4962 | 14.9064 |
| 0.057 | 5.14 | 9100 | 0.4637 | 87.0435 | 79.6446 | 86.5464 | 86.5252 | 14.9037 |
| 0.0581 | 5.19 | 9200 | 0.4617 | 87.017 | 79.6255 | 86.5004 | 86.4907 | 14.9069 |
| 0.0562 | 5.25 | 9300 | 0.4521 | 86.8638 | 79.479 | 86.3298 | 86.338 | 14.9096 |
| 0.0588 | 5.3 | 9400 | 0.4472 | 86.9719 | 79.5608 | 86.4751 | 86.4798 | 14.9073 |
| 0.0571 | 5.36 | 9500 | 0.4472 | 87.0325 | 79.6355 | 86.5154 | 86.5278 | 14.9073 |
| 0.0589 | 5.42 | 9600 | 0.4580 | 87.1556 | 79.8992 | 86.627 | 86.6372 | 14.9064 |
| 0.057 | 5.47 | 9700 | 0.4527 | 87.0033 | 79.6457 | 86.4846 | 86.5031 | 14.9101 |
| 0.0595 | 5.53 | 9800 | 0.4538 | 87.0419 | 79.6632 | 86.5261 | 86.5434 | 14.9055 |
| 0.062 | 5.59 | 9900 | 0.4518 | 87.0581 | 79.6818 | 86.54 | 86.551 | 14.9005 |
| 0.0568 | 5.64 | 10000 | 0.4549 | 87.1255 | 79.8908 | 86.6143 | 86.6255 | 14.9042 |
| 0.0572 | 5.7 | 10100 | 0.4557 | 86.9927 | 79.5946 | 86.4726 | 86.4953 | 14.9023 |
| 0.0603 | 5.76 | 10200 | 0.4493 | 87.0665 | 79.7469 | 86.58 | 86.5934 | 14.8932 |
| 0.0604 | 5.81 | 10300 | 0.4533 | 87.0864 | 79.7039 | 86.5871 | 86.5851 | 14.9042 |
| 0.0564 | 5.87 | 10400 | 0.4653 | 87.082 | 79.766 | 86.5835 | 86.5775 | 14.9055 |
| 0.0579 | 5.93 | 10500 | 0.4677 | 86.9805 | 79.5068 | 86.4708 | 86.4744 | 14.8882 |
| 0.0582 | 5.98 | 10600 | 0.4607 | 86.9273 | 79.3762 | 86.4228 | 86.4225 | 14.9119 |
| 0.0454 | 6.04 | 10700 | 0.4917 | 87.038 | 79.6146 | 86.5363 | 86.533 | 14.9156 |
| 0.0399 | 6.09 | 10800 | 0.4986 | 87.0026 | 79.5481 | 86.4992 | 86.4924 | 14.9042 |
| 0.0367 | 6.15 | 10900 | 0.5115 | 87.13 | 79.7506 | 86.6082 | 86.621 | 14.9055 |
| 0.0405 | 6.21 | 11000 | 0.5084 | 87.0768 | 79.6986 | 86.5541 | 86.5403 | 14.9083 |
| 0.0386 | 6.26 | 11100 | 0.5092 | 87.1376 | 79.7442 | 86.5937 | 86.5767 | 14.8996 |
| 0.0382 | 6.32 | 11200 | 0.5063 | 87.0779 | 79.7205 | 86.561 | 86.5546 | 14.8982 |
| 0.0431 | 6.38 | 11300 | 0.4950 | 87.0998 | 79.7699 | 86.5882 | 86.5916 | 14.9028 |
| 0.0388 | 6.43 | 11400 | 0.5098 | 87.1711 | 79.8707 | 86.6425 | 86.6409 | 14.9023 |
| 0.041 | 6.49 | 11500 | 0.4911 | 87.1742 | 79.8319 | 86.6434 | 86.6522 | 14.9005 |
| 0.0379 | 6.55 | 11600 | 0.5023 | 87.2258 | 79.9175 | 86.7019 | 86.7018 | 14.9010 |
| 0.0383 | 6.6 | 11700 | 0.5078 | 87.0913 | 79.7547 | 86.5767 | 86.5826 | 14.9046 |
| 0.0387 | 6.66 | 11800 | 0.5111 | 87.1913 | 79.9592 | 86.6805 | 86.6742 | 14.9060 |
| 0.0362 | 6.72 | 11900 | 0.5125 | 87.0096 | 79.6639 | 86.5037 | 86.5039 | 14.9124 |
| 0.0343 | 6.77 | 12000 | 0.5210 | 87.0657 | 79.7384 | 86.5621 | 86.5561 | 14.9110 |
| 0.0401 | 6.83 | 12100 | 0.5110 | 87.1338 | 79.8537 | 86.6368 | 86.6271 | 14.9124 |
| 0.0353 | 6.88 | 12200 | 0.5169 | 87.082 | 79.756 | 86.5771 | 86.5718 | 14.9073 |
| 0.0384 | 6.94 | 12300 | 0.4998 | 87.1211 | 79.8474 | 86.6016 | 86.6065 | 14.9078 |
| 0.0395 | 7.0 | 12400 | 0.5184 | 87.1621 | 79.8793 | 86.6411 | 86.648 | 14.9064 |
| 0.0243 | 7.05 | 12500 | 0.5387 | 87.1588 | 79.8545 | 86.6464 | 86.6627 | 14.9019 |
| 0.0283 | 7.11 | 12600 | 0.5384 | 87.1909 | 79.8888 | 86.6567 | 86.6698 | 14.9042 |
| 0.026 | 7.17 | 12700 | 0.5459 | 87.1782 | 79.7991 | 86.6373 | 86.6507 | 14.9028 |
| 0.0303 | 7.22 | 12800 | 0.5301 | 87.1014 | 79.7321 | 86.5581 | 86.5743 | 14.9014 |
| 0.0252 | 7.28 | 12900 | 0.5481 | 87.0907 | 79.6948 | 86.5306 | 86.5474 | 14.9069 |
| 0.0273 | 7.34 | 13000 | 0.5469 | 87.0971 | 79.6697 | 86.5392 | 86.558 | 14.8987 |
| 0.0249 | 7.39 | 13100 | 0.5462 | 87.095 | 79.6904 | 86.5559 | 86.566 | 14.9037 |
| 0.0246 | 7.45 | 13200 | 0.5553 | 87.0964 | 79.6834 | 86.5572 | 86.5607 | 14.9055 |
| 0.0286 | 7.51 | 13300 | 0.5501 | 87.0933 | 79.7177 | 86.5579 | 86.5582 | 14.9092 |
| 0.0234 | 7.56 | 13400 | 0.5550 | 87.1266 | 79.7546 | 86.5833 | 86.5855 | 14.9087 |
| 0.0263 | 7.62 | 13500 | 0.5570 | 87.0957 | 79.6859 | 86.5608 | 86.5584 | 14.9064 |
| 0.0238 | 7.67 | 13600 | 0.5630 | 87.1368 | 79.7487 | 86.6036 | 86.6031 | 14.9032 |
| 0.0258 | 7.73 | 13700 | 0.5598 | 87.1527 | 79.7481 | 86.622 | 86.6153 | 14.9055 |
| 0.0249 | 7.79 | 13800 | 0.5649 | 87.15 | 79.7419 | 86.6106 | 86.6056 | 14.9046 |
| 0.0272 | 7.84 | 13900 | 0.5616 | 87.1439 | 79.7597 | 86.6085 | 86.6081 | 14.9042 |
| 0.0261 | 7.9 | 14000 | 0.5596 | 87.1359 | 79.7696 | 86.6081 | 86.6024 | 14.9051 |
| 0.0233 | 7.96 | 14100 | 0.5611 | 87.1367 | 79.7636 | 86.6112 | 86.6019 | 14.9046 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.11.0a0+b6df043
- Datasets 2.12.0
- Tokenizers 0.13.3
|
oakal/fourthbrain_bloomz_marketing
|
oakal
| 2023-07-16T18:32:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T18:32:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
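The settings above map roughly onto a `BitsAndBytesConfig` when reloading the base model; a minimal sketch (the base model name is not stated in this card, so the one used here is only a placeholder):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the listed 8-bit settings; "bigscience/bloomz-560m" is a placeholder base model.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloomz-560m", quantization_config=bnb_config, device_map="auto"
)
```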
### Framework versions
- PEFT 0.4.0.dev0
|
harithapliyal/distilbert-base-uncased-finetuned-ner
|
harithapliyal
| 2023-07-16T18:26:04Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-16T17:06:57Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: harithapliyal/distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# harithapliyal/distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1975
- Validation Loss: 0.0734
- Train Precision: 0.9049
- Train Recall: 0.9116
- Train F1: 0.9083
- Train Accuracy: 0.9793
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1975 | 0.0734 | 0.9049 | 0.9116 | 0.9083 | 0.9793 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
0sunfire0/rl_course_vizdoom_health_gathering_supreme_02
|
0sunfire0
| 2023-07-16T18:23:44Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T18:23:37Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.16 +/- 3.86
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r 0sunfire0/rl_course_vizdoom_health_gathering_supreme_02
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# Entry-point module below is assumed from the Sample-Factory vizdoom examples; the auto-generated card pointed at a notebook launcher path.
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_02
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# Entry-point module below is assumed from the Sample-Factory vizdoom examples.
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_02 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
bodaay/Wizard-Vicuna-7B-Uncensored-ONNX
|
bodaay
| 2023-07-16T18:06:51Z | 5 | 0 |
transformers
|
[
"transformers",
"onnx",
"llama",
"text-generation",
"uncensored",
"en",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T16:18:44Z |
---
license: other
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
language:
- en
tags:
- uncensored
---
Original Model: [ehartford/Wizard-Vicuna-7B-Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored)
From Original Model Card:
This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) trained against LLaMA-7B with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
Publishing anything this model generates is the same as publishing it yourself.
You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
|
rsml/bbert_qa
|
rsml
| 2023-07-16T17:59:30Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-16T17:42:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bbert_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bbert_qa
This model is a fine-tuned version of [bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12](https://huggingface.co/bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3490 |
| 2.7154 | 2.0 | 500 | 1.7686 |
| 2.7154 | 3.0 | 750 | 1.6818 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sherif1311/flan-t5-base-imdb-text-classification
|
sherif1311
| 2023-07-16T17:50:43Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-16T14:44:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: flan-t5-base-imdb-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-imdb-text-classification
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0797
- F1: 95.072
- Gen Len: 2.5005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
NasimB/children_bnc_rarity_all_no_cut
|
NasimB
| 2023-07-16T17:50:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T15:57:37Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: children_bnc_rarity_all_no_cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# children_bnc_rarity_all_no_cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7047 | 0.29 | 500 | 5.6398 |
| 5.3501 | 0.58 | 1000 | 5.2066 |
| 5.0056 | 0.88 | 1500 | 4.9588 |
| 4.7258 | 1.17 | 2000 | 4.8173 |
| 4.5734 | 1.46 | 2500 | 4.6948 |
| 4.4663 | 1.75 | 3000 | 4.5804 |
| 4.3402 | 2.05 | 3500 | 4.5071 |
| 4.1471 | 2.34 | 4000 | 4.4576 |
| 4.1137 | 2.63 | 4500 | 4.4027 |
| 4.0777 | 2.92 | 5000 | 4.3468 |
| 3.8629 | 3.22 | 5500 | 4.3449 |
| 3.8078 | 3.51 | 6000 | 4.3108 |
| 3.8044 | 3.8 | 6500 | 4.2763 |
| 3.7029 | 4.09 | 7000 | 4.2803 |
| 3.5324 | 4.39 | 7500 | 4.2741 |
| 3.5239 | 4.68 | 8000 | 4.2585 |
| 3.5091 | 4.97 | 8500 | 4.2454 |
| 3.3521 | 5.26 | 9000 | 4.2592 |
| 3.3357 | 5.56 | 9500 | 4.2584 |
| 3.3348 | 5.85 | 10000 | 4.2573 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nishchalprasad/lunar_lander_v2-PPO
|
nishchalprasad
| 2023-07-16T17:44:18Z | 4 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-16T17:43:57Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO-MLP
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.46 +/- 24.94
name: mean_reward
verified: false
---
# **PPO-MLP** Agent playing **LunarLander-v2**
This is a trained model of a **PPO-MLP** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is an assumption based on common SB3 naming; adjust to the actual file in the repo.
checkpoint = load_from_hub(repo_id="nishchalprasad/lunar_lander_v2-PPO", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
kanu03/my-cat
|
kanu03
| 2023-07-16T17:44:02Z | 107 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-16T17:39:19Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-cat Dreambooth model trained by kanu03 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: OPJU101
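A minimal inference sketch with diffusers (the concept token "my-cat" is assumed from the model name; adjust the prompt to the token actually used during training):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("kanu03/my-cat", torch_dtype=torch.float16).to("cuda")
# "my-cat" is assumed to be the learned concept token.
image = pipe("a photo of my-cat sitting on a sofa").images[0]
image.save("my-cat.png")
```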
Sample pictures of this concept:

|
Za88yes/Afriana
|
Za88yes
| 2023-07-16T17:43:07Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-16T17:41:00Z |
---
license: bigscience-openrail-m
---
|
Tasaloris13/finetuned-college-10
|
Tasaloris13
| 2023-07-16T17:42:10Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T16:59:34Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
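A hedged sketch of reloading a base model with the 4-bit settings above and attaching this adapter (the base model id is not given in the card, so the one below is only a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)
# "huggyllama/llama-7b" is a placeholder; substitute the actual base model used for fine-tuning.
base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Tasaloris13/finetuned-college-10")
```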
### Framework versions
- PEFT 0.4.0.dev0
|
balpreetspankaj/distilbert-base-uncased-finetuned-emotion
|
balpreetspankaj
| 2023-07-16T17:37:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-16T16:46:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2169
- Accuracy: 0.9285
- F1: 0.9283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.827 | 1.0 | 250 | 0.3132 | 0.9085 | 0.9062 |
| 0.2411 | 2.0 | 500 | 0.2169 | 0.9285 | 0.9283 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
magicsword/wy-mt-en-zh-1
|
magicsword
| 2023-07-16T17:35:29Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:magicsword/autotrain-data-wy-mt-en-zh",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-16T15:16:22Z |
---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- magicsword/autotrain-data-wy-mt-en-zh
co2_eq_emissions:
emissions: 1.4514851624864995
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 74981139791
- CO2 Emissions (in grams): 1.4515
## Validation Metrics
- Loss: 2.215
- SacreBLEU: 12.702
- Gen len: 16.311
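A quick way to try the model is the translation pipeline (a sketch; the en→zh direction is inferred from the repo name):

```python
from transformers import pipeline

translator = pipeline("translation", model="magicsword/wy-mt-en-zh-1")
print(translator("The weather is nice today.")[0]["translation_text"])
```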
|
magicsword/wy-mt-en-zh-2
|
magicsword
| 2023-07-16T17:27:39Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:magicsword/autotrain-data-wy-mt-en-zh",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-16T15:15:50Z |
---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- magicsword/autotrain-data-wy-mt-en-zh
co2_eq_emissions:
emissions: 71.14399741050826
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 74981139786
- CO2 Emissions (in grams): 71.1440
## Validation Metrics
- Loss: 2.220
- SacreBLEU: 12.949
- Gen len: 16.386
|
lucasbertola/ppo-Pyramids
|
lucasbertola
| 2023-07-16T17:26:30Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-16T17:26:24Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: lucasbertola/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
JianPublisher/modeltest
|
JianPublisher
| 2023-07-16T17:25:48Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-16T17:20:11Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: modeltest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modeltest
This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jayantdocplix/falcon_model_finetuned
|
jayantdocplix
| 2023-07-16T17:25:44Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T19:29:45Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
odunola/transcriber-t5-v8-new
|
odunola
| 2023-07-16T17:23:29Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-16T16:37:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: transcriber-t5-v8-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# transcriber-t5-v8-new
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1008 | 0.72 | 500 | 0.1306 |
| 0.069 | 1.43 | 1000 | 0.1227 |
| 0.1052 | 2.15 | 1500 | 0.1209 |
| 0.1017 | 2.86 | 2000 | 0.0992 |
| 0.0828 | 3.58 | 2500 | 0.0919 |
| 0.0471 | 4.29 | 3000 | 0.0927 |
| 0.0769 | 5.01 | 3500 | 0.0849 |
| 0.0732 | 5.72 | 4000 | 0.0862 |
| 0.0801 | 6.44 | 4500 | 0.0857 |
| 0.0428 | 7.15 | 5000 | 0.0815 |
| 0.1119 | 7.87 | 5500 | 0.0790 |
| 0.0692 | 8.58 | 6000 | 0.0780 |
| 0.0684 | 9.3 | 6500 | 0.0818 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ailabturkiye/sda
|
ailabturkiye
| 2023-07-16T17:19:15Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T17:00:01Z |
---
license: openrail
language:
- tr
tags:
- music
---
|
DanGalt/speecht5_finetuned_voxpopuli_fi
|
DanGalt
| 2023-07-16T17:11:18Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"fi",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-16T17:07:04Z |
---
language:
- fi
license: mit
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_fi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_fi
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4436
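A minimal text-to-speech sketch, assuming the standard SpeechT5 setup with the Microsoft HiFi-GAN vocoder and a CMU Arctic x-vector as the speaker embedding:

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("DanGalt/speecht5_finetuned_voxpopuli_fi")
model = SpeechT5ForTextToSpeech.from_pretrained("DanGalt/speecht5_finetuned_voxpopuli_fi")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any suitable x-vector works; this one comes from the CMU Arctic x-vector dataset.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hyvää huomenta", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```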
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 150
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.504 | 5.05 | 250 | 0.4645 |
| 0.4882 | 10.1 | 500 | 0.4499 |
| 0.467 | 15.15 | 750 | 0.4450 |
| 0.4651 | 20.2 | 1000 | 0.4436 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
KingKazma/xsum_t5-small_prompt_tuning_500_10_3000_8_e-1_s55555_v3_manual
|
KingKazma
| 2023-07-16T17:02:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T17:02:55Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
ailabturkiye/azizyildirim
|
ailabturkiye
| 2023-07-16T16:47:56Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T16:37:16Z |
---
license: openrail
language:
- tr
tags:
- music
---
|
iworeushankaonce/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
|
iworeushankaonce
| 2023-07-16T16:35:53Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-16T15:19:49Z |
---
license: bsd-3-clause
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3882
- Accuracy: 0.9
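A short inference sketch via the audio-classification pipeline (the file path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="iworeushankaonce/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)
# "song.wav" is a placeholder path to a local audio file.
print(classifier("song.wav"))
```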
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4932 | 1.0 | 112 | 0.5325 | 0.86 |
| 0.3541 | 2.0 | 225 | 0.6068 | 0.77 |
| 0.5743 | 3.0 | 337 | 0.6356 | 0.83 |
| 0.6256 | 4.0 | 450 | 0.4878 | 0.86 |
| 0.0619 | 5.0 | 562 | 0.4262 | 0.88 |
| 0.0044 | 6.0 | 675 | 0.3266 | 0.91 |
| 0.0018 | 7.0 | 787 | 0.4827 | 0.87 |
| 0.001 | 8.0 | 900 | 0.9245 | 0.82 |
| 0.1854 | 9.0 | 1012 | 0.4256 | 0.89 |
| 0.0001 | 10.0 | 1125 | 0.3898 | 0.9 |
| 0.0001 | 11.0 | 1237 | 0.3873 | 0.9 |
| 0.0001 | 12.0 | 1350 | 0.4064 | 0.91 |
| 0.0 | 13.0 | 1462 | 0.3910 | 0.9 |
| 0.0 | 14.0 | 1575 | 0.3924 | 0.9 |
| 0.0001 | 15.0 | 1687 | 0.3917 | 0.91 |
| 0.0 | 16.0 | 1800 | 0.3903 | 0.9 |
| 0.0 | 17.0 | 1912 | 0.3900 | 0.89 |
| 0.0 | 18.0 | 2025 | 0.3894 | 0.89 |
| 0.0 | 19.0 | 2137 | 0.3886 | 0.9 |
| 0.0 | 19.91 | 2240 | 0.3882 | 0.9 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
WasuratS/whisper-tiny-en-finetune-minds14
|
WasuratS
| 2023-07-16T16:33:30Z | 90 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-16T13:49:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en-finetune-minds14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3382526564344746
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en-finetune-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6541
- Wer Ortho: 0.3399
- Wer: 0.3383
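A short transcription sketch using the ASR pipeline (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="WasuratS/whisper-tiny-en-finetune-minds14")
# "call.wav" is a placeholder path to a local recording.
print(asr("call.wav")["text"])
```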
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.3136 | 3.57 | 100 | 0.4883 | 0.3640 | 0.3524 |
| 0.0417 | 7.14 | 200 | 0.5146 | 0.3560 | 0.3442 |
| 0.0066 | 10.71 | 300 | 0.5736 | 0.3411 | 0.3353 |
| 0.0017 | 14.29 | 400 | 0.6040 | 0.3455 | 0.3418 |
| 0.0013 | 17.86 | 500 | 0.6226 | 0.3393 | 0.3365 |
| 0.0009 | 21.43 | 600 | 0.6352 | 0.3393 | 0.3365 |
| 0.0007 | 25.0 | 700 | 0.6436 | 0.3399 | 0.3371 |
| 0.0006 | 28.57 | 800 | 0.6492 | 0.3399 | 0.3383 |
| 0.0006 | 32.14 | 900 | 0.6530 | 0.3399 | 0.3383 |
| 0.0006 | 35.71 | 1000 | 0.6541 | 0.3399 | 0.3383 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
cassandraqs/shan_homework1
|
cassandraqs
| 2023-07-16T16:29:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T16:29:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_t5-small_prompt_tuning_500_10_3000_8_e-1_s6789_v3_manual
|
KingKazma
| 2023-07-16T16:23:45Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-16T16:23:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
localmodels/LLaMA-65B-ggml
|
localmodels
| 2023-07-16T16:22:41Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-07-16T16:22:41Z |
---
duplicated_from: localmodels/LLM
---
# LLaMA 65B ggml
From Meta: https://ai.meta.com/blog/large-language-model-llama-meta-ai
---
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
Quantized using an older version of llama.cpp and compatible with llama.cpp from May 19, commit 2d5db48.
### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
Quantization methods compatible with latest llama.cpp from June 6, commit 2d43387.
---
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| llama-65b.ggmlv3.q2_K.bin | q2_K | 2 | 27.33 GB| 29.83 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llama-65b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 34.55 GB| 37.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-65b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 31.40 GB| 33.90 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-65b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 28.06 GB| 30.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| llama-65b.ggmlv3.q4_0.bin | q4_0 | 4 | 36.73 GB| 39.23 GB | Original quant method, 4-bit. |
| llama-65b.ggmlv3.q4_1.bin | q4_1 | 4 | 40.81 GB| 43.31 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| llama-65b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 39.28 GB| 41.78 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| llama-65b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 36.73 GB| 39.23 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| llama-65b.ggmlv3.q5_0.bin | q5_0 | 5 | 44.89 GB| 47.39 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| llama-65b.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB| 51.47 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| llama-65b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 46.20 GB| 48.70 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| llama-65b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 44.89 GB| 47.39 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| llama-65b.ggmlv3.q6_K.bin | q6_K | 6 | 53.56 GB| 56.06 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| llama-65b.ggmlv3.q8_0.bin | q8_0 | 8 | 69.370 GB | 71.87 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
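One way to run these files locally is through the llama-cpp-python bindings (a sketch; assumes the package is installed, a quant file has been downloaded, and that the installed version still supports ggmlv3 files):

```python
from llama_cpp import Llama

# Path and parameters are illustrative; point model_path at whichever quant you downloaded.
llm = Llama(model_path="./llama-65b.ggmlv3.q4_K_M.bin", n_ctx=2048)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```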
|
ailabturkiye/13killoki
|
ailabturkiye
| 2023-07-16T16:19:28Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T16:09:13Z |
---
license: openrail
language:
- tr
tags:
- music
---
This model was made by me from 13Killoki's StereoBound Song Story video. It is suitable for speech.
|
ailabturkiye/Joker
|
ailabturkiye
| 2023-07-16T16:17:15Z | 0 | 1 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-16T15:22:06Z |
---
license: openrail
---
[](discord.gg/ailab)


# Joker - RVC V2 300 Epoch
**This is the voice model of rapper Joker,
trained with RVC V2 for 300 epochs.**
_The dataset and training were done by me._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the openrail license.__
## Credits
**If you share a cover made with this model on any platform, you are kindly asked to give credits.**
- Discord: barisdark0
- YouTube: Barış (https://www.youtube.com/@barisdark)

[](discord.gg/ailab)
|
ailabturkiye/KadirMisiroglu
|
ailabturkiye
| 2023-07-16T16:17:02Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T16:13:31Z |
---
license: openrail
language:
- tr
tags:
- music
---
I take no responsibility for any audio created using this model.
|
casque/Ultimate_ahegao
|
casque
| 2023-07-16T16:16:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T16:14:24Z |
---
license: creativeml-openrail-m
---
|
ailabturkiye/NormEnder
|
ailabturkiye
| 2023-07-16T16:14:49Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-16T15:49:59Z |
---
license: openrail
---
[](discord.gg/ailab)


# Ceza - RVC V2 500 Epoch
**This is the voice model of rapper Ceza,
trained with RVC V2 for 500 epochs.**
_The dataset and training were done by me._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the openrail license.__
## Credits
**If you share a cover made with this model on any platform, you are kindly asked to give credits.**
- Discord: barisdark0
- YouTube: Barış (https://www.youtube.com/@barisdark)

[](discord.gg/ailab)
|
ailabturkiye/Beta
|
ailabturkiye
| 2023-07-16T16:13:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-16T16:04:26Z |
[](discord.gg/ailab)


# Beta Berk Bayındır (3B) - RVC V2 500 Epoch
**This is the voice model of Beta Berk Bayındır,
trained with RVC V2 for 500 epochs.**
_The dataset and training were done by me._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the openrail license.__
## Credits
**If you share a cover made with this model on any platform, you are kindly asked to give credits.**
- Discord: efemekkuin
- YouTube: Ahmet Efe (https://www.youtube.com/channel/UCw40vAQRF8551rMWem6CaMg)

[](discord.gg/ailab)

|
ailabturkiye/AliErbas
|
ailabturkiye
| 2023-07-16T16:11:53Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T16:09:33Z |
---
license: openrail
language:
- tr
tags:
- music
---
Mr. Ali Erbaş, President of the Directorate of Religious Affairs (Diyanet). I take no responsibility for any audio created using this model.
|
casque/AfterSexMS
|
casque
| 2023-07-16T16:09:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T16:07:19Z |
---
license: creativeml-openrail-m
---
|
n0n1m/rvc-krosh
|
n0n1m
| 2023-07-16T16:08:15Z | 0 | 0 | null |
[
"audio-to-audio",
"license:openrail",
"region:us"
] |
audio-to-audio
| 2023-07-15T17:45:37Z |
---
license: openrail
pipeline_tag: audio-to-audio
---
Just a model of Krash from Kikoriki/Gogoriki or Krosh from Smeshariki
|
tyavika/Bert-QA-Pytorch-FULL
|
tyavika
| 2023-07-16T16:05:57Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-28T02:19:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Bert-QA-Pytorch-FULL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert-QA-Pytorch-FULL
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1633 | 1.0 | 3290 | 1.0515 |
| 0.8061 | 2.0 | 6580 | 1.0593 |
| 0.533 | 3.0 | 9870 | 1.2154 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ailabturkiye/NecmettinErbakan
|
ailabturkiye
| 2023-07-16T16:05:52Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T16:02:15Z |
---
license: openrail
language:
- tr
tags:
- music
---
I take no responsibility for any audio created using this model.
|
casque/Creampie_v11
|
casque
| 2023-07-16T16:05:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T16:03:25Z |
---
license: creativeml-openrail-m
---
|
ailabturkiye/deepturkisherdi
|
ailabturkiye
| 2023-07-16T16:05:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-16T16:04:08Z |
---
license: openrail
language:
- tr
tags:
- music
---
deepturkisherdi, 500 epochs
|