| column | dtype | range / values |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-29 18:27:57 |
| downloads | int64 | 0–223M |
| likes | int64 | 0–11.7k |
| library_name | string | 535 classes |
| tags | list | lengths 1–4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-29 18:26:50 |
| card | string | lengths 11–1.01M |
nbogdan/flant5-large-0ex-paraphrasing-1epochs
|
nbogdan
| 2023-09-05T02:59:05Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T02:58:14Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-large-0ex-paraphrasing-1epochs` for google/flan-t5-large
An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-large")
adapter_name = model.load_adapter("nbogdan/flant5-large-0ex-paraphrasing-1epochs", source="hf", set_active=True)
```
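After loading, the model can be used like any other seq2seq model. Below is a minimal generation sketch, assuming the adapter checkpoint ships a seq2seq LM head and that a "Paraphrase:"-style prompt is appropriate (neither is documented on this card):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")

# Hypothetical prompt: the exact input format used when training this adapter is not documented here.
inputs = tokenizer("Paraphrase: The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```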
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
team-lucid/hubert-base-korean
|
team-lucid
| 2023-09-05T02:55:16Z | 441 | 25 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"hubert",
"feature-extraction",
"speech",
"audio",
"automatic-speech-recognition",
"custom_code",
"ko",
"arxiv:2106.07447",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-29T12:00:30Z |
---
license: apache-2.0
language:
- ko
library_name: transformers
pipeline_tag: automatic-speech-recognition
tags:
- speech
- audio
---
# hubert-base-korean
## Model Details
Hubert (Hidden-Unit BERT) is a speech representation learning model proposed by Facebook.
Unlike conventional speech recognition models, Hubert uses a self-supervised learning approach that learns directly from the raw waveform of the speech signal.
This model was trained on Cloud TPUs provided through Google's TPU Research Cloud (TRC).
### Model Description
<table>
<tr>
<td colspan="2"></td>
<td>Base</td>
<td>Large</td>
</tr>
<tr>
<td rowspan="3">CNN Encoder</td>
<td>strides</td>
<td colspan="2">5, 2, 2, 2, 2, 2, 2</td>
</tr>
<tr>
<td>kernel width</td>
<td colspan="2">10, 3, 3, 3, 3, 2, 2</td>
</tr>
<tr>
<td>channel</td>
<td colspan="2">512</td>
</tr>
<tr>
<td rowspan="4">Transformer Encoder</td>
<td>Layer</td>
<td>12</td>
<td>24</td>
</tr>
<tr>
<td>embedding dim</td>
<td>768</td>
<td>1024</td>
</tr>
<tr>
<td>inner FFN dim</td>
<td>3072</td>
<td>4096</td>
</tr>
<tr>
<td>attention heads</td>
<td>8</td>
<td>16</td>
</tr>
<tr>
<td>Projection</td>
<td>dim</td>
<td>256</td>
<td>768</td>
</tr>
<tr>
<td colspan="2">Params</td>
<td>95M</td>
<td>317M </td>
</tr>
</table>
## How to Get Started with the Model
### Pytorch
```py
import torch
from transformers import HubertModel
model = HubertModel.from_pretrained("team-lucid/hubert-base-korean")
wav = torch.ones(1, 16000)
outputs = model(wav)
print(f"Input: {wav.shape}") # [1, 16000]
print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 768]
```
### JAX/Flax
```py
import jax.numpy as jnp
from transformers import FlaxAutoModel
model = FlaxAutoModel.from_pretrained("team-lucid/hubert-base-korean", trust_remote_code=True)
wav = jnp.ones((1, 16000))
outputs = model(wav)
print(f"Input: {wav.shape}") # [1, 16000]
print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 768]
```
## Training Details
### Training Data
This model was trained on roughly 4,000 hours of speech extracted from [Free Conversation Speech (General Male/Female)](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=109), [Multi-speaker Speech Synthesis Data](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=542), and [Broadcast Content Conversational Speech Recognition Data](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=463), datasets built with the support of the National Information Society Agency (NIA) under a project of the Korean Ministry of Science and ICT.
### Training Procedure
Following the [original paper](https://arxiv.org/pdf/2106.07447.pdf), the Base model was first trained on MFCC-based targets; k-means with 500 clusters was then performed, and the Base and Large models were trained again on the resulting units, as sketched below.
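For intuition, the first-iteration target generation (MFCC frame features clustered with k-means) can be illustrated with `librosa` and `scikit-learn`. This is an illustrative sketch only, not the training code used for this checkpoint:
```python
# Illustrative only: reproduces the idea of HuBERT's first-iteration target generation
# (MFCC features clustered with k-means), not the actual pipeline behind this checkpoint.
import numpy as np
import librosa
from sklearn.cluster import KMeans

def mfcc_frames(wav: np.ndarray, sr: int = 16000) -> np.ndarray:
    # Returns (frames, n_mfcc); each frame is later assigned to one of the k-means units.
    return librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=13).T

# Cluster frames pooled from the corpus into 500 units (dummy audio stands in for real speech).
corpus = [np.random.randn(16000 * 10) for _ in range(10)]
frames = np.concatenate([mfcc_frames(w) for w in corpus])
kmeans = KMeans(n_clusters=500, n_init=10).fit(frames)

# The per-frame cluster ids become the masked-prediction targets for the next training round.
targets = kmeans.predict(mfcc_frames(np.random.randn(16000 * 10)))
```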
#### Training Hyperparameters
| Hyperparameter | Base | Large |
|:--------------------|---------|--------:|
| Warmup Steps | 32,000 | 32,000 |
| Learning Rates | 5e-4 | 1.5e-3 |
| Batch Size | 128 | 128 |
| Weight Decay | 0.01 | 0.01 |
| Max Steps | 400,000 | 400,000 |
| Learning Rate Decay | 0.1 | 0.1 |
| Adam \\(\beta_1\\)   | 0.9     | 0.9     |
| Adam \\(\beta_2\\)   | 0.99    | 0.99    |
|
Lilth05/ppo-LunarLander-v2
|
Lilth05
| 2023-09-05T02:44:23Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-05T02:44:02Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.09 +/- 20.92
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed to follow the usual "<algo>-<env>.zip" convention of the SB3 Hub integration.
checkpoint = load_from_hub(repo_id="Lilth05/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dmatekenya/wav2vec2-large-xls-r-1b-chichewa
|
dmatekenya
| 2023-09-05T02:41:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-1b",
"base_model:finetune:facebook/wav2vec2-xls-r-1b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T20:01:56Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-1b
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-1b-chichewa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-chichewa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.8481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
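As a rough illustration (not the original training script), these settings map onto `transformers.TrainingArguments` approximately as follows; the data collator, processor, and model setup are not part of this card:
```python
from transformers import TrainingArguments

# Approximate reconstruction of the settings listed above.
# Adam betas/epsilon match the Trainer defaults, so they are not set explicitly.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-1b-chichewa",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
)
```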
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2567 | 3.51 | 400 | inf | 0.9449 |
| 1.476 | 7.02 | 800 | inf | 0.8481 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
alex1qaz/bert-finetuned-goodsmemo-ner
|
alex1qaz
| 2023-09-05T02:30:43Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:goodsmemo",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-05T02:16:14Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- goodsmemo
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-goodsmemo-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: goodsmemo
type: goodsmemo
config: googdsmemo
split: validation
args: googdsmemo
metrics:
- name: Precision
type: precision
value: 0.14545454545454545
- name: Recall
type: recall
value: 0.14953271028037382
- name: F1
type: f1
value: 0.14746543778801846
- name: Accuracy
type: accuracy
value: 0.9293815536058206
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-goodsmemo-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the goodsmemo dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1899
- Precision: 0.1455
- Recall: 0.1495
- F1: 0.1475
- Accuracy: 0.9294
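For a quick check, the model can be queried through the token-classification pipeline. A hedged sketch (the goodsmemo label set is not documented on this card, and the example sentence is arbitrary):
```python
from transformers import pipeline

# The entity names in the output are whatever labels the fine-tuned head defines.
ner = pipeline(
    "token-classification",
    model="alex1qaz/bert-finetuned-goodsmemo-ner",
    aggregation_strategy="simple",
)
print(ner("I bought two bottles of orange juice at the corner store."))
```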
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 46 | 0.3317 | 0.0 | 0.0 | 0.0 | 0.9018 |
| No log | 2.0 | 92 | 0.3051 | 0.0090 | 0.0280 | 0.0137 | 0.8640 |
| No log | 3.0 | 138 | 0.2561 | 0.0207 | 0.0467 | 0.0287 | 0.8966 |
| No log | 4.0 | 184 | 0.2345 | 0.0383 | 0.0748 | 0.0506 | 0.9118 |
| No log | 5.0 | 230 | 0.2319 | 0.0491 | 0.1028 | 0.0665 | 0.9018 |
| No log | 6.0 | 276 | 0.2108 | 0.1085 | 0.1308 | 0.1186 | 0.9245 |
| No log | 7.0 | 322 | 0.2042 | 0.1181 | 0.1402 | 0.1282 | 0.9268 |
| No log | 8.0 | 368 | 0.2077 | 0.1262 | 0.1215 | 0.1238 | 0.9263 |
| No log | 9.0 | 414 | 0.1951 | 0.1524 | 0.1495 | 0.1509 | 0.9297 |
| No log | 10.0 | 460 | 0.1899 | 0.1455 | 0.1495 | 0.1475 | 0.9294 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
FriedGil/distillBERT-misinformation-classifier
|
FriedGil
| 2023-09-05T02:25:43Z | 144 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-05T01:33:54Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distillBERT-misinformation-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillBERT-misinformation-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the Kaggle Fake News dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0094
- Accuracy: 0.9978
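A minimal inference sketch (the mapping of the output labels to "real"/"fake" is not documented here, and the example sentence is arbitrary):
```python
from transformers import pipeline

# Returns the fine-tuned head's label and a confidence score for the input text.
classifier = pipeline("text-classification", model="FriedGil/distillBERT-misinformation-classifier")
print(classifier("Scientists announce the moon is made of cheese."))
```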
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1411 | 1.0 | 800 | 0.0104 | 0.9974 |
| 0.0101 | 2.0 | 1600 | 0.0094 | 0.9978 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_backtranslation-2
|
ThuyNT03
| 2023-09-05T02:23:18Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T23:35:42Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_backtranslation-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_backtranslation-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3357
- Accuracy: 0.66
- F1: 0.6673
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0185 | 1.0 | 86 | 0.8134 | 0.65 | 0.5550 |
| 0.6948 | 2.0 | 172 | 0.9228 | 0.65 | 0.6376 |
| 0.5272 | 3.0 | 258 | 0.9715 | 0.69 | 0.6920 |
| 0.3985 | 4.0 | 344 | 1.0097 | 0.7 | 0.7042 |
| 0.3273 | 5.0 | 430 | 1.0340 | 0.7 | 0.7067 |
| 0.2035 | 6.0 | 516 | 1.1582 | 0.68 | 0.6870 |
| 0.1705 | 7.0 | 602 | 1.2932 | 0.66 | 0.6673 |
| 0.1303 | 8.0 | 688 | 1.3357 | 0.66 | 0.6673 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
kuokxuen/marketmail
|
kuokxuen
| 2023-09-05T02:20:51Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T02:20:49Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a matching loading sketch follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
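A minimal sketch of how a matching quantization config could be built when loading the base model for this adapter. The base checkpoint is not named on this card, so `"BASE_MODEL_ID"` is a placeholder and a causal LM is assumed:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the 8-bit settings listed above; the 4-bit fields (fp4, compute dtype) are inactive
# when load_in_8bit=True. "BASE_MODEL_ID" is a placeholder for the undocumented base checkpoint.
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)
base = AutoModelForCausalLM.from_pretrained("BASE_MODEL_ID", quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "kuokxuen/marketmail")
```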
### Framework versions
- PEFT 0.6.0.dev0
|
Guanglong/mojing-llm-7b
|
Guanglong
| 2023-09-05T02:05:44Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-02T01:20:00Z |
---
license: apache-2.0
---
We used the [mojing-llm dataset](https://huggingface.co/datasets/Guanglong/mojing-llm) to SFT-finetune llama-2-7b into this model.
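A minimal generation sketch (the prompt template used for SFT is not documented on this card, so the example prompt is arbitrary):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loads the checkpoint and generates a short continuation of a plain-text prompt.
tokenizer = AutoTokenizer.from_pretrained("Guanglong/mojing-llm-7b")
model = AutoModelForCausalLM.from_pretrained("Guanglong/mojing-llm-7b", device_map="auto")
inputs = tokenizer("What is supervised fine-tuning?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```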
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_replace_tfidf-2
|
ThuyNT03
| 2023-09-05T02:04:06Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T23:19:26Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_replace_tfidf-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_replace_tfidf-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8468
- Accuracy: 0.69
- F1: 0.6959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0982 | 1.0 | 87 | 0.9995 | 0.47 | 0.4137 |
| 0.8884 | 2.0 | 174 | 0.7521 | 0.65 | 0.6032 |
| 0.7533 | 3.0 | 261 | 0.7130 | 0.64 | 0.6364 |
| 0.6259 | 4.0 | 348 | 0.7598 | 0.68 | 0.6865 |
| 0.5278 | 5.0 | 435 | 0.7066 | 0.7 | 0.7053 |
| 0.4336 | 6.0 | 522 | 0.7901 | 0.7 | 0.7060 |
| 0.3516 | 7.0 | 609 | 0.8106 | 0.69 | 0.6976 |
| 0.2859 | 8.0 | 696 | 0.8468 | 0.69 | 0.6959 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Amber9722/sd-class-butterflies-32
|
Amber9722
| 2023-09-05T01:56:18Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-09-05T01:56:09Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Amber9722/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_replace_w2v-2
|
ThuyNT03
| 2023-09-05T01:54:32Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T23:09:25Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_replace_w2v-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_replace_w2v-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9125
- Accuracy: 0.71
- F1: 0.7091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.101 | 1.0 | 84 | 1.0494 | 0.46 | 0.3728 |
| 0.9323 | 2.0 | 168 | 0.7962 | 0.59 | 0.5689 |
| 0.7109 | 3.0 | 252 | 0.7447 | 0.71 | 0.7004 |
| 0.587 | 4.0 | 336 | 0.7251 | 0.71 | 0.7104 |
| 0.4611 | 5.0 | 420 | 0.8001 | 0.68 | 0.6770 |
| 0.3668 | 6.0 | 504 | 0.8589 | 0.72 | 0.7229 |
| 0.291 | 7.0 | 588 | 0.8900 | 0.69 | 0.6894 |
| 0.2505 | 8.0 | 672 | 0.9125 | 0.71 | 0.7091 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nbogdan/flant5-base-2ex-bridging-1epochs
|
nbogdan
| 2023-09-05T01:49:59Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T01:44:00Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-2ex-bridging-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-2ex-bridging-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
thissayantan/dreambooth-sayantan
|
thissayantan
| 2023-09-05T01:48:42Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-05T01:48:40Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of sayantan person
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
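A minimal inference sketch, assuming the repository stores the DreamBooth result as diffusers-format LoRA weights (which is what AutoTrain DreamBooth typically produces):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumes this repo contains diffusers-format LoRA weights on top of the SDXL base model.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("thissayantan/dreambooth-sayantan")

image = pipe("photo of sayantan person", num_inference_steps=25).images[0]
image.save("sayantan.png")
```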
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_insert_BERT-2
|
ThuyNT03
| 2023-09-05T01:32:37Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T22:51:06Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_insert_BERT-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_insert_BERT-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1237
- Accuracy: 0.71
- F1: 0.7165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0509 | 1.0 | 87 | 0.8383 | 0.59 | 0.5441 |
| 0.7214 | 2.0 | 174 | 0.7218 | 0.72 | 0.72 |
| 0.5758 | 3.0 | 261 | 0.7535 | 0.69 | 0.6956 |
| 0.4321 | 4.0 | 348 | 0.7413 | 0.73 | 0.7360 |
| 0.3364 | 5.0 | 435 | 0.8328 | 0.72 | 0.7269 |
| 0.2712 | 6.0 | 522 | 0.9267 | 0.72 | 0.7255 |
| 0.1902 | 7.0 | 609 | 1.0811 | 0.7 | 0.7074 |
| 0.1351 | 8.0 | 696 | 1.1237 | 0.71 | 0.7165 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
honglinggoh/product_desc_marketing_email
|
honglinggoh
| 2023-09-05T01:28:23Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T01:28:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
duwuonline/my-translation-helsinki2
|
duwuonline
| 2023-09-05T01:27:43Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:duwuonline/my-translation-helsinki",
"base_model:finetune:duwuonline/my-translation-helsinki",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-05T01:00:23Z |
---
license: apache-2.0
base_model: duwuonline/my-translation-helsinki
tags:
- generated_from_trainer
model-index:
- name: my-translation-helsinki2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-translation-helsinki2
This model is a fine-tuned version of [duwuonline/my-translation-helsinki](https://huggingface.co/duwuonline/my-translation-helsinki) on the None dataset.
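A quick inference sketch (the card does not state the language pair, so the example input is arbitrary):
```python
from transformers import pipeline

# Runs the fine-tuned Marian model through the generic text2text-generation pipeline.
translator = pipeline("text2text-generation", model="duwuonline/my-translation-helsinki2")
print(translator("Hello, how are you today?"))
```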
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ckandemir/bert-base-uncased-issues-128
|
ckandemir
| 2023-09-05T01:26:28Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-04T19:45:22Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2137
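The checkpoint is a masked-language-model fine-tune, so fill-mask is the natural way to probe it. A hedged sketch (the example sentence is arbitrary):
```python
from transformers import pipeline

# Returns the top candidate tokens for the [MASK] position with their scores.
fill = pipeline("fill-mask", model="ckandemir/bert-base-uncased-issues-128")
print(fill("This issue is related to the [MASK] module."))
```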
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0966 | 1.0 | 291 | 1.6190 |
| 1.6197 | 2.0 | 582 | 1.5317 |
| 1.485 | 3.0 | 873 | 1.4164 |
| 1.3992 | 4.0 | 1164 | 1.4064 |
| 1.3219 | 5.0 | 1455 | 1.3900 |
| 1.2851 | 6.0 | 1746 | 1.2096 |
| 1.2328 | 7.0 | 2037 | 1.3019 |
| 1.2113 | 8.0 | 2328 | 1.2779 |
| 1.1674 | 9.0 | 2619 | 1.2312 |
| 1.1443 | 10.0 | 2910 | 1.1830 |
| 1.1171 | 11.0 | 3201 | 1.1692 |
| 1.1067 | 12.0 | 3492 | 1.2364 |
| 1.0846 | 13.0 | 3783 | 1.1871 |
| 1.0815 | 14.0 | 4074 | 1.1354 |
| 1.054 | 15.0 | 4365 | 1.1771 |
| 1.0565 | 16.0 | 4656 | 1.2137 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
JasonTheDeveloper/squad-bloom-3b
|
JasonTheDeveloper
| 2023-09-05T01:21:55Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T01:21:53Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_insert_w2v-2
|
ThuyNT03
| 2023-09-05T01:13:16Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T22:32:59Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_insert_w2v-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_insert_w2v-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1138
- Accuracy: 0.75
- F1: 0.7539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0576 | 1.0 | 85 | 0.8693 | 0.6 | 0.5283 |
| 0.7822 | 2.0 | 170 | 0.8331 | 0.69 | 0.6665 |
| 0.6156 | 3.0 | 255 | 0.7210 | 0.72 | 0.7194 |
| 0.4447 | 4.0 | 340 | 0.8139 | 0.66 | 0.6645 |
| 0.3252 | 5.0 | 425 | 0.9348 | 0.67 | 0.6776 |
| 0.2105 | 6.0 | 510 | 0.9185 | 0.77 | 0.7718 |
| 0.1437 | 7.0 | 595 | 1.0530 | 0.75 | 0.7539 |
| 0.1479 | 8.0 | 680 | 1.1138 | 0.75 | 0.7539 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Murphy08/marketmail
|
Murphy08
| 2023-09-05T01:12:49Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T01:12:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
schrilax/marketing_email
|
schrilax
| 2023-09-05T01:10:46Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-09-05T00:43:50Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MichelNivard/codellama_Rbase_instr
|
MichelNivard
| 2023-09-05T01:08:30Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-01T10:24:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a matching 4-bit loading sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
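A rough code equivalent of these 4-bit (QLoRA-style) settings, for loading a base model the same way. The base checkpoint is not named on this card, so `"BASE_MODEL_ID"` is a placeholder:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the nf4 / double-quant / bfloat16 settings listed above.
# "BASE_MODEL_ID" is a placeholder for the undocumented base checkpoint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained("BASE_MODEL_ID", quantization_config=bnb_config, device_map="auto")
```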
### Framework versions
- PEFT 0.5.0
|
jschew39/marketmail
|
jschew39
| 2023-09-05T01:07:57Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T01:07:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
RafaelMayer/roberta-copec-2
|
RafaelMayer
| 2023-09-05T01:06:47Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"base_model:PlanTL-GOB-ES/roberta-base-bne",
"base_model:finetune:PlanTL-GOB-ES/roberta-base-bne",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-05T01:05:40Z |
---
license: apache-2.0
base_model: PlanTL-GOB-ES/roberta-base-bne
tags:
- generated_from_keras_callback
model-index:
- name: RafaelMayer/roberta-copec-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RafaelMayer/roberta-copec-2
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6476
- Validation Loss: 0.6356
- Train Accuracy: 0.7647
- Train Precision: [0. 0.76470588]
- Train Precision W: 0.5848
- Train Recall: [0. 1.]
- Train Recall W: 0.7647
- Train F1: [0. 0.86666667]
- Train F1 W: 0.6627
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 5, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Precision W | Train Recall | Train Recall W | Train F1 | Train F1 W | Epoch |
|:----------:|:---------------:|:--------------:|:-----------------------:|:-----------------:|:------------:|:--------------:|:-----------------------:|:----------:|:-----:|
| 0.6476 | 0.6356 | 0.7647 | [0. 0.76470588] | 0.5848 | [0. 1.] | 0.7647 | [0. 0.86666667] | 0.6627 | 1 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
TonySky/results
|
TonySky
| 2023-09-05T01:04:37Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-09-05T01:04:19Z |
---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 5000
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_insert_synonym-2
|
ThuyNT03
| 2023-09-05T01:02:18Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T22:22:38Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_insert_synonym-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_insert_synonym-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3966
- Accuracy: 0.67
- F1: 0.6754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0222 | 1.0 | 87 | 0.8095 | 0.65 | 0.6380 |
| 0.6487 | 2.0 | 174 | 0.7375 | 0.67 | 0.6640 |
| 0.4554 | 3.0 | 261 | 0.7962 | 0.71 | 0.7084 |
| 0.3194 | 4.0 | 348 | 0.8102 | 0.71 | 0.7161 |
| 0.2303 | 5.0 | 435 | 1.1793 | 0.65 | 0.6607 |
| 0.1728 | 6.0 | 522 | 1.1697 | 0.72 | 0.7245 |
| 0.127 | 7.0 | 609 | 1.3509 | 0.69 | 0.6943 |
| 0.0927 | 8.0 | 696 | 1.3966 | 0.67 | 0.6754 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_backtranslation-2
|
ThuyNT03
| 2023-09-05T00:58:55Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-05T00:51:23Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_backtranslation-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_backtranslation-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1103
- Accuracy: 0.74
- F1: 0.7315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0442 | 1.0 | 87 | 0.7191 | 0.69 | 0.6652 |
| 0.7545 | 2.0 | 174 | 0.6726 | 0.73 | 0.7264 |
| 0.5743 | 3.0 | 261 | 0.6634 | 0.72 | 0.7157 |
| 0.4342 | 4.0 | 348 | 0.7801 | 0.73 | 0.7270 |
| 0.3244 | 5.0 | 435 | 0.8782 | 0.75 | 0.7438 |
| 0.2421 | 6.0 | 522 | 1.0173 | 0.73 | 0.7235 |
| 0.167 | 7.0 | 609 | 1.0822 | 0.75 | 0.7431 |
| 0.1546 | 8.0 | 696 | 1.1103 | 0.74 | 0.7315 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_replace_BERT-2
|
ThuyNT03
| 2023-09-05T00:51:16Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-05T00:43:25Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_replace_BERT-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_replace_BERT-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7628
- Accuracy: 0.74
- F1: 0.7395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0907 | 1.0 | 88 | 0.9363 | 0.6 | 0.4898 |
| 0.8904 | 2.0 | 176 | 0.7224 | 0.67 | 0.6422 |
| 0.7711 | 3.0 | 264 | 0.6336 | 0.75 | 0.7461 |
| 0.6682 | 4.0 | 352 | 0.6622 | 0.75 | 0.7403 |
| 0.5583 | 5.0 | 440 | 0.6570 | 0.77 | 0.7647 |
| 0.4586 | 6.0 | 528 | 0.7534 | 0.77 | 0.7689 |
| 0.4152 | 7.0 | 616 | 0.7863 | 0.73 | 0.7279 |
| 0.3482 | 8.0 | 704 | 0.7628 | 0.74 | 0.7395 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nbogdan/flant5-base-2ex-elaboration-1epochs
|
nbogdan
| 2023-09-05T00:45:42Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T00:40:36Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-2ex-elaboration-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-2ex-elaboration-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_replace_tfidf-2
|
ThuyNT03
| 2023-09-05T00:43:19Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-05T00:35:33Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_replace_tfidf-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_replace_tfidf-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7407
- Accuracy: 0.78
- F1: 0.7740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.085 | 1.0 | 88 | 0.9923 | 0.66 | 0.6391 |
| 0.9033 | 2.0 | 176 | 0.6803 | 0.74 | 0.7342 |
| 0.7906 | 3.0 | 264 | 0.7208 | 0.71 | 0.6992 |
| 0.6859 | 4.0 | 352 | 0.6374 | 0.75 | 0.7483 |
| 0.5591 | 5.0 | 440 | 0.7554 | 0.76 | 0.7539 |
| 0.4588 | 6.0 | 528 | 0.8309 | 0.74 | 0.7337 |
| 0.3967 | 7.0 | 616 | 0.6894 | 0.81 | 0.8063 |
| 0.3339 | 8.0 | 704 | 0.7407 | 0.78 | 0.7740 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
actionpace/llama-2-13b-chat-limarp-v2-merged
|
actionpace
| 2023-09-05T00:43:09Z | 1 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-02T17:29:04Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* llama-2-13b-chat-limarp-v2-merged_Q5_1_4K.gguf
* llama-2-13b-chat-limarp-v2-merged_Q5_1_8K.gguf
**Source:** [Doctor-Shotgun](https://huggingface.co/Doctor-Shotgun)
**Source Model:** [llama-2-13b-chat-limarp-v2-merged](https://huggingface.co/Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged)
**Source models for Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged (Merge)**
- [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)
- [lemonilia/limarp-llama2-v2](https://huggingface.co/lemonilia/limarp-llama2-v2) (Lora)
**Models utilizing Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged**
- [Undi95/UndiMix-v1-13b](https://huggingface.co/Undi95/UndiMix-v1-13b) ([Ref](https://huggingface.co/actionpace/UndiMix-v1-13b)) (Merge)
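These GGUF files can be run with any llama.cpp-compatible runtime. A minimal llama-cpp-python sketch, assuming the Q5_1 4K-context file has been downloaded locally (the prompt format expected by the merged model is not documented here):
```python
from llama_cpp import Llama

# Loads the local GGUF file with a 4K context window and generates a short completion.
llm = Llama(model_path="llama-2-13b-chat-limarp-v2-merged_Q5_1_4K.gguf", n_ctx=4096)
out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```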
|
actionpace/Hermes-Kimiko-13B-f16
|
actionpace
| 2023-09-05T00:38:56Z | 11 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-05T00:15:11Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* Hermes-Kimiko-13B-f16_Q5_1_4K.gguf
* Hermes-Kimiko-13B-f16_Q5_1_8K.gguf
**Source:** [Blackroot](https://huggingface.co/Blackroot)
**Source Model:** [Hermes-Kimiko-13B-f16](https://huggingface.co/Blackroot/Hermes-Kimiko-13B-f16)
**Source models for Blackroot/Hermes-Kimiko-13B-f16 (Merge)**
- [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) ([Ref](https://huggingface.co/actionpace/Nous-Hermes-Llama2-13b))
- [nRuaif/Kimiko_13B](https://huggingface.co/nRuaif/Kimiko_13B) (Lora)
|
actionpace/FrankensteinsMonster-13B
|
actionpace
| 2023-09-05T00:35:39Z | 5 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-05T00:12:20Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* FrankensteinsMonster-13B_Q5_1_4K.gguf
* FrankensteinsMonster-13B_Q5_1_8K.gguf
**Source:** [Blackroot](https://huggingface.co/Blackroot)
**Source Model:** [FrankensteinsMonster-13B](https://huggingface.co/Blackroot/FrankensteinsMonster-13B)
**Source models for Blackroot/FrankensteinsMonster-13B (Merge)**
- [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) ([Ref](https://huggingface.co/actionpace/Nous-Hermes-Llama2-13b))
- [Blackroot/Llama-2-13B-Storywriter-LORA](https://huggingface.co/Blackroot/Llama-2-13B-Storywriter-LORA) (Lora)
- [lemonilia/limarp-llama2](https://huggingface.co/lemonilia/limarp-llama2) (Lora)
|
RafaelMayer/roberta-copec-1
|
RafaelMayer
| 2023-09-05T00:34:46Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"base_model:PlanTL-GOB-ES/roberta-base-bne",
"base_model:finetune:PlanTL-GOB-ES/roberta-base-bne",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-05T00:26:18Z |
---
license: apache-2.0
base_model: PlanTL-GOB-ES/roberta-base-bne
tags:
- generated_from_keras_callback
model-index:
- name: RafaelMayer/roberta-copec-1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RafaelMayer/roberta-copec-1
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6572
- Validation Loss: 0.6316
- Train Accuracy: 0.8235
- Train Precision: [0. 0.82352941]
- Train Precision W: 0.6782
- Train Recall: [0. 1.]
- Train Recall W: 0.8235
- Train F1: [0. 0.90322581]
- Train F1 W: 0.7438
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 5, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Precision W | Train Recall | Train Recall W | Train F1 | Train F1 W | Epoch |
|:----------:|:---------------:|:--------------:|:-----------------------:|:-----------------:|:------------:|:--------------:|:-----------------------:|:----------:|:-----:|
| 0.6572 | 0.6316 | 0.8235 | [0. 0.82352941] | 0.6782 | [0. 1.] | 0.8235 | [0. 0.90322581] | 0.7438 | 1 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dimitarrskv/rl_course_vizdoom_health_gathering_supreme
|
dimitarrskv
| 2023-09-05T00:23:13Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T11:17:49Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.96 +/- 5.26
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r dimitarrskv/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
RafaelMayer/bert-copec-1
|
RafaelMayer
| 2023-09-05T00:22:58Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T23:55:23Z |
---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_keras_callback
model-index:
- name: RafaelMayer/bert-copec-1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RafaelMayer/bert-copec-1
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1258
- Validation Loss: 0.4666
- Train Accuracy: 0.7647
- Train Precision: [0. 0.8125]
- Train Precision W: 0.6691
- Train Recall: [0. 0.92857143]
- Train Recall W: 0.7647
- Train F1: [0. 0.86666667]
- Train F1 W: 0.7137
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 5, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Precision W | Train Recall | Train Recall W | Train F1 | Train F1 W | Epoch |
|:----------:|:---------------:|:--------------:|:-----------------------:|:-----------------:|:-----------------------:|:--------------:|:-----------------------:|:----------:|:-----:|
| 0.5926 | 0.4830 | 0.8235 | [0. 0.82352941] | 0.6782 | [0. 1.] | 0.8235 | [0. 0.90322581] | 0.7438 | 1 |
| 0.3224 | 0.5166 | 0.8235 | [0. 0.82352941] | 0.6782 | [0. 1.] | 0.8235 | [0. 0.90322581] | 0.7438 | 2 |
| 0.2419 | 0.6137 | 0.8235 | [0. 0.82352941] | 0.6782 | [0. 1.] | 0.8235 | [0. 0.90322581] | 0.7438 | 3 |
| 0.2583 | 0.5984 | 0.8235 | [0. 0.82352941] | 0.6782 | [0. 1.] | 0.8235 | [0. 0.90322581] | 0.7438 | 4 |
| 0.2308 | 0.5345 | 0.8235 | [0. 0.82352941] | 0.6782 | [0. 1.] | 0.8235 | [0. 0.90322581] | 0.7438 | 5 |
| 0.2178 | 0.4710 | 0.8235 | [0. 0.82352941] | 0.6782 | [0. 1.] | 0.8235 | [0. 0.90322581] | 0.7438 | 6 |
| 0.1861 | 0.4562 | 0.8235 | [0. 0.82352941] | 0.6782 | [0. 1.] | 0.8235 | [0. 0.90322581] | 0.7438 | 7 |
| 0.1456 | 0.4568 | 0.7647 | [0. 0.8125] | 0.6691 | [0. 0.92857143] | 0.7647 | [0. 0.86666667] | 0.7137 | 8 |
| 0.1258 | 0.4666 | 0.7647 | [0. 0.8125] | 0.6691 | [0. 0.92857143] | 0.7647 | [0. 0.86666667] | 0.7137 | 9 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
adyprat/Reinforce-pcopv0
|
adyprat
| 2023-09-05T00:22:32Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-04T21:23:02Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pcopv0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 31.60 +/- 21.11
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_insert_BERT-2
|
ThuyNT03
| 2023-09-05T00:17:38Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-05T00:09:29Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_insert_BERT-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_insert_BERT-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9737
- Accuracy: 0.72
- F1: 0.7141
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0807 | 1.0 | 88 | 0.9024 | 0.64 | 0.6254 |
| 0.8512 | 2.0 | 176 | 0.6824 | 0.75 | 0.7396 |
| 0.7009 | 3.0 | 264 | 0.6368 | 0.74 | 0.7363 |
| 0.5649 | 4.0 | 352 | 0.6994 | 0.76 | 0.7494 |
| 0.458 | 5.0 | 440 | 0.8683 | 0.74 | 0.7300 |
| 0.3409 | 6.0 | 528 | 1.0337 | 0.7 | 0.6787 |
| 0.2964 | 7.0 | 616 | 0.9357 | 0.75 | 0.7459 |
| 0.2305 | 8.0 | 704 | 0.9737 | 0.72 | 0.7141 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
VegaKH/VenusXL
|
VegaKH
| 2023-09-05T00:12:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-11T14:05:29Z |
---
license: creativeml-openrail-m
---
|
YiYiXu/yiyi_kandinsky_decoder
|
YiYiXu
| 2023-09-05T00:09:15Z | 4 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"kandinsky",
"text-to-image",
"dataset:lambdalabs/pokemon-blip-captions",
"base_model:kandinsky-community/kandinsky-2-2-decoder",
"base_model:finetune:kandinsky-community/kandinsky-2-2-decoder",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:KandinskyV22Pipeline",
"region:us"
] |
text-to-image
| 2023-09-04T23:38:45Z |
---
license: creativeml-openrail-m
base_model: kandinsky-community/kandinsky-2-2-decoder
datasets:
- lambdalabs/pokemon-blip-captions
prior:
- kandinsky-community/kandinsky-2-2-prior
tags:
- kandinsky
- text-to-image
- diffusers
inference: true
---
# Finetuning - YiYiXu/yiyi_kandinsky_decoder
This pipeline was finetuned from **kandinsky-community/kandinsky-2-2-decoder** on the **lambdalabs/pokemon-blip-captions** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['A robot pokemon, 4k photo']:

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("YiYiXu/yiyi_kandinsky_decoder", torch_dtype=torch.float16)
prompt = "A robot pokemon, 4k photo"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 2
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 1
* Image resolution: 768
* Mixed-precision: fp16
More information on all the CLI arguments and the environment is available on your [`wandb` run page](https://wandb.ai/yiyixu/text2image-fine-tune/runs/znfqqva8).
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_insert_w2v-2
|
ThuyNT03
| 2023-09-05T00:01:14Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T23:53:17Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_insert_w2v-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_insert_w2v-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2389
- Accuracy: 0.77
- F1: 0.7662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.054 | 1.0 | 86 | 0.8403 | 0.68 | 0.6696 |
| 0.7333 | 2.0 | 172 | 0.5967 | 0.76 | 0.7598 |
| 0.5218 | 3.0 | 258 | 0.6397 | 0.77 | 0.7688 |
| 0.3402 | 4.0 | 344 | 0.7154 | 0.79 | 0.7825 |
| 0.232 | 5.0 | 430 | 0.8591 | 0.76 | 0.7586 |
| 0.1443 | 6.0 | 516 | 1.0384 | 0.78 | 0.7774 |
| 0.1122 | 7.0 | 602 | 1.1989 | 0.76 | 0.7566 |
| 0.0917 | 8.0 | 688 | 1.2389 | 0.77 | 0.7662 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
johaanm/test-planner-alpha-V7.0
|
johaanm
| 2023-09-04T23:57:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-04T23:57:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
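For reference, these settings can be reconstructed with `transformers`' `BitsAndBytesConfig`. The sketch below is a hedged illustration only; the base model id is a placeholder, since this card does not state which model the adapter was trained on.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 settings copied from the list above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Placeholder id: replace with the actual base model used for this adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=bnb_config,
    device_map="auto",
)
```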
### Framework versions
- PEFT 0.4.0
|
MattBatchelor/ppo-LunarLander-v2
|
MattBatchelor
| 2023-09-04T23:56:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-04T23:55:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.31 +/- 20.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
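A minimal sketch of what that code could look like, assuming the checkpoint in this repository is stored under the usual `ppo-LunarLander-v2.zip` name (check the repo files if it differs):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; adjust to the actual .zip in this repository.
checkpoint = load_from_hub("MattBatchelor/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```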
|
AndrewMarcHarris/ppo-LunarLander-v2
|
AndrewMarcHarris
| 2023-09-04T23:55:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-04T23:55:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.76 +/- 12.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
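A minimal, hedged loading sketch (the zip filename is an assumption; check the repo files for the actual name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; adjust to the actual .zip in this repository.
checkpoint = load_from_hub("AndrewMarcHarris/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```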
|
StudentLLM/Alpagasus-2-13B-QLoRA
|
StudentLLM
| 2023-09-04T23:34:51Z | 3 | 0 |
peft
|
[
"peft",
"en",
"region:us"
] | null | 2023-08-09T13:08:03Z |
---
library_name: peft
language:
- en
---
# Model Details
Please check our [Github Repository](https://github.com/gauss5930/AlpaGasus2-QLoRA/tree/main)
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0080
|
bigmorning
| 2023-09-04T23:32:37Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T23:32:28Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0080
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0080
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0014
- Train Accuracy: 0.0340
- Train Wermet: 4.3375
- Validation Loss: 0.7900
- Validation Accuracy: 0.0211
- Validation Wermet: 4.0113
- Epoch: 79
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
| 0.2009 | 0.0327 | 0.0596 | 0.7723 | 0.0207 | 0.2712 | 40 |
| 0.1750 | 0.0329 | 0.0504 | 0.7629 | 0.0207 | 0.2692 | 41 |
| 0.1510 | 0.0331 | 0.0410 | 0.7650 | 0.0207 | 0.2684 | 42 |
| 0.1319 | 0.0333 | 0.0367 | 0.7533 | 0.0207 | 0.2655 | 43 |
| 0.1121 | 0.0335 | 0.0292 | 0.7589 | 0.0207 | 0.2647 | 44 |
| 0.0956 | 0.0336 | 0.0253 | 0.7579 | 0.0208 | 0.2642 | 45 |
| 0.0812 | 0.0337 | 0.0254 | 0.7584 | 0.0208 | 0.2625 | 46 |
| 0.0694 | 0.0338 | 0.0332 | 0.7555 | 0.0208 | 0.2693 | 47 |
| 0.0592 | 0.0339 | 0.0319 | 0.7534 | 0.0208 | 0.2629 | 48 |
| 0.0499 | 0.0339 | 0.0487 | 0.7587 | 0.0208 | 0.3030 | 49 |
| 0.0409 | 0.0339 | 0.0615 | 0.7577 | 0.0208 | 0.2810 | 50 |
| 0.0347 | 0.0340 | 0.0859 | 0.7603 | 0.0208 | 0.3534 | 51 |
| 0.0286 | 0.0340 | 0.1928 | 0.7554 | 0.0209 | 0.5822 | 52 |
| 0.0267 | 0.0340 | 0.3131 | 0.7664 | 0.0208 | 1.7372 | 53 |
| 0.0243 | 0.0340 | 1.3154 | 0.7525 | 0.0209 | 0.7770 | 54 |
| 0.0206 | 0.0340 | 0.8121 | 0.7532 | 0.0209 | 0.9253 | 55 |
| 0.0174 | 0.0340 | 0.9253 | 0.7574 | 0.0209 | 1.4865 | 56 |
| 0.0135 | 0.0340 | 1.1761 | 0.7592 | 0.0209 | 1.5813 | 57 |
| 0.0111 | 0.0340 | 1.7125 | 0.7631 | 0.0209 | 1.8950 | 58 |
| 0.0096 | 0.0340 | 1.9230 | 0.7664 | 0.0209 | 2.4432 | 59 |
| 0.0082 | 0.0340 | 2.5718 | 0.7693 | 0.0209 | 3.3565 | 60 |
| 0.0073 | 0.0340 | 3.5489 | 0.7747 | 0.0209 | 3.7191 | 61 |
| 0.0063 | 0.0340 | 3.7801 | 0.7756 | 0.0209 | 4.4728 | 62 |
| 0.0054 | 0.0340 | 4.0145 | 0.7795 | 0.0209 | 5.0058 | 63 |
| 0.0048 | 0.0340 | 4.9652 | 0.7821 | 0.0210 | 4.9937 | 64 |
| 0.0042 | 0.0340 | 5.5984 | 0.7914 | 0.0209 | 8.3869 | 65 |
| 0.0205 | 0.0339 | 9.9212 | 0.7811 | 0.0209 | 21.1156 | 66 |
| 0.0184 | 0.0339 | 8.3175 | 0.7619 | 0.0210 | 0.5360 | 67 |
| 0.0080 | 0.0340 | 0.6373 | 0.7554 | 0.0211 | 0.4090 | 68 |
| 0.0052 | 0.0340 | 0.5550 | 0.7528 | 0.0211 | 0.3938 | 69 |
| 0.0038 | 0.0340 | 0.4678 | 0.7551 | 0.0211 | 0.7911 | 70 |
| 0.0032 | 0.0340 | 1.1632 | 0.7617 | 0.0211 | 0.5495 | 71 |
| 0.0028 | 0.0340 | 0.7869 | 0.7643 | 0.0211 | 1.4089 | 72 |
| 0.0025 | 0.0340 | 1.5997 | 0.7681 | 0.0211 | 1.1413 | 73 |
| 0.0023 | 0.0340 | 1.7042 | 0.7719 | 0.0211 | 1.7576 | 74 |
| 0.0021 | 0.0340 | 2.3363 | 0.7750 | 0.0211 | 2.2434 | 75 |
| 0.0019 | 0.0340 | 2.9550 | 0.7777 | 0.0211 | 2.3071 | 76 |
| 0.0017 | 0.0340 | 3.1713 | 0.7831 | 0.0211 | 3.3338 | 77 |
| 0.0015 | 0.0340 | 3.9077 | 0.7852 | 0.0211 | 3.6442 | 78 |
| 0.0014 | 0.0340 | 4.3375 | 0.7900 | 0.0211 | 4.0113 | 79 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
actionpace/robin-13b-v2-delta
|
actionpace
| 2023-09-04T23:26:03Z | 24 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-03T16:42:06Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* robin-13b-v2-delta_Q5_1_4K.gguf
* robin-13b-v2-delta_Q5_1_8K.gguf
**Source:** [OptimalScale](https://huggingface.co/OptimalScale)
**Source Model:** [robin-13b-v2-delta](https://huggingface.co/OptimalScale/robin-13b-v2-delta)
**Source models for OptimalScale/robin-13b-v2-delta (Finetune)**
- [meta-llama/Llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b)
|
elami/vit-base-patch16-224-finetuned-flower
|
elami
| 2023-09-04T23:24:28Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-04T23:13:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
|
substratusai/Llama-2-13B-chat-GGUF
|
substratusai
| 2023-09-04T23:19:18Z | 0 | 5 | null |
[
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"en",
"arxiv:2307.09288",
"region:us"
] |
text-generation
| 2023-08-28T04:29:56Z |
---
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# Llama 2 13B Chat GGUF model
Original model: https://huggingface.co/meta-llama/Llama-2-13b-chat
This is the Q4_K_S quantized version.
Original model readme:
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
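As a rough, unofficial illustration of that format for a single turn (the system and user messages below are made up; BOS/EOS tokens are normally added by the tokenizer):

```python
system_prompt = "You are a helpful assistant."            # example system message
user_msg = "Explain grouped-query attention briefly."     # example user message

# Single-turn prompt assembled with the INST and <<SYS>> tags described above.
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_msg.strip()} [/INST]"
print(prompt)
```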
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software "bug," or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
|
matsuo-lab/weblab-10b
|
matsuo-lab
| 2023-09-04T23:17:28Z | 1,883 | 63 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-04T04:55:47Z |
---
license: cc-by-nc-4.0
---
# weblab-10b
# Overview
This repository provides a Japanese-centric multilingual GPT-NeoX model of 10 billion parameters.
* **Library**
The model was trained using code based on [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox).
* **Model architecture**
A 36-layer, 4864-hidden-size transformer-based language model.
* **Pre-training**
The model was trained on around **600B** tokens from a mixture of the following corpora.
- [Japanese C4](https://huggingface.co/datasets/mc4)
- [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
* **Model Series**
| Variant | Link |
| :-- | :--|
| weblab-10b-instruction-sft | https://huggingface.co/matsuo-lab/weblab-10b-instruction-sft |
| weblab-10b | https://huggingface.co/matsuo-lab/weblab-10b |
* **Authors**
Takeshi Kojima
---
# Benchmarking
* **Japanese benchmark : JGLUE 8-task (2023-08-27)**
- *We used [Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/2f1583c0735eacdfdfa5b7d656074b69577b6774) library for evaluation.*
- *The 8-task average accuracy is based on results of JCommonsenseQA-1.1, JNLI-1.1, MARC-ja-1.1, JSQuAD-1.1, jaqket_v2-0.2, xlsum_ja-1.0, xwinograd_ja, and mgsm-1.0.*
- *model loading is performed with float16, and evaluation is performed with template version 0.3 using the few-shot in-context learning.*
- *The number of few-shots is 3,3,3,2,1,1,0,5.*
- *special_tokens_map.json is modified to avoid errors during the evaluation of the second half benchmarks. As a result, the results of the first half benchmarks became slightly different.*
| model | average | jcommonsenseqa | jnli | marc_ja | jsquad | jaqket_v2 | xlsum_ja | xwinograd_ja | mgsm |
| :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- |
| weblab-10b-instruction-sft | 59.11 | 74.62 | 66.56 | 95.49 | 78.34 | 63.32 | 20.57 | 71.95 | 2 |
| weblab-10b | 50.74 | 66.58 | 53.74 | 82.07 | 62.94 | 56.19 | 10.03 | 71.95 | 2.4 |
* **Japanese benchmark : JGLUE 4-task (2023-08-18)**
- *We used [Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/2f1583c0735eacdfdfa5b7d656074b69577b6774) library for evaluation.*
- *The 4-task average accuracy is based on results of JCommonsenseQA-1.1, JNLI-1.1, MARC-ja-1.1, and JSQuAD-1.1.*
- *model loading is performed with float16, and evaluation is performed with template version 0.3 using the few-shot in-context learning.*
- *The number of few-shots is 3,3,3,2.*
| Model | Average | JCommonsenseQA | JNLI | MARC-ja | JSQuAD |
| :-- | :-- | :-- | :-- | :-- | :-- |
| weblab-10b-instruction-sft | 78.78 | 74.35 | 65.65 | 96.06 | 79.04 |
| weblab-10b | 66.38 | 65.86 | 54.19 | 84.49 | 60.98 |
---
# How to use the model
~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("matsuo-lab/weblab-10b")
model = AutoModelForCausalLM.from_pretrained("matsuo-lab/weblab-10b", torch_dtype=torch.float16)
if torch.cuda.is_available():
    model = model.to("cuda")
text = "吾輩は猫である。"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=100,
        do_sample=True,
        temperature=0.7,
        top_p=0.95
    )
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
~~~~
---
# License
[cc-by-nc-4.0](https://creativecommons.org/licenses/by-nc/4.0/)
|
CzarnyRycerz/Reinforce-cartpole-1
|
CzarnyRycerz
| 2023-09-04T23:13:19Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-04T22:31:26Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
zzzotop/cross-lingual-transfer-ner-demo-1
|
zzzotop
| 2023-09-04T23:10:03Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-24T19:45:16Z |
Cross-lingual transfer demo with Faroese named entity recognition. The model is trained on Icelandic, the closest living relative of Faroese. Based on 'Transfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese', Snæbjarnarson et al. (2023).
vesteinn/IceBERT finetuned on vesteinn/sosialurin-faroese-ner. Trained for 5 epochs, 22385 steps, lr 2e-5. <b>Training loop written in native pytorch.</b>
Evaluation over all batches, methodology inspired by seqeval:

Accuracy: 0.54808

| Metric | Micro | Macro | Weighted |
|:--|:--|:--|:--|
| Precision | 0.54808 | 0.50460 | 0.31040 |
| Recall | 0.54808 | 0.75972 | 0.54808 |
| F1 | 0.54808 | 0.56363 | 0.39525 |
|
FourthBrainGenAI/marketmail
|
FourthBrainGenAI
| 2023-09-04T23:07:40Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-04T23:07:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
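For reference, the 8-bit settings above correspond roughly to the following `BitsAndBytesConfig` sketch; the base model id is a placeholder, as this card does not name it.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit loading as listed above (llm_int8_threshold keeps its 6.0 default).
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

base_model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # placeholder: replace with the adapter's actual base model
    quantization_config=bnb_config,
    device_map="auto",
)
```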
### Framework versions
- PEFT 0.6.0.dev0
|
nbogdan/flant5-base-2ex-overall-1epochs
|
nbogdan
| 2023-09-04T22:53:58Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T22:53:49Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-2ex-overall-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-2ex-overall-1epochs", source="hf", set_active=True)
```
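With the adapter active, you can run inference as with any seq2seq model. The sketch below is a hedged illustration: it assumes the adapter checkpoint ships a seq2seq LM head, and the prompt is only an example, not the exact self-explanations format.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
inputs = tokenizer("Explain your answer: is the sky blue?", return_tensors="pt")  # illustrative prompt
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```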
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0065
|
bigmorning
| 2023-09-04T22:52:54Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T22:52:47Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0065
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0065
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0048
- Train Accuracy: 0.0340
- Train Wermet: 4.9652
- Validation Loss: 0.7821
- Validation Accuracy: 0.0210
- Validation Wermet: 4.9937
- Epoch: 64
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
| 0.2009 | 0.0327 | 0.0596 | 0.7723 | 0.0207 | 0.2712 | 40 |
| 0.1750 | 0.0329 | 0.0504 | 0.7629 | 0.0207 | 0.2692 | 41 |
| 0.1510 | 0.0331 | 0.0410 | 0.7650 | 0.0207 | 0.2684 | 42 |
| 0.1319 | 0.0333 | 0.0367 | 0.7533 | 0.0207 | 0.2655 | 43 |
| 0.1121 | 0.0335 | 0.0292 | 0.7589 | 0.0207 | 0.2647 | 44 |
| 0.0956 | 0.0336 | 0.0253 | 0.7579 | 0.0208 | 0.2642 | 45 |
| 0.0812 | 0.0337 | 0.0254 | 0.7584 | 0.0208 | 0.2625 | 46 |
| 0.0694 | 0.0338 | 0.0332 | 0.7555 | 0.0208 | 0.2693 | 47 |
| 0.0592 | 0.0339 | 0.0319 | 0.7534 | 0.0208 | 0.2629 | 48 |
| 0.0499 | 0.0339 | 0.0487 | 0.7587 | 0.0208 | 0.3030 | 49 |
| 0.0409 | 0.0339 | 0.0615 | 0.7577 | 0.0208 | 0.2810 | 50 |
| 0.0347 | 0.0340 | 0.0859 | 0.7603 | 0.0208 | 0.3534 | 51 |
| 0.0286 | 0.0340 | 0.1928 | 0.7554 | 0.0209 | 0.5822 | 52 |
| 0.0267 | 0.0340 | 0.3131 | 0.7664 | 0.0208 | 1.7372 | 53 |
| 0.0243 | 0.0340 | 1.3154 | 0.7525 | 0.0209 | 0.7770 | 54 |
| 0.0206 | 0.0340 | 0.8121 | 0.7532 | 0.0209 | 0.9253 | 55 |
| 0.0174 | 0.0340 | 0.9253 | 0.7574 | 0.0209 | 1.4865 | 56 |
| 0.0135 | 0.0340 | 1.1761 | 0.7592 | 0.0209 | 1.5813 | 57 |
| 0.0111 | 0.0340 | 1.7125 | 0.7631 | 0.0209 | 1.8950 | 58 |
| 0.0096 | 0.0340 | 1.9230 | 0.7664 | 0.0209 | 2.4432 | 59 |
| 0.0082 | 0.0340 | 2.5718 | 0.7693 | 0.0209 | 3.3565 | 60 |
| 0.0073 | 0.0340 | 3.5489 | 0.7747 | 0.0209 | 3.7191 | 61 |
| 0.0063 | 0.0340 | 3.7801 | 0.7756 | 0.0209 | 4.4728 | 62 |
| 0.0054 | 0.0340 | 4.0145 | 0.7795 | 0.0209 | 5.0058 | 63 |
| 0.0048 | 0.0340 | 4.9652 | 0.7821 | 0.0210 | 4.9937 | 64 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
Kapiche/twitter-roberta-base-sentiment-latest
|
Kapiche
| 2023-09-04T22:49:50Z | 286 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"arxiv:2202.03829",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-07T13:23:27Z |
---
language: en
widget:
- text: Covid cases are increasing fast!
datasets:
- tweet_eval
---
# Twitter-roBERTa-base for Sentiment Analysis - UPDATED (2022)
This is a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and finetuned for sentiment analysis with the TweetEval benchmark.
The original Twitter-based RoBERTa model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).
<b>Labels</b>:
0 -> Negative;
1 -> Neutral;
2 -> Positive
This sentiment analysis model has been integrated into [TweetNLP](https://github.com/cardiffnlp/tweetnlp). You can access the demo [here](https://tweetnlp.org).
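A hedged sketch of the TweetNLP route (the API names are taken from the TweetNLP project, not from this card, and may differ between versions):

```python
import tweetnlp

model = tweetnlp.load_model("sentiment")  # assumed entry point
print(model.sentiment("Covid cases are increasing fast!"))
```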
## Example Pipeline
```python
from transformers import pipeline
model_path = "cardiffnlp/twitter-roberta-base-sentiment-latest"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("Covid cases are increasing fast!")
```
```
[{'label': 'Negative', 'score': 0.7236}]
```
## Full classification example
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
from scipy.special import softmax
# Preprocess text (username and link placeholders)
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)
MODEL = f"cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
config = AutoConfig.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
#model.save_pretrained(MODEL)
text = "Covid cases are increasing fast!"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Covid cases are increasing fast!"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
# Print labels and scores
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
    l = config.id2label[ranking[i]]
    s = scores[ranking[i]]
    print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) Negative 0.7236
2) Neutral 0.2287
3) Positive 0.0477
```
### References
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title = "{T}weet{NLP}: Cutting-Edge Natural Language Processing for Social Media",
author = "Camacho-collados, Jose and
Rezaee, Kiamehr and
Riahi, Talayeh and
Ushio, Asahi and
Loureiro, Daniel and
Antypas, Dimosthenis and
Boisson, Joanne and
Espinosa Anke, Luis and
Liu, Fangyu and
Mart{\'\i}nez C{\'a}mara, Eugenio and others",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2022",
address = "Abu Dhabi, UAE",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-demos.5",
pages = "38--49"
}
```
```
@inproceedings{loureiro-etal-2022-timelms,
title = "{T}ime{LM}s: Diachronic Language Models from {T}witter",
author = "Loureiro, Daniel and
Barbieri, Francesco and
Neves, Leonardo and
Espinosa Anke, Luis and
Camacho-collados, Jose",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-demo.25",
doi = "10.18653/v1/2022.acl-demo.25",
pages = "251--260"
}
```
|
daochf/Lora-MetaLlamaChat7b-PuceDS-v03
|
daochf
| 2023-09-04T22:49:13Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T23:39:58Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
gustavodemoura/q-FrozenLake-v1-4x4-noSlippery
|
gustavodemoura
| 2023-09-04T22:48:46Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-04T22:48:43Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="gustavodemoura/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
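The `load_from_hub` helper above comes from the Hugging Face Deep RL course notebooks rather than a library; a minimal sketch of such a helper is shown below (the downloaded object is a pickled dict — only the `"env_id"` key is confirmed by the snippet above):
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    # download the pickled dictionary (Q-table, env id, hyperparameters) from the Hub
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```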
|
rshei/layoutlmv3-finetuned-cord_100
|
rshei
| 2023-09-04T22:47:01Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-29T05:48:50Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: test
args: cord
metrics:
- name: Precision
type: precision
value: 0.9243884358784284
- name: Recall
type: recall
value: 0.9333832335329342
- name: F1
type: f1
value: 0.9288640595903166
- name: Accuracy
type: accuracy
value: 0.9363327674023769
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3467
- Precision: 0.9244
- Recall: 0.9334
- F1: 0.9289
- Accuracy: 0.9363
## Model description
More information needed
## Intended uses & limitations
More information needed
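A minimal inference sketch (not part of the original card): it assumes `pytesseract` is installed so the processor can run its default built-in OCR, and that `receipt.png` is a local document scan.
```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

image = Image.open("receipt.png").convert("RGB")  # hypothetical receipt image

# the base processor applies OCR by default (requires pytesseract)
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")
model = LayoutLMv3ForTokenClassification.from_pretrained("rshei/layoutlmv3-finetuned-cord_100")

encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
predicted_ids = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in predicted_ids])
```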
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 4.17 | 250 | 0.5174 | 0.8469 | 0.8735 | 0.8600 | 0.8790 |
| 0.5511 | 8.33 | 500 | 0.3975 | 0.8999 | 0.9147 | 0.9072 | 0.9194 |
| 0.5511 | 12.5 | 750 | 0.3872 | 0.9015 | 0.9184 | 0.9099 | 0.9189 |
| 0.1802 | 16.67 | 1000 | 0.3416 | 0.9180 | 0.9296 | 0.9238 | 0.9338 |
| 0.1802 | 20.83 | 1250 | 0.3311 | 0.9159 | 0.9289 | 0.9223 | 0.9359 |
| 0.0836 | 25.0 | 1500 | 0.3457 | 0.9192 | 0.9281 | 0.9236 | 0.9334 |
| 0.0836 | 29.17 | 1750 | 0.3347 | 0.9202 | 0.9319 | 0.9260 | 0.9291 |
| 0.0473 | 33.33 | 2000 | 0.3677 | 0.9194 | 0.9304 | 0.9249 | 0.9253 |
| 0.0473 | 37.5 | 2250 | 0.3433 | 0.9279 | 0.9341 | 0.9310 | 0.9376 |
| 0.0342 | 41.67 | 2500 | 0.3467 | 0.9244 | 0.9334 | 0.9289 | 0.9363 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jrazi/persian-poem-classifier
|
jrazi
| 2023-09-04T22:42:30Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"fa",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T17:11:22Z |
---
license: mit
language:
- fa
metrics:
- f1
- precision
- accuracy
library_name: transformers
pipeline_tag: text-classification
---
---
# Persian Poem Classifier Based on ParsBERT
## Model Description
This model, named "Persian Poem Classifier," is based on the ParsBERT architecture and has been fine-tuned to classify Persian poems. Specifically, the model can evaluate whether a given piece of text is poetic, whether it adheres to a valid poetic structure, and whether it captures the style of a specific poet.
### Features
- **Multi-task Classification**: Determines if the text is poetic, if it's a valid poem, and if it conforms to a certain poet's style.
- **Language Support**: Specialized for Persian language text.
- **High Accuracy**: Fine-tuned using a diverse dataset of Persian poems.
## Intended Use
This model is intended to be used by researchers, poets, and NLP enthusiasts who are interested in the automated analysis of Persian poetry. It can be utilized in applications ranging from educational platforms to advanced poetry-generating algorithms.
## Limitations
- The model has been trained on a specific set of poets and may not generalize well to other styles.
- It assumes that the input text is in Persian and adheres to the specific poetic structures it has been trained on.
## Installation & Usage
You can easily install the model using the Hugging Face `transformers` library as follows:
```bash
pip install transformers
```
To classify a poem, you can use the following code snippet:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("jrazi/persian-poem-classifier")
model = AutoModelForSequenceClassification.from_pretrained("jrazi/persian-poem-classifier")
text = "Your Persian poem here"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
```
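To turn the raw outputs into per-task scores, something like the following can be used — a sketch that assumes the head packs one logit per task (is-poetic, valid-poem, poet-style); adjust to the checkpoint's actual head layout:
```python
import torch

# assumed ordering of the three binary tasks
tasks = ["is_poetic", "is_valid_poem", "has_poet_style"]
probs = torch.sigmoid(outputs.logits).squeeze().tolist()
for task, p in zip(tasks, probs):
    print(f"{task}: {p:.3f}")
```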
## Data Source
The model is fine-tuned on a curated dataset of Persian poems featuring various poets. The dataset contains multi-label annotations to evaluate the poetic nature, structure, and style conformity of the text. Negative examples were created from publicly available Persian text corpora. In addition, we applied data augmentation to further diversify the training data and improve generalization.
## Evaluation Metrics
The model has been evaluated using standard classification metrics — F1-score, precision, and accuracy — for each of the multi-task objectives, as shown below.
| Metric | Is Poetic | Is Valid Poem | Has Poet Style |
| ------ | --------- | ------------- | -------------- |
| F1 | 0.66 | 0.66 | 0.59 |
| Prec | 0.81 | 0.77 | 0.71 |
| Acc | 0.85 | 0.84 | 0.64 |
---
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0060
|
bigmorning
| 2023-09-04T22:39:37Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T22:39:28Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0060
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0060
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0096
- Train Accuracy: 0.0340
- Train Wermet: 1.9230
- Validation Loss: 0.7664
- Validation Accuracy: 0.0209
- Validation Wermet: 2.4432
- Epoch: 59
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
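For reference, this optimizer configuration maps onto the TF utilities in `transformers` roughly as follows (a sketch, not taken from the original training script):
```python
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
# the TF Whisper model would then be compiled with model.compile(optimizer=optimizer)
```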
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
| 0.2009 | 0.0327 | 0.0596 | 0.7723 | 0.0207 | 0.2712 | 40 |
| 0.1750 | 0.0329 | 0.0504 | 0.7629 | 0.0207 | 0.2692 | 41 |
| 0.1510 | 0.0331 | 0.0410 | 0.7650 | 0.0207 | 0.2684 | 42 |
| 0.1319 | 0.0333 | 0.0367 | 0.7533 | 0.0207 | 0.2655 | 43 |
| 0.1121 | 0.0335 | 0.0292 | 0.7589 | 0.0207 | 0.2647 | 44 |
| 0.0956 | 0.0336 | 0.0253 | 0.7579 | 0.0208 | 0.2642 | 45 |
| 0.0812 | 0.0337 | 0.0254 | 0.7584 | 0.0208 | 0.2625 | 46 |
| 0.0694 | 0.0338 | 0.0332 | 0.7555 | 0.0208 | 0.2693 | 47 |
| 0.0592 | 0.0339 | 0.0319 | 0.7534 | 0.0208 | 0.2629 | 48 |
| 0.0499 | 0.0339 | 0.0487 | 0.7587 | 0.0208 | 0.3030 | 49 |
| 0.0409 | 0.0339 | 0.0615 | 0.7577 | 0.0208 | 0.2810 | 50 |
| 0.0347 | 0.0340 | 0.0859 | 0.7603 | 0.0208 | 0.3534 | 51 |
| 0.0286 | 0.0340 | 0.1928 | 0.7554 | 0.0209 | 0.5822 | 52 |
| 0.0267 | 0.0340 | 0.3131 | 0.7664 | 0.0208 | 1.7372 | 53 |
| 0.0243 | 0.0340 | 1.3154 | 0.7525 | 0.0209 | 0.7770 | 54 |
| 0.0206 | 0.0340 | 0.8121 | 0.7532 | 0.0209 | 0.9253 | 55 |
| 0.0174 | 0.0340 | 0.9253 | 0.7574 | 0.0209 | 1.4865 | 56 |
| 0.0135 | 0.0340 | 1.1761 | 0.7592 | 0.0209 | 1.5813 | 57 |
| 0.0111 | 0.0340 | 1.7125 | 0.7631 | 0.0209 | 1.8950 | 58 |
| 0.0096 | 0.0340 | 1.9230 | 0.7664 | 0.0209 | 2.4432 | 59 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
kikinamatata/model_2
|
kikinamatata
| 2023-09-04T22:37:23Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-04T19:56:25Z |
---
license: creativeml-openrail-m
base_model: models/model_1
dataset: None
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - kikinamatata/model_2
This pipeline was finetuned from **models/model_1** on the **None** dataset. No validation prompt was configured (prompt: None), so no example images are shown here.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0050
|
bigmorning
| 2023-09-04T22:13:05Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T22:12:56Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0050
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0050
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0499
- Train Accuracy: 0.0339
- Train Wermet: 0.0487
- Validation Loss: 0.7587
- Validation Accuracy: 0.0208
- Validation Wermet: 0.3030
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
| 0.2009 | 0.0327 | 0.0596 | 0.7723 | 0.0207 | 0.2712 | 40 |
| 0.1750 | 0.0329 | 0.0504 | 0.7629 | 0.0207 | 0.2692 | 41 |
| 0.1510 | 0.0331 | 0.0410 | 0.7650 | 0.0207 | 0.2684 | 42 |
| 0.1319 | 0.0333 | 0.0367 | 0.7533 | 0.0207 | 0.2655 | 43 |
| 0.1121 | 0.0335 | 0.0292 | 0.7589 | 0.0207 | 0.2647 | 44 |
| 0.0956 | 0.0336 | 0.0253 | 0.7579 | 0.0208 | 0.2642 | 45 |
| 0.0812 | 0.0337 | 0.0254 | 0.7584 | 0.0208 | 0.2625 | 46 |
| 0.0694 | 0.0338 | 0.0332 | 0.7555 | 0.0208 | 0.2693 | 47 |
| 0.0592 | 0.0339 | 0.0319 | 0.7534 | 0.0208 | 0.2629 | 48 |
| 0.0499 | 0.0339 | 0.0487 | 0.7587 | 0.0208 | 0.3030 | 49 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
acdg1214/a2c-PandaReachDense-v3
|
acdg1214
| 2023-09-04T22:01:09Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-04T21:55:50Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.15 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual SB3 Hub convention and is an assumption — adjust it to the actual file in the repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="acdg1214/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0045
|
bigmorning
| 2023-09-04T21:59:45Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T21:59:38Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0045
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0045
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1121
- Train Accuracy: 0.0335
- Train Wermet: 0.0292
- Validation Loss: 0.7589
- Validation Accuracy: 0.0207
- Validation Wermet: 0.2647
- Epoch: 44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
| 0.2009 | 0.0327 | 0.0596 | 0.7723 | 0.0207 | 0.2712 | 40 |
| 0.1750 | 0.0329 | 0.0504 | 0.7629 | 0.0207 | 0.2692 | 41 |
| 0.1510 | 0.0331 | 0.0410 | 0.7650 | 0.0207 | 0.2684 | 42 |
| 0.1319 | 0.0333 | 0.0367 | 0.7533 | 0.0207 | 0.2655 | 43 |
| 0.1121 | 0.0335 | 0.0292 | 0.7589 | 0.0207 | 0.2647 | 44 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
rohanbalkondekar/rohan_dreambooth
|
rohanbalkondekar
| 2023-09-04T21:52:29Z | 2 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-04T21:52:28Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of rohan balkondekar
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
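A minimal inference sketch, assuming this repo holds LoRA weights for the SDXL base model produced by AutoTrain DreamBooth (adjust if full pipeline weights were uploaded instead):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rohanbalkondekar/rohan_dreambooth")  # assumed LoRA weights

image = pipe("photo of rohan balkondekar", num_inference_steps=30).images[0]
image.save("rohan.png")
```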
|
nbogdan/flant5-base-1ex-bridging-1epochs
|
nbogdan
| 2023-09-04T21:50:03Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T21:49:53Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-1ex-bridging-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-1ex-bridging-1epochs", source="hf", set_active=True)
```
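From here, generation works as with any seq2seq model, assuming the loaded adapter checkpoint ships with a seq2seq LM head (the prompt below is purely illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
# hypothetical prompt; the adapter targets bridging self-explanations
inputs = tokenizer("Explain the link between the premise and the hypothesis.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```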
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0040
|
bigmorning
| 2023-09-04T21:46:29Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T21:46:21Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0040
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0040
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2291
- Train Accuracy: 0.0324
- Train Wermet: 0.0706
- Validation Loss: 0.7876
- Validation Accuracy: 0.0206
- Validation Wermet: 0.2755
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
nbogdan/flant5-base-1ex-elaboration-1epochs
|
nbogdan
| 2023-09-04T21:34:54Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T21:34:46Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-1ex-elaboration-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-1ex-elaboration-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
Sbsbvdvd/Shshs
|
Sbsbvdvd
| 2023-09-04T21:34:35Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-04T21:33:59Z |
---
license: creativeml-openrail-m
---
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0035
|
bigmorning
| 2023-09-04T21:33:13Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T21:33:03Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0035
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0035
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4228
- Train Accuracy: 0.0308
- Train Wermet: 0.1466
- Validation Loss: 0.8735
- Validation Accuracy: 0.0202
- Validation Wermet: 0.3026
- Epoch: 34
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
JanSt/gbert-base-finetuned-twitter_
|
JanSt
| 2023-09-04T21:31:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:deepset/gbert-base",
"base_model:finetune:deepset/gbert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-04T08:13:18Z |
---
license: mit
base_model: deepset/gbert-base
tags:
- generated_from_trainer
model-index:
- name: gbert-base-finetuned-twitter_
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gbert-base-finetuned-twitter_
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6651
## Model description
More information needed
## Intended uses & limitations
More information needed
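A minimal usage sketch for the fine-tuned masked-language model (the example sentence is illustrative, not from the original card):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="JanSt/gbert-base-finetuned-twitter_")
print(fill_mask("Heute ist ein [MASK] Tag."))
```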
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 192
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1933 | 1.0 | 4180 | 1.9612 |
| 2.0051 | 2.0 | 8360 | 1.8795 |
| 1.939 | 3.0 | 12540 | 1.8310 |
| 1.8928 | 4.0 | 16720 | 1.8013 |
| 1.8594 | 5.0 | 20900 | 1.7730 |
| 1.8336 | 6.0 | 25080 | 1.7702 |
| 1.8145 | 7.0 | 29260 | 1.7449 |
| 1.7963 | 8.0 | 33440 | 1.7277 |
| 1.7806 | 9.0 | 37620 | 1.7105 |
| 1.7682 | 10.0 | 41800 | 1.7061 |
| 1.7584 | 11.0 | 45980 | 1.7041 |
| 1.7454 | 12.0 | 50160 | 1.6899 |
| 1.7374 | 13.0 | 54340 | 1.6850 |
| 1.7295 | 14.0 | 58520 | 1.6856 |
| 1.7232 | 15.0 | 62700 | 1.6819 |
| 1.715 | 16.0 | 66880 | 1.6730 |
| 1.7101 | 17.0 | 71060 | 1.6723 |
| 1.7057 | 18.0 | 75240 | 1.6655 |
| 1.7038 | 19.0 | 79420 | 1.6617 |
| 1.702 | 20.0 | 83600 | 1.6625 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
facebook/regnet-x-160
|
facebook
| 2023-09-04T21:27:33Z | 402 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"regnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-18T15:27:57Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-x-160")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-x-160")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
|
facebook/regnet-y-320
|
facebook
| 2023-09-04T21:23:50Z | 232 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"regnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-18T15:43:36Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-320")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-320")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
|
facebook/regnet-x-320
|
facebook
| 2023-09-04T21:23:40Z | 227 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"regnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-18T15:29:28Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-x-320")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-x-320")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
|
facebook/convnext-large-384
|
facebook
| 2023-09-04T21:22:21Z | 243 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"convnext",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (large-sized model)
ConvNeXT model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-large-384")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-384")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
volvoDon/petro-daemon
|
volvoDon
| 2023-09-04T21:21:25Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-04T20:11:04Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: volvoDon/petro-daemon
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# volvoDon/petro-daemon
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on a [DataSet of petrologic cross sections](https://huggingface.co/datasets/volvoDon/petrology-sections).
It achieves the following results on the evaluation set:
- Train Loss: 0.8890
- Validation Loss: 1.1803
- Train Accuracy: 0.6
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
Currently it is just a proof of concept, but it does a great job of identifying olivine.
It is not yet ready for a production environment, but the results are promising; with an improved dataset I'm confident better results could be achieved.
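A minimal inference sketch (not part of the original card): the repo ships TF weights, so the pipeline is pinned to the TensorFlow framework, and the image path is illustrative.
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="volvoDon/petro-daemon",
    framework="tf",  # the repo ships TensorFlow weights
)
print(classifier("thin_section.jpg"))  # hypothetical local cross-section image
```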
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 300, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.6519 | 1.7095 | 0.2 | 0 |
| 1.5905 | 1.6747 | 0.2 | 1 |
| 1.5690 | 1.6342 | 0.2 | 2 |
| 1.5170 | 1.5931 | 0.2 | 3 |
| 1.4764 | 1.5528 | 0.6 | 4 |
| 1.3835 | 1.5079 | 0.6 | 5 |
| 1.3420 | 1.4717 | 0.6 | 6 |
| 1.3171 | 1.4232 | 0.6 | 7 |
| 1.2897 | 1.3905 | 0.6 | 8 |
| 1.2702 | 1.3794 | 0.6 | 9 |
| 1.2023 | 1.3351 | 0.6 | 10 |
| 1.1480 | 1.3384 | 0.6 | 11 |
| 1.1434 | 1.3419 | 0.6 | 12 |
| 1.0499 | 1.3226 | 0.6 | 13 |
| 1.0672 | 1.2647 | 0.6 | 14 |
| 1.0526 | 1.1533 | 0.6 | 15 |
| 1.0184 | 1.1546 | 0.6 | 16 |
| 0.9505 | 1.2491 | 0.6 | 17 |
| 0.9578 | 1.2809 | 0.4 | 18 |
| 0.8890 | 1.1803 | 0.6 | 19 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
actionpace/LLAMA2-13B-Holodeck-1
|
actionpace
| 2023-09-04T21:21:09Z | 7 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-04T20:46:13Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* LLAMA2-13B-Holodeck-1_Q5_1_4K.gguf
* LLAMA2-13B-Holodeck-1_Q5_1_8K.gguf
**Source:** [KoboldAI](https://huggingface.co/KoboldAI)
**Source Model:** [LLAMA2-13B-Holodeck-1](https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1)
**Models utilizing KoboldAI/LLAMA2-13B-Holodeck-1**
- [The-Face-Of-Goonery/Huginn-v3-13b](https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b) ([Ref](https://huggingface.co/actionpace/Huginn-v3-13b)) (Finetune, kaiokendev/SuperCOT-dataset)
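One way to run these GGUF quants is through `llama-cpp-python` — a minimal sketch (the file path and sampling settings are illustrative):
```python
from llama_cpp import Llama

llm = Llama(model_path="LLAMA2-13B-Holodeck-1_Q5_1_4K.gguf", n_ctx=4096)
out = llm("Once upon a time", max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```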
|
facebook/convnext-base-224-22k-1k
|
facebook
| 2023-09-04T21:09:35Z | 653 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"convnext",
"image-classification",
"vision",
"dataset:imagenet-21k",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (base-sized model)
ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-base-224-22k-1k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-224-22k-1k")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
nbogdan/flant5-base-1ex-overall-1epochs
|
nbogdan
| 2023-09-04T21:04:30Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T21:04:21Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-1ex-overall-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-1ex-overall-1epochs", source="hf", set_active=True)
```
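Once the adapter is active, inference is a standard seq2seq generation call. The sketch below is illustrative only: the prompt string is an assumption (the exact self-explanation prompt format used in training is not documented in this card), and it assumes the adapter was saved together with a seq2seq language-modeling head (if not, one can be added with `model.add_seq2seq_lm_head(...)` before generating).
```python
from transformers import AutoAdapterModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
model.load_adapter("nbogdan/flant5-base-1ex-overall-1epochs", source="hf", set_active=True)

# Hypothetical prompt; adjust to the prompt format the adapter was trained with.
prompt = "Explain your overall reasoning: The movie was surprisingly good."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```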
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
darthlordvictor/generative-bloom-marketing-002
|
darthlordvictor
| 2023-09-04T20:56:22Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-29T02:38:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
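For inference, the same quantization settings can be recreated and the PEFT adapter attached to its base model. The sketch below is an assumption-heavy illustration: the base checkpoint name is a guess inferred from the repo name (the card does not state it), and only the 8-bit settings from the config above are reproduced.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "bigscience/bloom-7b1"  # assumption: the actual base model is not stated in this card
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "darthlordvictor/generative-bloom-marketing-002")

inputs = tokenizer("Write a short marketing tagline for a coffee brand:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```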
### Framework versions
- PEFT 0.6.0.dev0
|
rombodawg/LosslessMegaCoder-Falcon-40b-mini
|
rombodawg
| 2023-09-04T20:51:15Z | 1,426 | 2 |
transformers
|
[
"transformers",
"safetensors",
"falcon",
"text-generation",
"dataset:rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-17T02:18:51Z |
---
license: apache-2.0
datasets:
- rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored
---
___________________________
- Please note this model was not trained on the rombodawg/LosslessMegaCodeTrainingV3_MINI dataset, despite the name similarity. You can find the training data at the bottom of the model card labeled (megacode2-min100)
___________________________
This is one of the first models trained on the LosslessMegaCodeTrainingV2_1m_Evol_Uncensored dataset. The version of the dataset used for this model was filtered by removing any data with fewer than 100 tokens, but plans for much more refined filtering are in the works.
- This model was made as a collaboration between me and andreaskoepf, who is an affiliate of Open Assistant.
Prompt template:
- chatml format is used: "<|im_start|>system\n{system message}<|im_end|>\n<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"
multi-line:
```
<|im_start|>system
{system message}<|im_end|>
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
```
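As a quick illustration of the template above, the snippet below builds a ChatML-formatted prompt and generates with `transformers`; it is a sketch only, and the system/user strings are placeholders.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rombodawg/LosslessMegaCoder-Falcon-40b-mini"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Build the ChatML prompt exactly as shown in the template above.
prompt = (
    "<|im_start|>system\n"
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a Python function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
# Strip the prompt tokens so only the assistant's answer is printed.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```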
Gpt4all template:
- System prompt
```
<|im_start|>system
"Below is an instruction that describes a task. Write a response that appropriately completes the request."
```
- Prompt template
```
<|im_end|>
<|im_start|>user
"%1"<|im_end|>
<|im_start|>assistant
```
Oobabooga Text-Generation-Webui Template
- user:
```
<|im_start|>user
{User string}<|im_end|>
```
- bot:
```
<|im_start|>assistant
{Bot string}<|im_end|>
```
- turn_template:
```
<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n
```
- context:
```
<|im_start|>system
Below is an instruction that describes a task. Write a response that appropriately completes the request.<|im_end|>
```
Current quantizations available:
- (COMING SOON)
The link for the full dataset is below:
- https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored
The link for the filtered dataset used to make this model is below:
- https://huggingface.co/datasets/andreaskoepf/megacode2-min100
The original posting for this model was uploaded at the link below.
- https://huggingface.co/andreaskoepf/falcon-40b-megacode2
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0015
|
bigmorning
| 2023-09-04T20:40:15Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T20:40:08Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.3069
- Train Accuracy: 0.0148
- Train Wermet: 0.6961
- Validation Loss: 3.1102
- Validation Accuracy: 0.0124
- Validation Wermet: 0.7609
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
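For reference, the optimizer configuration above can be reconstructed with the `AdamWeightDecay` class shipped in transformers' TensorFlow utilities; this is a sketch of the configuration only (assuming a TF-enabled transformers install), not the original training script.
```python
from transformers import AdamWeightDecay

# Mirrors the hyperparameters listed above.
optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
```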
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
nbogdan/flant5-base-0ex-elaboration-1epochs
|
nbogdan
| 2023-09-04T20:35:49Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T20:35:39Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-0ex-elaboration-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-0ex-elaboration-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
reginaboateng/BERT_pubmedqa_adapter_with_maybes_to_yes_updated
|
reginaboateng
| 2023-09-04T20:31:22Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:pubmedqa",
"dataset:pubmedqa",
"region:us"
] | null | 2023-09-04T20:31:19Z |
---
tags:
- bert
- adapter-transformers
- adapterhub:pubmedqa
datasets:
- pubmedqa
---
# Adapter `reginaboateng/BERT_pubmedqa_adapter_with_maybes_to_yes_updated` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pubmedqa](https://adapterhub.ml/explore/pubmedqa/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("reginaboateng/BERT_pubmedqa_adapter_with_maybes_to_yes_updated", source="hf", set_active=True)
```
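After loading, prediction is a standard forward pass. The snippet below is a sketch only: the question/context strings are hypothetical, and it assumes the adapter was saved together with a classification head over the PubMedQA labels (with "maybe" relabeled to "yes", as the adapter name suggests).
```python
import torch
from transformers import AutoAdapterModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.load_adapter("reginaboateng/BERT_pubmedqa_adapter_with_maybes_to_yes_updated", source="hf", set_active=True)

question = "Does metformin reduce cardiovascular risk?"  # hypothetical example
context = "In this cohort, metformin use was associated with fewer cardiovascular events."
inputs = tokenizer(question, context, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(-1).item())
```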
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
reginaboateng/BERT_pubmedqa_adapter_with_maybes_to_nos_updated
|
reginaboateng
| 2023-09-04T20:30:47Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:pubmedqa",
"bert",
"dataset:pubmedqa",
"region:us"
] | null | 2023-09-04T20:30:42Z |
---
tags:
- adapter-transformers
- adapterhub:pubmedqa
- bert
datasets:
- pubmedqa
---
# Adapter `reginaboateng/BERT_pubmedqa_adapter_with_maybes_to_nos_updated` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pubmedqa](https://adapterhub.ml/explore/pubmedqa/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("reginaboateng/BERT_pubmedqa_adapter_with_maybes_to_nos_updated", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0010
|
bigmorning
| 2023-09-04T20:27:01Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T20:26:52Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.6757
- Train Accuracy: 0.0136
- Train Wermet: 0.7548
- Validation Loss: 3.3141
- Validation Accuracy: 0.0116
- Validation Wermet: 0.8400
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
96abhishekarora/lt-kn-en_familyname-linkage
|
96abhishekarora
| 2023-09-04T20:22:33Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"linktransformer",
"sentence-similarity",
"tabular-classification",
"kn",
"en",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-09-01T02:58:31Z |
---
pipeline_tag: sentence-similarity
language:
- kn
- en
tags:
- linktransformer
- sentence-transformers
- sentence-similarity
- tabular-classification
---
# 96abhishekarora/lt-kn-en_familyname-linkage
This is a [LinkTransformer](https://github.com/dell-research-harvard/linktransformer) model. At its core, it is a [sentence-transformers](https://www.SBERT.net) model; LinkTransformer simply wraps around that class.
It is designed for quick and easy record linkage (entity-matching) through the LinkTransformer package. The tasks include clustering, deduplication, linking, aggregation and more.
Notwithstanding that, it can be used for any sentence similarity task within the sentence-transformers framework as well.
It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
Take a look at the documentation of [sentence-transformers](https://www.sbert.net/index.html) if you want to use this model for more than what we support in our applications.
This model has been fine-tuned from bert-base-multilingual-cased and supports the languages kn (Kannada) and en (English).
It was trained on a dataset consisting of 12,105,132 people and their family IDs. 50% of the names are also transliterated.
It was trained for 6 epochs using other defaults that can be found in the repo's LinkTransformer config file - LT_training_config.json
## Usage (LinkTransformer)
Using this model becomes easy when you have [LinkTransformer](https://github.com/dell-research-harvard/linktransformer) installed:
```
pip install -U linktransformer
```
Then you can use the model like this:
```python
import linktransformer as lt
import pandas as pd
##Load the two dataframes that you want to link. For example, 2 dataframes with company names that are written differently
df1=pd.read_csv("data/df1.csv") ###This is the left dataframe with key CompanyName for instance
df2=pd.read_csv("data/df2.csv") ###This is the right dataframe with key CompanyName for instance
###Merge the two dataframes on the key column!
df_merged = lt.merge(df1, df2, on="CompanyName", how="inner")
##Done! The merged dataframe has a column called "score" that contains the similarity score between the two company names
```
## Training your own LinkTransformer model
Any sentence-transformers model can be used as a backbone by simply adding a pooling layer. Any other transformer on HuggingFace can also be used by specifying the option add_pooling_layer=True.
The model was trained using SupCon loss.
Usage can be found in the package docs.
The training config can be found in the repo with the name LT_training_config.json
To replicate the training, you can download the file and specify the path in the config_path argument of the training function. You can also override the config by specifying the training_args argument.
Here is an example.
```python
##Consider the example in the paper that has a dataset of Mexican products and their tariff codes from 1947 and 1948, and we want to train a model to link the two tariff codes.
##dataset_path and LINKAGE_CONFIG_PATH are placeholders for your dataset and training config file.
import linktransformer as lt

saved_model_path = lt.train_model(
model_path="hiiamsid/sentence_similarity_spanish_es",
dataset_path=dataset_path,
left_col_names=["description47"],
right_col_names=['description48'],
left_id_name=['tariffcode47'],
right_id_name=['tariffcode48'],
log_wandb=False,
config_path=LINKAGE_CONFIG_PATH,
training_args={"num_epochs": 1}
)
```
You can also use this package for deduplication (clusters a df on the supplied key column). Merging a fine class (like product) to a coarse class (like HS code) is also possible.
Read our paper and the documentation for more!
## Evaluation Results
<!--- Describe how your model was evaluated -->
You can evaluate the model using the [LinkTransformer](https://github.com/dell-research-harvard/linktransformer) package's inference functions.
We have provided a few datasets in the package for you to try out. We plan to host more datasets on Huggingface and our website (Coming soon) that you can take a look at.
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 186000 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.dataloader._InfiniteConstantSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`linktransformer.modified_sbert.losses.SupConLoss_wandb`
Parameters of the fit()-Method:
```
{
"epochs": 6,
"evaluation_steps": 18600,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-06
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1116000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
LinkTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0005
|
bigmorning
| 2023-09-04T20:13:43Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T20:13:37Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.2479
- Train Accuracy: 0.0119
- Train Wermet: 0.8226
- Validation Loss: 3.6101
- Validation Accuracy: 0.0109
- Validation Wermet: 0.8696
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
nrshoudi/wav2vec2-large-xls-r-300m-Arabic-phoneme-Experiment3
|
nrshoudi
| 2023-09-04T20:12:10Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-02T18:59:49Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-Arabic-phoneme-Experiment3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Arabic-phoneme-Experiment3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0054
- Per: 0.0179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 30.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Per |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.0415 | 1.0 | 102 | 3.1505 | 1.0 |
| 3.1122 | 2.0 | 204 | 3.0968 | 1.0 |
| 3.0896 | 3.0 | 306 | 3.0042 | 1.0 |
| 2.953 | 4.0 | 408 | 2.8056 | 1.0 |
| 2.444 | 5.0 | 510 | 1.7510 | 0.6859 |
| 1.1295 | 6.0 | 612 | 0.3595 | 0.1672 |
| 0.3895 | 7.0 | 714 | 0.1119 | 0.0782 |
| 0.2175 | 8.0 | 816 | 0.0607 | 0.0477 |
| 0.1353 | 9.0 | 918 | 0.0317 | 0.0247 |
| 0.0946 | 10.0 | 1020 | 0.0288 | 0.0286 |
| 0.0835 | 11.0 | 1122 | 0.0291 | 0.0370 |
| 0.0673 | 12.0 | 1224 | 0.0231 | 0.0261 |
| 0.0583 | 13.0 | 1326 | 0.0181 | 0.0199 |
| 0.0444 | 14.0 | 1428 | 0.0190 | 0.0266 |
| 0.041 | 15.0 | 1530 | 0.0134 | 0.0254 |
| 0.0357 | 16.0 | 1632 | 0.0122 | 0.0233 |
| 0.0301 | 17.0 | 1734 | 0.0098 | 0.0338 |
| 0.0265 | 18.0 | 1836 | 0.0099 | 0.0241 |
| 0.0241 | 19.0 | 1938 | 0.0098 | 0.0207 |
| 0.0258 | 20.0 | 2040 | 0.0089 | 0.0203 |
| 0.0234 | 21.0 | 2142 | 0.0104 | 0.0242 |
| 0.0193 | 22.0 | 2244 | 0.0126 | 0.0301 |
| 0.0186 | 23.0 | 2346 | 0.0073 | 0.0274 |
| 0.0155 | 24.0 | 2448 | 0.0073 | 0.0238 |
| 0.0143 | 25.0 | 2550 | 0.0056 | 0.0164 |
| 0.0134 | 26.0 | 2652 | 0.0060 | 0.0199 |
| 0.012 | 27.0 | 2754 | 0.0052 | 0.0206 |
| 0.0122 | 28.0 | 2856 | 0.0054 | 0.0179 |
| 0.0105 | 29.0 | 2958 | 0.0055 | 0.0188 |
| 0.0098 | 30.0 | 3060 | 0.0054 | 0.0179 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
actionpace/Huginn-13b-V4
|
actionpace
| 2023-09-04T20:09:48Z | 0 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-04T19:31:44Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* Huginn-13b-V4_Q5_1_4K.gguf
* Huginn-13b-V4_Q5_1_8K.gguf
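A minimal sketch of loading one of these quants with `llama-cpp-python` (the local file path and the prompt format are assumptions; any GGUF-capable runtime such as llama.cpp works the same way):
```python
from llama_cpp import Llama

# Assumes the 4K-context quant has been downloaded locally.
llm = Llama(model_path="./Huginn-13b-V4_Q5_1_4K.gguf", n_ctx=4096)
out = llm("### Instruction:\nWrite a haiku about autumn.\n\n### Response:\n", max_tokens=64)
print(out["choices"][0]["text"])
```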
**Source:** [The-Face-Of-Goonery](https://huggingface.co/The-Face-Of-Goonery)
**Source Model:** [Huginn-13b-V4](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-V4)
**Source models for The-Face-Of-Goonery/Huginn-13b-V4 (Merge)**
- [The-Face-Of-Goonery/Huginn-v3-13b](https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b) ([Ref](https://huggingface.co/actionpace/Huginn-v3-13b))
- [Sao10K/Mythical-Destroyer-L2-13B](https://huggingface.co/Sao10K/Mythical-Destroyer-L2-13B) ([Ref](https://huggingface.co/actionpace/Mythical-Destroyer-L2-13B))
|
ushnahabbasi99/whisper-small-dv
|
ushnahabbasi99
| 2023-09-04T19:57:31Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-28T13:11:03Z |
---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Ushnah Abbasi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 12.72733595298536
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Ushnah Abbasi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1677
- Wer Ortho: 62.0238
- Wer: 12.7273
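A minimal transcription sketch using the ASR pipeline (the audio file path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ushnahabbasi99/whisper-small-dv")
print(asr("sample_dhivehi.wav")["text"])
```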
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1225 | 1.63 | 500 | 0.1677 | 62.0238 | 12.7273 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
venetis/distilbert-base-uncased-finetuned-3d-sentiment
|
venetis
| 2023-09-04T19:52:13Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T16:12:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-base-uncased-finetuned-3d-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-3d-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6641
- Accuracy: 0.7366
- Precision: 0.7377
- Recall: 0.7366
- F1: 0.7364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 12762
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8078 | 1.0 | 3190 | 0.8133 | 0.6628 | 0.6885 | 0.6628 | 0.6607 |
| 0.6227 | 2.0 | 6380 | 0.7637 | 0.6855 | 0.7103 | 0.6855 | 0.6849 |
| 0.5431 | 3.0 | 9570 | 0.6889 | 0.7047 | 0.7201 | 0.7047 | 0.7017 |
| 0.4585 | 4.0 | 12760 | 0.6641 | 0.7366 | 0.7377 | 0.7366 | 0.7364 |
| 0.3455 | 5.0 | 15950 | 0.8322 | 0.7203 | 0.7323 | 0.7203 | 0.7187 |
| 0.223 | 6.0 | 19140 | 0.9541 | 0.7205 | 0.7316 | 0.7205 | 0.7204 |
| 0.145 | 7.0 | 22330 | 1.1726 | 0.7196 | 0.7305 | 0.7196 | 0.7200 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
Jana1994/wav2vec2-large-xls-r-300m-jana-colab
|
Jana1994
| 2023-09-04T19:51:58Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-31T08:26:49Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-jana-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: cy
split: test
args: cy
metrics:
- name: Wer
type: wer
value: 0.6497412901000345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-jana-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8913
- Wer: 0.6497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.6444 | 1.67 | 200 | 2.9379 | 1.0 |
| 2.7964 | 3.33 | 400 | 1.9912 | 0.9927 |
| 1.1945 | 5.0 | 600 | 0.9492 | 0.7889 |
| 0.6065 | 6.67 | 800 | 0.8534 | 0.7137 |
| 0.3859 | 8.33 | 1000 | 0.8933 | 0.6689 |
| 0.2724 | 10.0 | 1200 | 0.8913 | 0.6497 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
actionpace/Huginn-13b-v1.2
|
actionpace
| 2023-09-04T19:51:30Z | 23 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-04T19:18:10Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* Huginn-13b-v1.2_Q5_1_4K.gguf
* Huginn-13b-v1.2_Q5_1_8K.gguf
**Source:** [The-Face-Of-Goonery](https://huggingface.co/The-Face-Of-Goonery)
**Source Model:** [Huginn-13b-v1.2](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-v1.2)
**Source models for The-Face-Of-Goonery/Huginn-13b-v1.2 (Merge)**
- [elinas/chronos-13b](https://huggingface.co/elinas/chronos-13b) ([Ref](https://huggingface.co/actionpace/chronos-13b))
- [jondurbin/airoboros-l2-13b-gpt4-1.4.1](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-1.4.1)
- [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) ([Ref](https://huggingface.co/actionpace/Nous-Hermes-Llama2-13b))
- [stabilityai/StableBeluga-13B](https://huggingface.co/stabilityai/StableBeluga-13B)
- [Gryphe/MythoLogic-L2-13b](https://huggingface.co/Gryphe/MythoLogic-L2-13b)
- [The-Face-Of-Goonery/LegerDemain-FP16](https://huggingface.co/The-Face-Of-Goonery/LegerDemain-FP16)
- [lemonilia/limarp-llama2](https://huggingface.co/lemonilia/limarp-llama2) (Lora)
**Models utilizing The-Face-Of-Goonery/Huginn-13b-v1.2**
- [Undi95/UndiMix-v1-13b](https://huggingface.co/Undi95/UndiMix-v1-13b) ([Ref](https://huggingface.co/actionpace/UndiMix-v1-13b)) (Merge)
- [Undi95/ReMM-L2-13B](https://huggingface.co/Undi95/ReMM-L2-13B) ([Ref](https://huggingface.co/actionpace/ReMM-L2-13B)) (Merge)
|
nbogdan/flant5-small-2ex-bridging-1epochs
|
nbogdan
| 2023-09-04T19:49:58Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T19:49:49Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-small-2ex-bridging-1epochs` for google/flan-t5-small
An [adapter](https://adapterhub.ml) for the `google/flan-t5-small` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-small")
adapter_name = model.load_adapter("nbogdan/flant5-small-2ex-bridging-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
dmatekenya/wav2vec2-large-xls-r-300m-chichewa
|
dmatekenya
| 2023-09-04T19:47:30Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T17:49:52Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-chichewa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-chichewa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.9669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.2028 | 3.51 | 400 | inf | 0.9999 |
| 2.5353 | 7.02 | 800 | inf | 0.9743 |
| 1.8464 | 10.53 | 1200 | inf | 0.9777 |
| 1.6672 | 14.04 | 1600 | inf | 0.9669 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nbogdan/flant5-xxl-0ex-overall-1epochs
|
nbogdan
| 2023-09-04T19:45:05Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"t5",
"adapterhub:self-explanations",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T19:44:19Z |
---
tags:
- adapter-transformers
- t5
- adapterhub:self-explanations
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-xxl-0ex-overall-1epochs` for google/flan-t5-xxl
An [adapter](https://adapterhub.ml) for the `google/flan-t5-xxl` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-xxl")
adapter_name = model.load_adapter("nbogdan/flant5-xxl-0ex-overall-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
jorgeortizfuentes/chilean-spanish-hate-speech
|
jorgeortizfuentes
| 2023-09-04T19:42:36Z | 110 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"es",
"dataset:jorgeortizfuentes/toxicity_spanish_hate_speech_v2",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T19:35:56Z |
---
language:
- es
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- jorgeortizfuentes/toxicity_spanish_hate_speech_v2
metrics:
- f1
model-index:
- name: hate_speech-dv2-patana-chilean-spanish-bert-8k0iqdv2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: jorgeortizfuentes/toxicity_spanish_hate_speech_v2
type: jorgeortizfuentes/toxicity_spanish_hate_speech_v2
split: validation
metrics:
- name: F1
type: f1
value: 0.8160919540229885
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hate_speech-dv2-patana-chilean-spanish-bert-8k0iqdv2
This model is a fine-tuned version of [dccuchile/patana-chilean-spanish-bert](https://huggingface.co/dccuchile/patana-chilean-spanish-bert) on the jorgeortizfuentes/toxicity_spanish_hate_speech_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1628
- F1: 0.8161
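A minimal usage sketch with the text-classification pipeline (the example sentence is a placeholder):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="jorgeortizfuentes/chilean-spanish-hate-speech")
print(clf("Texto de ejemplo en español chileno."))
```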
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0528 | 5.0 | 430 | 0.1698 | 0.7376 |
| 0.003 | 10.0 | 860 | 0.1628 | 0.8161 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|