modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]: 2020-02-15 11:33:14 to 2025-07-29 00:47:35) | downloads (int64: 0 to 223M) | likes (int64: 0 to 11.7k) | library_name (string, 534 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC]: 2022-03-02 23:29:04 to 2025-07-29 00:46:31) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
gmongaras/Wizard_7B_Reddit_Political_2019_8bit
|
gmongaras
| 2023-09-11T18:38:54Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2023-09-10T15:26:44Z |
---
license: openrail
---
Model from: https://huggingface.co/TheBloke/wizardLM-7B-HF/tree/main
Trained on: https://huggingface.co/datasets/gmongaras/reddit_political_2019
For about 6000 steps with a batch size of 8, 2 gradient accumulation steps, and LoRA adapters on all layers.
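A minimal loading sketch for this 8-bit checkpoint, assuming `transformers` with `bitsandbytes` and `accelerate` installed (the prompt string is a placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "gmongaras/Wizard_7B_Reddit_Political_2019_8bit"

# The checkpoint is stored in 8-bit; loading it requires bitsandbytes.
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", load_in_8bit=True)

prompt = "What were the main political topics on Reddit in 2019?"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```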
|
Eitanli/distilbert-qa-checkpoint-v5
|
Eitanli
| 2023-09-11T18:19:46Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-13T13:25:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: distilbert-base-uncased
model-index:
- name: distilbert-qa-checkpoint-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-qa-checkpoint-v5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4904
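A minimal inference sketch with the standard `transformers` question-answering pipeline (the question and context strings are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Eitanli/distilbert-qa-checkpoint-v5")

# Placeholder question/context pair
result = qa(
    question="What was the model fine-tuned for?",
    context="This DistilBERT checkpoint was fine-tuned for extractive question answering.",
)
print(result["answer"], result["score"])
```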
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3912 | 1.0 | 2059 | 0.3897 |
| 0.3313 | 2.0 | 4118 | 0.3449 |
| 0.2679 | 3.0 | 6177 | 0.3508 |
| 0.2323 | 4.0 | 8236 | 0.3489 |
| 0.2047 | 5.0 | 10295 | 0.3578 |
| 0.1913 | 6.0 | 12354 | 0.4529 |
| 0.1821 | 7.0 | 14413 | 0.4904 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
adeep028/bert-fine-tuned-cola
|
adeep028
| 2023-09-11T18:10:54Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-11T17:44:14Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6118771035334829
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7565
- Matthews Correlation: 0.6119
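For reference, the Matthews correlation reported above can be computed from model predictions with the `evaluate` library (a sketch; the prediction and label arrays are placeholders):
```python
import evaluate

metric = evaluate.load("matthews_correlation")

# Placeholder predictions and labels (for CoLA: 0 = unacceptable, 1 = acceptable)
predictions = [1, 0, 1, 1, 0]
references = [1, 0, 0, 1, 0]
print(metric.compute(predictions=predictions, references=references))
```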
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4374 | 1.0 | 1069 | 0.4163 | 0.5558 |
| 0.3114 | 2.0 | 2138 | 0.6548 | 0.6006 |
| 0.1875 | 3.0 | 3207 | 0.7565 | 0.6119 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
rmpmalheiro/taxi-v3
|
rmpmalheiro
| 2023-09-11T18:03:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-11T18:03:07Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="rmpmalheiro/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
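`load_from_hub` is a helper defined in the Deep RL Course notebook rather than a library function. A minimal sketch of it, and of acting greedily with the loaded Q-table, is below; it assumes the pickle holds a dict with an `env_id` string and a Q-table under a `qtable` key (both names follow the course convention and are assumptions here):
```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    # Download the pickled model dict from the Hub and load it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="rmpmalheiro/taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

# Act greedily with respect to the learned Q-table ("qtable" key is an assumption).
qtable = model["qtable"]
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```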
|
reza93v/distilbert-base-uncased-finetuned-imdb
|
reza93v
| 2023-09-11T17:58:02Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-11T17:06:04Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1640
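A minimal inference sketch with the `transformers` fill-mask pipeline (the example sentence is a placeholder):
```python
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="reza93v/distilbert-base-uncased-finetuned-imdb")

# Placeholder sentence; [MASK] is the DistilBERT mask token.
for prediction in mask_filler("This movie was a great [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```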
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3546 | 1.0 | 13 | 2.2305 |
| 2.3243 | 2.0 | 26 | 2.2225 |
| 2.243 | 3.0 | 39 | 2.1640 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
rmpmalheiro/q-FrozenLake-v1-4x4-noSlippery
|
rmpmalheiro
| 2023-09-11T17:56:38Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-11T17:56:35Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="rmpmalheiro/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jjmcarrascosa/vit_receipts_classifier
|
jjmcarrascosa
| 2023-09-11T17:47:19Z | 236 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-08-26T18:57:00Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- f1
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: vit_receipts_classifier
results: []
---
# vit_receipts_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cord, rvl-cdip, visual-genome and an external receipt dataset to carry out binary classification (`ticket` vs `no_ticket`).
Ticket here is used as a synonym for "receipt".
It achieves the following results on the evaluation set, which contains pictures from the above datasets in scanned, photographed, or mobile-picture formats (color and grayscale):
- Loss: 0.0116
- F1: 0.9991
## Model description
This model is a binary classifier fine-tuned from ViT to predict whether an input image is a picture/scan of receipt(s) or something else.
## Intended uses & limitations
Use this model to classify your images into tickets or non-tickets. Within the tickets group, you can then apply multimodal information extraction, such as visual named entity recognition, to extract the ticket items, amounts, total, etc. Check the CORD dataset for more information.
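A minimal inference sketch with the `transformers` image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jjmcarrascosa/vit_receipts_classifier")

# Placeholder path to a local image; a URL also works.
predictions = classifier("my_receipt_photo.jpg")
print(predictions)  # scores for the `ticket` and `no_ticket` labels
```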
## Training and evaluation data
This model used 2 datasets as positive class (`ticket`):
- `cord`
- `https://expressexpense.com/blog/free-receipt-images-ocr-machine-learning-dataset/`
For the negative class (`no_ticket`), the following datasets were used:
- A subset of `RVL-CDIP`
- A subset of `visual-genome`
## Training procedure
Datasets were loaded with different distributions of data for the positive and negative classes. Then, normalization and resizing were carried out to adapt the images to ViT's expected input.
Different runs were carried out, changing the data distribution and the hyperparameters to maximize F1.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0026 | 0.28 | 500 | 0.0187 | 0.9982 |
| 0.0186 | 0.56 | 1000 | 0.0116 | 0.9991 |
| 0.0006 | 0.84 | 1500 | 0.0044 | 0.9997 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.11.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
emre/switch-base-8-finetuned-samsum
|
emre
| 2023-09-11T17:45:14Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"switch_transformers",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/switch-base-8",
"base_model:finetune:google/switch-base-8",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-18T16:50:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
base_model: google/switch-base-8
model-index:
- name: switch-base-8-finetuned-samsum
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: samsum
type: samsum
config: samsum
split: train
args: samsum
metrics:
- type: rouge
value: 46.5651
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# switch-base-8-finetuned-samsum
This model is a fine-tuned version of [google/switch-base-8](https://huggingface.co/google/switch-base-8) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4606
- Rouge1: 46.5651
- Rouge2: 23.2378
- Rougel: 39.4484
- Rougelsum: 43.1011
- Gen Len: 17.0183
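A minimal inference sketch with the `transformers` text2text-generation pipeline (the dialogue is a placeholder in the SAMSum style):
```python
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="emre/switch-base-8-finetuned-samsum")

# Placeholder dialogue in the SAMSum style.
dialogue = (
    "Anna: Are we still meeting at 6?\n"
    "Ben: Yes, see you at the cafe.\n"
    "Anna: Great, I'll bring the report."
)
print(summarizer(dialogue, max_length=60)[0]["generated_text"])
```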
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8829 | 1.0 | 3683 | 1.5154 | 46.3805 | 23.0982 | 39.0612 | 43.0142 | 17.6296 |
| 1.6207 | 2.0 | 7366 | 1.4578 | 47.7434 | 24.9471 | 40.6481 | 44.351 | 17.2066 |
| 1.442 | 3.0 | 11049 | 1.4360 | 47.6903 | 24.9954 | 40.713 | 44.3487 | 17.0501 |
| 1.3103 | 4.0 | 14732 | 1.4396 | 48.4517 | 25.7725 | 41.5212 | 45.1211 | 16.9071 |
| 1.2393 | 5.0 | 18415 | 1.4445 | 48.4002 | 25.8727 | 41.5361 | 45.0467 | 16.9804 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gauravvaid/codeparrot-ds
|
gauravvaid
| 2023-09-11T17:34:51Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-06T12:27:42Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
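The hyperparameters above map onto `transformers.TrainingArguments` roughly as in the sketch below (a sketch only; the output directory is an assumption, and a single training device is assumed so that 32 x 8 gives the listed total batch size of 256):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="codeparrot-ds",          # assumed output directory
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=8,       # 32 x 8 = effective batch size of 256
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
)
```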
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
liadraz/CleanRl-PPO-U8-CartPole
|
liadraz
| 2023-09-11T17:31:04Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-11T17:30:51Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 439.90 +/- 100.52
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'PPOCleanRL'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'ThomasSimonini/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
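The last two entries follow from the rollout settings; a quick consistency check:
```python
num_envs = 4
num_steps = 128
num_minibatches = 4

batch_size = num_envs * num_steps               # 4 * 128 = 512 transitions per rollout
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128
print(batch_size, minibatch_size)               # 512 128
```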
|
tommyadams/finetuned_falconb6
|
tommyadams
| 2023-09-11T17:28:55Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-step-50K-105b",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-step-50K-105b",
"license:apache-2.0",
"region:us"
] | null | 2023-09-10T22:00:12Z |
---
license: apache-2.0
base_model: PY007/TinyLlama-1.1B-step-50K-105b
tags:
- generated_from_trainer
model-index:
- name: finetuned_falconb6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_falconb6
This model is a fine-tuned version of [PY007/TinyLlama-1.1B-step-50K-105b](https://huggingface.co/PY007/TinyLlama-1.1B-step-50K-105b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 3
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Koltunov-Matthew/my_bart_model
|
Koltunov-Matthew
| 2023-09-11T17:23:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-08T07:43:47Z |
---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_bart_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_bart_model
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8325
- Rouge1: 0.3004
- Rouge2: 0.1539
- Rougel: 0.244
- Rougelsum: 0.2441
- Gen Len: 59.9356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.6223 | 1.0 | 27000 | 1.8325 | 0.3004 | 0.1539 | 0.244 | 0.2441 | 59.9356 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
bigmorning/whisper_4_with_init_sun_syl_wd_0_lr_en2_0010
|
bigmorning
| 2023-09-11T17:15:58Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-11T17:15:49Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_syl_wd_0_lr_en2_0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_syl_wd_0_lr_en2_0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.8685
- Train Accuracy: 0.0113
- Train Wermet: 0.9890
- Train Wermet Syl: 0.9897
- Validation Loss: 4.1857
- Validation Accuracy: 0.0113
- Validation Wermet: 0.9851
- Validation Wermet Syl: 0.9843
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.01, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
- training_precision: float32
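A sketch of constructing the optimizer described above with the TensorFlow side of `transformers` (assuming `AdamWeightDecay` is exported by the installed version):
```python
from transformers import AdamWeightDecay

# Mirrors the optimizer config listed above.
optimizer = AdamWeightDecay(
    learning_rate=0.01,
    weight_decay_rate=0.0,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```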
### Training results
| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 39.6121 | 0.0057 | 33.2649 | 25.5768 | 4.5339 | 0.0113 | 0.9851 | 0.9843 | 0 |
| 5.3698 | 0.0107 | 12.0116 | 9.0545 | 4.3408 | 0.0112 | 0.9919 | 0.9915 | 1 |
| 5.1979 | 0.0109 | 9.4008 | 7.1909 | 4.2108 | 0.0113 | 0.9851 | 0.9843 | 2 |
| 5.0669 | 0.0110 | 7.0382 | 5.3339 | 4.1662 | 0.0113 | 0.9851 | 0.9843 | 3 |
| 4.9546 | 0.0111 | 4.8506 | 3.7351 | 4.3022 | 0.0112 | 0.9870 | 0.9854 | 4 |
| 4.9453 | 0.0111 | 3.9228 | 3.1750 | 4.1194 | 0.0113 | 0.9851 | 0.9843 | 5 |
| 4.9123 | 0.0112 | 2.2402 | 1.9643 | 4.1865 | 0.0112 | 1.0000 | 1.0000 | 6 |
| 4.8957 | 0.0112 | 1.7673 | 1.5892 | 4.1150 | 0.0112 | 1.0000 | 0.9999 | 7 |
| 4.8959 | 0.0112 | 2.2166 | 1.9601 | 4.1185 | 0.0113 | 0.9851 | 0.9843 | 8 |
| 4.8685 | 0.0113 | 0.9890 | 0.9897 | 4.1857 | 0.0113 | 0.9851 | 0.9843 | 9 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
esrcse/llama2-qlora-finetunined-french
|
esrcse
| 2023-09-11T17:05:56Z | 13 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-11T17:05:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
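The same quantization setup can be expressed with `transformers`' `BitsAndBytesConfig` when loading the base model and attaching this adapter with `peft` (a sketch; the base Llama 2 repo id is an assumption, since the card does not name the base model):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirrors the bitsandbytes config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model_id = "meta-llama/Llama-2-7b-hf"  # assumption: base model is not stated in the card
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Attach the QLoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base_model, "esrcse/llama2-qlora-finetunined-french")
```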
### Framework versions
- PEFT 0.6.0.dev0
|
hanlforever/xlm-roberta-base-finetuned-panx-all
|
hanlforever
| 2023-09-11T17:03:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-11T16:05:39Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1416
- F1: 0.8615
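A minimal inference sketch with the `transformers` token-classification pipeline (the example sentence is a placeholder):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hanlforever/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)

# Placeholder multilingual sentence.
print(ner("Angela Merkel besuchte Paris im Juli."))
```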
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2723 | 1.0 | 525 | 0.1684 | 0.8139 |
| 0.125 | 2.0 | 1050 | 0.1379 | 0.8538 |
| 0.0783 | 3.0 | 1575 | 0.1416 | 0.8615 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.11.0
|
bigmorning/whisper_4_with_init_sun_syl_wd_0_lr_en2_0005
|
bigmorning
| 2023-09-11T17:00:59Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-11T17:00:51Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_syl_wd_0_lr_en2_0005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_syl_wd_0_lr_en2_0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.9546
- Train Accuracy: 0.0111
- Train Wermet: 4.8506
- Train Wermet Syl: 3.7351
- Validation Loss: 4.3022
- Validation Accuracy: 0.0112
- Validation Wermet: 0.9870
- Validation Wermet Syl: 0.9854
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.01, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 39.6121 | 0.0057 | 33.2649 | 25.5768 | 4.5339 | 0.0113 | 0.9851 | 0.9843 | 0 |
| 5.3698 | 0.0107 | 12.0116 | 9.0545 | 4.3408 | 0.0112 | 0.9919 | 0.9915 | 1 |
| 5.1979 | 0.0109 | 9.4008 | 7.1909 | 4.2108 | 0.0113 | 0.9851 | 0.9843 | 2 |
| 5.0669 | 0.0110 | 7.0382 | 5.3339 | 4.1662 | 0.0113 | 0.9851 | 0.9843 | 3 |
| 4.9546 | 0.0111 | 4.8506 | 3.7351 | 4.3022 | 0.0112 | 0.9870 | 0.9854 | 4 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
turing-motors/heron-chat-git-ja-stablelm-base-7b-v0
|
turing-motors
| 2023-09-11T16:55:23Z | 31 | 2 |
transformers
|
[
"transformers",
"pytorch",
"git_japanese_stablelm_alpha",
"text-generation",
"heron",
"vision",
"image-captioning",
"VQA",
"image-to-text",
"custom_code",
"ja",
"arxiv:2205.14100",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"region:us"
] |
image-to-text
| 2023-09-06T09:19:59Z |
---
language:
- ja
tags:
- heron
- vision
- image-captioning
- VQA
pipeline_tag: image-to-text
license:
- cc-by-nc-4.0
inference: false
---
# Heron GIT Japanese StableLM Base 7B

## Model Details
Heron GIT Japanese StableLM Base 7B is a vision-language model that can converse about input images.<br>
This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details.
## Usage
Follow [the installation guide](https://github.com/turingmotors/heron/tree/dev-0.0.1#1-clone-this-repository).
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor
from heron.models.git_llm.git_japanese_stablelm_alpha import GitJapaneseStableLMAlphaForCausalLM
device_id = 0
# prepare a pretrained model
model = GitJapaneseStableLMAlphaForCausalLM.from_pretrained(
'turing-motors/heron-chat-git-ja-stablelm-base-7b-v0', torch_dtype=torch.float16
)
model.eval()
model.to(f"cuda:{device_id}")
# prepare a processor
processor = AutoProcessor.from_pretrained('turing-motors/heron-chat-git-ja-stablelm-base-7b-v0')
# prepare inputs
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = f"##human: これは何の写真ですか?\n##gpt: "
# do preprocessing
inputs = processor(
text,
image,
return_tensors="pt",
truncation=True,
)
inputs = {k: v.to(f"cuda:{device_id}") for k, v in inputs.items()}
# set eos token
eos_token_id_list = [
processor.tokenizer.pad_token_id,
processor.tokenizer.eos_token_id,
]
# do inference
with torch.no_grad():
out = model.generate(**inputs, max_length=256, do_sample=False, temperature=0., eos_token_id=eos_token_id_list)
# print result
print(processor.tokenizer.batch_decode(out)[0])
```
## Model Details
* **Developed by**: [Turing Inc.](https://www.turing-motors.com/)
* **Adaptor type**: [GIT](https://arxiv.org/abs/2205.14100)
* **Language Model**: [Japanese StableLM Base Alpha](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b)
* **Language(s)**: Japanese
### Training
This model was initially trained with the Adaptor using STAIR Captions. In the second phase, it was fine-tuned with LLaVA-Instruct-150K-JA and Japanese Visual Genome using LoRA.
### Training Dataset
- [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA)
- [Japanese STAIR Captions](http://captions.stair.center/)
- [Japanese Visual Genome VQA dataset](https://github.com/yahoojapan/ja-vg-vqa)
## Use and Limitations
### Intended Use
This model is intended for use in chat-like applications and for research purposes.
### Limitations
The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage.
## How to cite
```bibtex
@misc{GitJapaneseStableLM,
url = {[https://huggingface.co/turing-motors/heron-chat-git-ja-stablelm-base-7b-v0](https://huggingface.co/turing-motors/heron-chat-git-ja-stablelm-base-7b-v0)},
title = {Heron GIT Japanese StableLM Base 7B},
author = {Yuichi Inoue and Kotaro Tanahashi and Yu Yamaguchi}
}
```
## Citations
```bibtex
@misc{JapaneseInstructBLIPAlpha,
url = {[https://huggingface.co/stabilityai/japanese-instructblip-alpha](https://huggingface.co/stabilityai/japanese-instructblip-alpha)},
title = {Japanese InstructBLIP Alpha},
author = {Shing, Makoto and Akiba, Takuya}
}
```
---
license: cc-by-nc-4.0
---
|
turing-motors/heron-chat-git-Llama-2-7b-v0
|
turing-motors
| 2023-09-11T16:53:31Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"git_llama",
"text-generation",
"heron",
"vision",
"image-captioning",
"VQA",
"image-to-text",
"en",
"arxiv:2205.14100",
"arxiv:2307.09288",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"region:us"
] |
image-to-text
| 2023-09-07T10:55:05Z |
---
language:
- en
tags:
- heron
- vision
- image-captioning
- VQA
pipeline_tag: image-to-text
license:
- cc-by-nc-4.0
inference: false
---
# Heron GIT Llama 2 Fast 7B

## Model Details
Heron GIT Llama 2 7B is a vision-language model that can converse about input images.<br>
This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details.
## Usage
Follow [the installation guide](https://github.com/turingmotors/heron/#1-clone-this-repository).
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor
from heron.models.git_llm.git_llama import GitLlamaConfig, GitLlamaForCausalLM
device_id = 0
# prepare a pretrained model
model = GitLlamaForCausalLM.from_pretrained(
'turing-motors/heron-chat-git-Llama-2-7b-v0', torch_dtype=torch.float16
)
model.eval()
model.to(f"cuda:{device_id}")
# prepare a processor
processor = AutoProcessor.from_pretrained('turing-motors/heron-chat-git-Llama-2-7b-v0')
# prepare inputs
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = f"##human: What is this picture?\n##gpt: "
# do preprocessing
inputs = processor(
text,
image,
return_tensors="pt",
truncation=True,
)
inputs = {k: v.to(f"cuda:{device_id}") for k, v in inputs.items()}
# set eos token
eos_token_id_list = [
processor.tokenizer.pad_token_id,
processor.tokenizer.eos_token_id,
]
# do inference
with torch.no_grad():
out = model.generate(**inputs, max_length=256, do_sample=False, temperature=0., eos_token_id=eos_token_id_list)
# print result
print(processor.tokenizer.batch_decode(out)[0])
```
## Model Details
* **Developed by**: [Turing Inc.](https://www.turing-motors.com/)
* **Adaptor type**: [GIT](https://arxiv.org/abs/2205.14100)
* **Language Model**: [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
* **Language(s)**: English
### Training
This model was initially trained with the Adaptor using COCO Captions in M3IT. In the second phase, it was fine-tuned with M3IT. Finally, it was trained by instruction tuning with LLaVA-Instruct-150K.
### Training Dataset
- [LLaVA-Instruct-150K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K)
- [M3IT](https://huggingface.co/datasets/MMInstruction/M3IT)
## Use and Limitations
### Intended Use
This model is intended for use in chat-like applications and for research purposes.
### Limitations
The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage.
## How to cite
```bibtex
@misc{GitLlama2,
url = {[https://huggingface.co/turing-motors/heron-chat-git-Llama-2-7b-v0](https://huggingface.co/turing-motors/heron-chat-git-Llama-2-7b-v0)},
title = {Heron GIT Llama 2 7B},
author = {Yuichi Inoue and Kotaro Tanahashi and Yu Yamaguchi}
}
```
## Citations
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
license: cc-by-nc-4.0
---
|
thezeivier/test_grietas_100
|
thezeivier
| 2023-09-11T16:50:20Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-11T16:26:21Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_grietas_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_grietas_100
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0018
- Accuracy: 0.5833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 80
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 320
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.1055 | 0.3 |
| No log | 2.0 | 3 | 1.0141 | 0.6333 |
| No log | 3.0 | 5 | 1.0018 | 0.5833 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Prot10/convnextv2-base-1k-224-for-pre_evaluation
|
Prot10
| 2023-09-11T16:38:01Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-base-1k-224",
"base_model:finetune:facebook/convnextv2-base-1k-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-30T12:27:48Z |
---
license: apache-2.0
base_model: facebook/convnextv2-base-1k-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: convnextv2-base-1k-224-for-pre_evaluation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-base-1k-224-for-pre_evaluation
This model is a fine-tuned version of [facebook/convnextv2-base-1k-224](https://huggingface.co/facebook/convnextv2-base-1k-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3599
- Accuracy: 0.4190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6 | 1.0 | 16 | 1.5316 | 0.2961 |
| 1.5084 | 2.0 | 32 | 1.5061 | 0.2849 |
| 1.5134 | 3.0 | 48 | 1.4968 | 0.3240 |
| 1.4663 | 4.0 | 64 | 1.4607 | 0.3352 |
| 1.4046 | 5.0 | 80 | 1.4509 | 0.3268 |
| 1.4085 | 6.0 | 96 | 1.4423 | 0.3883 |
| 1.3443 | 7.0 | 112 | 1.4005 | 0.4022 |
| 1.3025 | 8.0 | 128 | 1.3599 | 0.4190 |
| 1.2627 | 9.0 | 144 | 1.3638 | 0.3911 |
| 1.2099 | 10.0 | 160 | 1.4058 | 0.3492 |
| 1.2086 | 11.0 | 176 | 1.4431 | 0.3408 |
| 1.1393 | 12.0 | 192 | 1.4143 | 0.3492 |
| 1.1039 | 13.0 | 208 | 1.4305 | 0.3883 |
| 1.0551 | 14.0 | 224 | 1.5203 | 0.3520 |
| 1.0368 | 15.0 | 240 | 1.5117 | 0.3324 |
| 0.9753 | 16.0 | 256 | 1.4545 | 0.3771 |
| 0.938 | 17.0 | 272 | 1.5396 | 0.3352 |
| 0.899 | 18.0 | 288 | 1.5770 | 0.3408 |
| 0.8629 | 19.0 | 304 | 1.7106 | 0.3128 |
| 0.8674 | 20.0 | 320 | 1.5864 | 0.3352 |
| 0.7789 | 21.0 | 336 | 1.6129 | 0.3408 |
| 0.7426 | 22.0 | 352 | 1.6353 | 0.3603 |
| 0.7677 | 23.0 | 368 | 1.6793 | 0.3464 |
| 0.7172 | 24.0 | 384 | 1.6759 | 0.3575 |
| 0.6809 | 25.0 | 400 | 1.7013 | 0.3659 |
| 0.6619 | 26.0 | 416 | 1.7108 | 0.3631 |
| 0.6656 | 27.0 | 432 | 1.7327 | 0.3715 |
| 0.6258 | 28.0 | 448 | 1.7378 | 0.3547 |
| 0.6173 | 29.0 | 464 | 1.7461 | 0.3603 |
| 0.6214 | 30.0 | 480 | 1.7475 | 0.3520 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
iven5880/distilbert-base-uncased-finetuned-imdb
|
iven5880
| 2023-09-11T16:34:41Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-08T01:39:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6985 | 1.0 | 157 | 2.5612 |
| 2.562 | 2.0 | 314 | 2.4226 |
| 2.5316 | 3.0 | 471 | 2.4218 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.13.2
|
MartinFLL/ai-voices
|
MartinFLL
| 2023-09-11T16:29:24Z | 0 | 2 | null |
[
"license:other",
"region:us"
] | null | 2023-07-01T01:27:36Z |
---
license: other
---
This repository contains all the AI voices I've trained using RVC v2.
All were trained using my NVIDIA GeForce RTX 3060 Ti.
If you use any of these, please credit me, although it's not necessary. I would love to see what you make with these models.
You can find more info on these models (and more) on the AI HUB discord server. https://discord.gg/aihub
|
gmurro/bart-large-finetuned-filtered-spotify-podcast-summ
|
gmurro
| 2023-09-11T16:26:07Z | 687 | 11 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"arxiv:2004.04270",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-16T16:04:16Z |
---
license: mit
tags:
- generated_from_keras_callback
base_model: facebook/bart-large-cnn
model-index:
- name: bart-large-finetuned-filtered-spotify-podcast-summ
results: []
---
# bart-large-finetuned-filtered-spotify-podcast-summ
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the [Spotify Podcast Dataset](https://arxiv.org/abs/2004.04270). Take a look at the [github repository](https://github.com/TheOnesThatWereAbroad/PodcastSummarization) of this project.
It achieves the following results during training:
- Train Loss: 2.2967
- Validation Loss: 2.8316
- Epoch: 2
## Intended uses & limitations
This model is intended to be used for automatic podcast summarisation. Given a podcast transcript as input, the objective is to provide a short text summary that a user might read when deciding whether to listen to a podcast. The summary should accurately convey the content of the podcast, be human-readable, and be short enough to be quickly read on a smartphone screen.
## Training and evaluation data
In our solution, an extractive module is developed to select salient chunks from the transcript, which serve as the input to an abstractive summarizer.
Extensive pre-processing is performed on the creator-provided descriptions, selecting a subset of the corpus that is suitable for training the supervised model.
We split the filtered dataset into train/dev sets of 69,336/7,705 episodes.
The test set consists of 1,027 episodes; only 1,025 have been used because two of them did not contain an episode description.
## How to use
The model can be used for summarization as follows:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="gmurro/bart-large-finetuned-filtered-spotify-podcast-summ", tokenizer="gmurro/bart-large-finetuned-filtered-spotify-podcast-summ")
summary = summarizer(podcast_transcript, min_length=39, max_length=250)
print(summary[0]['summary_text'])
```
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.0440 | 2.8733 | 0 |
| 2.6085 | 2.8549 | 1 |
| 2.2967 | 2.8316 | 2 |
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.9.1
- Datasets 2.3.1
- Tokenizers 0.12.1
## Authors
| Name | Surname | Email | Username |
| :-------: | :-------: | :------------------------------------: | :---------------------------------------------------: |
| Giuseppe | Boezio | `[email protected]` | [_giuseppeboezio_](https://github.com/giuseppeboezio) |
| Simone | Montali | `[email protected]` | [_montali_](https://github.com/montali) |
| Giuseppe | Murro | `[email protected]` | [_gmurro_](https://github.com/gmurro) |
|
ldos/text_shortening_model_v31
|
ldos
| 2023-09-11T16:05:54Z | 51 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-11T15:08:02Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_shortening_model_v31
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v31
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7416
- Rouge1: 0.4961
- Rouge2: 0.2712
- Rougel: 0.4388
- Rougelsum: 0.4386
- Bert precision: 0.8749
- Bert recall: 0.8711
- Average word count: 8.5135
- Max word count: 16
- Min word count: 3
- Average token count: 13.1592
- % shortened texts with length > 12: 10.2102
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 1.1978 | 1.0 | 145 | 1.5250 | 0.4953 | 0.2842 | 0.4528 | 0.4524 | 0.8806 | 0.8681 | 7.8919 | 18 | 3 | 12.4234 | 4.2042 |
| 1.0092 | 2.0 | 290 | 1.4421 | 0.5257 | 0.3053 | 0.4698 | 0.4689 | 0.875 | 0.8809 | 9.6006 | 18 | 4 | 14.3574 | 19.2192 |
| 0.8932 | 3.0 | 435 | 1.4060 | 0.5266 | 0.3045 | 0.4728 | 0.472 | 0.8766 | 0.8776 | 9.0841 | 18 | 4 | 13.6366 | 14.7147 |
| 0.79 | 4.0 | 580 | 1.4022 | 0.5329 | 0.3136 | 0.4714 | 0.4714 | 0.8802 | 0.8797 | 8.952 | 16 | 4 | 13.6036 | 12.9129 |
| 0.7506 | 5.0 | 725 | 1.4514 | 0.5145 | 0.2935 | 0.4485 | 0.4485 | 0.8745 | 0.8726 | 8.97 | 18 | 4 | 13.6096 | 12.012 |
| 0.6981 | 6.0 | 870 | 1.4602 | 0.5146 | 0.2914 | 0.4566 | 0.4559 | 0.8778 | 0.8762 | 8.958 | 18 | 3 | 13.5195 | 15.3153 |
| 0.6426 | 7.0 | 1015 | 1.4745 | 0.5196 | 0.2973 | 0.4596 | 0.4593 | 0.8759 | 0.8788 | 9.1802 | 16 | 4 | 13.9159 | 14.1141 |
| 0.6251 | 8.0 | 1160 | 1.5026 | 0.5217 | 0.2965 | 0.461 | 0.4611 | 0.8802 | 0.8775 | 8.8198 | 16 | 4 | 13.3393 | 12.012 |
| 0.5901 | 9.0 | 1305 | 1.5890 | 0.5156 | 0.2967 | 0.4606 | 0.4609 | 0.8773 | 0.876 | 8.7718 | 17 | 3 | 13.4655 | 9.6096 |
| 0.5544 | 10.0 | 1450 | 1.6294 | 0.5172 | 0.287 | 0.4562 | 0.4559 | 0.8779 | 0.876 | 8.7688 | 18 | 4 | 13.5195 | 11.7117 |
| 0.5354 | 11.0 | 1595 | 1.6805 | 0.5169 | 0.2871 | 0.457 | 0.4571 | 0.8768 | 0.8774 | 8.994 | 17 | 4 | 13.6486 | 14.1141 |
| 0.5103 | 12.0 | 1740 | 1.7334 | 0.5121 | 0.2824 | 0.4556 | 0.455 | 0.8785 | 0.8745 | 8.5465 | 16 | 3 | 13.1021 | 8.1081 |
| 0.4796 | 13.0 | 1885 | 1.7767 | 0.499 | 0.2763 | 0.442 | 0.4418 | 0.8754 | 0.8739 | 8.6396 | 17 | 4 | 13.3183 | 11.4114 |
| 0.4825 | 14.0 | 2030 | 1.8319 | 0.5114 | 0.2849 | 0.4497 | 0.4501 | 0.8746 | 0.8758 | 8.994 | 17 | 4 | 13.6667 | 12.9129 |
| 0.4572 | 15.0 | 2175 | 1.8613 | 0.5129 | 0.2884 | 0.4546 | 0.4549 | 0.8785 | 0.8757 | 8.6877 | 17 | 3 | 13.3784 | 10.5105 |
| 0.4489 | 16.0 | 2320 | 1.8790 | 0.5144 | 0.2829 | 0.4533 | 0.4536 | 0.8777 | 0.8754 | 8.8078 | 16 | 3 | 13.4955 | 13.2132 |
| 0.4211 | 17.0 | 2465 | 1.9604 | 0.4936 | 0.2641 | 0.4322 | 0.4326 | 0.8735 | 0.8696 | 8.4985 | 17 | 3 | 13.1892 | 9.009 |
| 0.4246 | 18.0 | 2610 | 2.0639 | 0.4951 | 0.2634 | 0.4331 | 0.4334 | 0.8721 | 0.8703 | 8.7538 | 16 | 4 | 13.3453 | 12.6126 |
| 0.4063 | 19.0 | 2755 | 2.0587 | 0.499 | 0.2685 | 0.4378 | 0.4383 | 0.8741 | 0.8707 | 8.5916 | 16 | 3 | 13.3003 | 9.9099 |
| 0.3912 | 20.0 | 2900 | 2.1089 | 0.5068 | 0.2727 | 0.4471 | 0.4469 | 0.8764 | 0.8744 | 8.7538 | 18 | 3 | 13.4625 | 11.1111 |
| 0.3855 | 21.0 | 3045 | 2.1048 | 0.5022 | 0.2704 | 0.4473 | 0.4478 | 0.875 | 0.8728 | 8.6847 | 16 | 4 | 13.3483 | 9.3093 |
| 0.3808 | 22.0 | 3190 | 2.1804 | 0.4977 | 0.2722 | 0.4414 | 0.4412 | 0.875 | 0.8711 | 8.5315 | 17 | 4 | 13.0631 | 10.8108 |
| 0.3851 | 23.0 | 3335 | 2.1740 | 0.4993 | 0.2696 | 0.4442 | 0.4443 | 0.8742 | 0.8719 | 8.5676 | 15 | 3 | 13.2252 | 9.009 |
| 0.3741 | 24.0 | 3480 | 2.1872 | 0.4921 | 0.2683 | 0.4365 | 0.4369 | 0.8728 | 0.8692 | 8.5195 | 17 | 3 | 13.2192 | 8.4084 |
| 0.3604 | 25.0 | 3625 | 2.2617 | 0.4988 | 0.2681 | 0.4421 | 0.4426 | 0.8747 | 0.8705 | 8.5255 | 17 | 3 | 13.2492 | 8.1081 |
| 0.3676 | 26.0 | 3770 | 2.2561 | 0.4931 | 0.2603 | 0.4328 | 0.4331 | 0.874 | 0.8711 | 8.6276 | 15 | 3 | 13.3363 | 11.7117 |
| 0.3799 | 27.0 | 3915 | 2.2404 | 0.4912 | 0.2652 | 0.4329 | 0.433 | 0.8729 | 0.8702 | 8.6517 | 17 | 3 | 13.4414 | 8.1081 |
| 0.3617 | 28.0 | 4060 | 2.2728 | 0.4983 | 0.2704 | 0.4424 | 0.4427 | 0.8756 | 0.8734 | 8.7568 | 17 | 3 | 13.5225 | 11.4114 |
| 0.3588 | 29.0 | 4205 | 2.2695 | 0.4904 | 0.2601 | 0.4331 | 0.4328 | 0.8743 | 0.87 | 8.4775 | 18 | 3 | 13.1592 | 9.009 |
| 0.3567 | 30.0 | 4350 | 2.3006 | 0.4993 | 0.2693 | 0.4419 | 0.4417 | 0.8747 | 0.8737 | 8.8529 | 17 | 3 | 13.5976 | 12.012 |
| 0.3573 | 31.0 | 4495 | 2.3257 | 0.4979 | 0.2669 | 0.4378 | 0.4379 | 0.8743 | 0.8735 | 8.9069 | 18 | 3 | 13.6697 | 12.9129 |
| 0.3471 | 32.0 | 4640 | 2.3513 | 0.4989 | 0.2723 | 0.441 | 0.4405 | 0.8758 | 0.8728 | 8.6246 | 17 | 3 | 13.3063 | 10.8108 |
| 0.3591 | 33.0 | 4785 | 2.3467 | 0.4972 | 0.2751 | 0.4415 | 0.4413 | 0.8742 | 0.8727 | 8.8078 | 17 | 3 | 13.5616 | 10.5105 |
| 0.3401 | 34.0 | 4930 | 2.4229 | 0.4854 | 0.2661 | 0.4313 | 0.4318 | 0.8737 | 0.8701 | 8.5826 | 17 | 3 | 13.2673 | 8.7087 |
| 0.3476 | 35.0 | 5075 | 2.3804 | 0.4895 | 0.2602 | 0.4322 | 0.4326 | 0.874 | 0.8712 | 8.6577 | 17 | 3 | 13.2883 | 9.3093 |
| 0.3473 | 36.0 | 5220 | 2.4242 | 0.4938 | 0.2689 | 0.438 | 0.4387 | 0.8745 | 0.8713 | 8.5976 | 17 | 3 | 13.2432 | 9.3093 |
| 0.3415 | 37.0 | 5365 | 2.3836 | 0.4943 | 0.2617 | 0.4351 | 0.4351 | 0.8751 | 0.8711 | 8.4054 | 17 | 3 | 13.0571 | 8.1081 |
| 0.3549 | 38.0 | 5510 | 2.4110 | 0.501 | 0.2696 | 0.4402 | 0.4406 | 0.8765 | 0.8713 | 8.2282 | 17 | 3 | 12.9459 | 6.6066 |
| 0.3432 | 39.0 | 5655 | 2.4016 | 0.4999 | 0.27 | 0.4387 | 0.4393 | 0.8751 | 0.8712 | 8.5285 | 17 | 3 | 13.2402 | 8.4084 |
| 0.3387 | 40.0 | 5800 | 2.4546 | 0.4986 | 0.2718 | 0.4417 | 0.4422 | 0.8742 | 0.871 | 8.5766 | 17 | 3 | 13.2312 | 9.3093 |
| 0.3351 | 41.0 | 5945 | 2.4478 | 0.4981 | 0.2714 | 0.4367 | 0.4372 | 0.8756 | 0.8722 | 8.4775 | 15 | 3 | 13.1411 | 8.7087 |
| 0.3366 | 42.0 | 6090 | 2.4447 | 0.4961 | 0.2703 | 0.4359 | 0.437 | 0.8746 | 0.8699 | 8.4745 | 16 | 3 | 13.1231 | 9.3093 |
| 0.3324 | 43.0 | 6235 | 2.4974 | 0.4989 | 0.2809 | 0.4428 | 0.4432 | 0.8747 | 0.873 | 8.7147 | 16 | 3 | 13.4565 | 10.5105 |
| 0.3306 | 44.0 | 6380 | 2.4938 | 0.4902 | 0.2657 | 0.4301 | 0.4306 | 0.8733 | 0.8692 | 8.4925 | 15 | 3 | 13.1622 | 8.4084 |
| 0.3388 | 45.0 | 6525 | 2.5098 | 0.4788 | 0.2616 | 0.4246 | 0.4245 | 0.8734 | 0.8662 | 8.2162 | 16 | 3 | 12.7538 | 8.1081 |
| 0.346 | 46.0 | 6670 | 2.4595 | 0.4987 | 0.2689 | 0.438 | 0.4389 | 0.875 | 0.8718 | 8.5676 | 16 | 3 | 13.2252 | 9.9099 |
| 0.3401 | 47.0 | 6815 | 2.5098 | 0.4934 | 0.2653 | 0.4353 | 0.4356 | 0.8744 | 0.87 | 8.3934 | 15 | 3 | 13.048 | 8.1081 |
| 0.3271 | 48.0 | 6960 | 2.5204 | 0.4951 | 0.2674 | 0.4373 | 0.4372 | 0.8749 | 0.8703 | 8.4625 | 16 | 3 | 13.024 | 9.009 |
| 0.3267 | 49.0 | 7105 | 2.5291 | 0.4887 | 0.2605 | 0.428 | 0.4284 | 0.8728 | 0.8702 | 8.7057 | 18 | 3 | 13.3363 | 11.1111 |
| 0.3382 | 50.0 | 7250 | 2.5422 | 0.4899 | 0.2666 | 0.4354 | 0.4356 | 0.8755 | 0.8707 | 8.4505 | 16 | 3 | 13.0931 | 8.1081 |
| 0.3255 | 51.0 | 7395 | 2.5254 | 0.4921 | 0.2634 | 0.4346 | 0.4352 | 0.8738 | 0.8691 | 8.4535 | 16 | 3 | 13.027 | 10.2102 |
| 0.32 | 52.0 | 7540 | 2.5460 | 0.4991 | 0.2727 | 0.4423 | 0.4421 | 0.8745 | 0.873 | 8.8919 | 16 | 3 | 13.5736 | 11.7117 |
| 0.3154 | 53.0 | 7685 | 2.5446 | 0.5027 | 0.2712 | 0.4463 | 0.4463 | 0.8768 | 0.8734 | 8.6426 | 16 | 3 | 13.2973 | 11.1111 |
| 0.3293 | 54.0 | 7830 | 2.5378 | 0.4928 | 0.2669 | 0.4352 | 0.4354 | 0.8736 | 0.869 | 8.5225 | 16 | 3 | 13.1291 | 10.2102 |
| 0.3231 | 55.0 | 7975 | 2.5905 | 0.4949 | 0.2678 | 0.4378 | 0.4375 | 0.8743 | 0.8714 | 8.6426 | 15 | 3 | 13.3003 | 9.009 |
| 0.3239 | 56.0 | 8120 | 2.5884 | 0.4969 | 0.2697 | 0.4399 | 0.4399 | 0.8737 | 0.8712 | 8.6697 | 16 | 3 | 13.3754 | 10.5105 |
| 0.3174 | 57.0 | 8265 | 2.5500 | 0.4958 | 0.267 | 0.4389 | 0.4386 | 0.8739 | 0.8715 | 8.7327 | 16 | 4 | 13.3844 | 10.5105 |
| 0.3209 | 58.0 | 8410 | 2.5804 | 0.4989 | 0.2706 | 0.442 | 0.4426 | 0.8751 | 0.8717 | 8.5766 | 15 | 3 | 13.1952 | 9.3093 |
| 0.3297 | 59.0 | 8555 | 2.5909 | 0.494 | 0.2622 | 0.4343 | 0.4338 | 0.8733 | 0.8698 | 8.5976 | 16 | 3 | 13.1652 | 11.7117 |
| 0.3226 | 60.0 | 8700 | 2.5857 | 0.4976 | 0.2639 | 0.4377 | 0.438 | 0.8753 | 0.8701 | 8.3904 | 17 | 3 | 12.973 | 7.8078 |
| 0.3241 | 61.0 | 8845 | 2.5824 | 0.5011 | 0.2698 | 0.4428 | 0.4436 | 0.8764 | 0.8725 | 8.5345 | 16 | 3 | 13.1502 | 10.5105 |
| 0.3201 | 62.0 | 8990 | 2.6156 | 0.4968 | 0.2673 | 0.4371 | 0.4372 | 0.8755 | 0.8702 | 8.3904 | 16 | 3 | 12.979 | 6.9069 |
| 0.3234 | 63.0 | 9135 | 2.6374 | 0.4945 | 0.2677 | 0.4387 | 0.4388 | 0.8744 | 0.8693 | 8.4444 | 17 | 3 | 12.958 | 8.1081 |
| 0.3246 | 64.0 | 9280 | 2.6338 | 0.4912 | 0.2672 | 0.4396 | 0.4402 | 0.8738 | 0.8698 | 8.4955 | 17 | 3 | 13.1021 | 8.1081 |
| 0.3188 | 65.0 | 9425 | 2.6206 | 0.4999 | 0.2739 | 0.4443 | 0.4444 | 0.8763 | 0.8726 | 8.6006 | 17 | 3 | 13.2042 | 10.5105 |
| 0.3186 | 66.0 | 9570 | 2.6499 | 0.5007 | 0.2771 | 0.4462 | 0.4463 | 0.8765 | 0.8729 | 8.5375 | 17 | 3 | 13.2162 | 9.3093 |
| 0.319 | 67.0 | 9715 | 2.6488 | 0.5023 | 0.2715 | 0.4452 | 0.4454 | 0.8761 | 0.8736 | 8.6817 | 17 | 3 | 13.3904 | 10.2102 |
| 0.3328 | 68.0 | 9860 | 2.6238 | 0.5002 | 0.2696 | 0.4408 | 0.4411 | 0.8755 | 0.8717 | 8.5075 | 17 | 3 | 13.1081 | 9.009 |
| 0.3068 | 69.0 | 10005 | 2.6525 | 0.4971 | 0.2684 | 0.4391 | 0.4397 | 0.8755 | 0.8712 | 8.5045 | 17 | 3 | 13.1411 | 11.4114 |
| 0.3192 | 70.0 | 10150 | 2.6494 | 0.4976 | 0.2722 | 0.4395 | 0.4405 | 0.8762 | 0.8714 | 8.3964 | 17 | 3 | 13.033 | 8.4084 |
| 0.3232 | 71.0 | 10295 | 2.6642 | 0.4976 | 0.2717 | 0.4412 | 0.4411 | 0.8756 | 0.8717 | 8.5075 | 17 | 3 | 13.1622 | 9.9099 |
| 0.3084 | 72.0 | 10440 | 2.6596 | 0.4931 | 0.2669 | 0.4352 | 0.4354 | 0.8734 | 0.8696 | 8.4865 | 17 | 3 | 13.1682 | 9.009 |
| 0.313 | 73.0 | 10585 | 2.6551 | 0.4942 | 0.2699 | 0.4363 | 0.4368 | 0.8742 | 0.8699 | 8.4715 | 16 | 3 | 13.1201 | 9.6096 |
| 0.3194 | 74.0 | 10730 | 2.6769 | 0.4962 | 0.2689 | 0.4388 | 0.4389 | 0.874 | 0.8715 | 8.5976 | 17 | 3 | 13.2763 | 10.5105 |
| 0.3143 | 75.0 | 10875 | 2.6860 | 0.493 | 0.2652 | 0.4335 | 0.4343 | 0.8734 | 0.8702 | 8.5706 | 16 | 3 | 13.2462 | 9.3093 |
| 0.3209 | 76.0 | 11020 | 2.6777 | 0.4893 | 0.2592 | 0.4325 | 0.4324 | 0.8726 | 0.869 | 8.5225 | 16 | 3 | 13.2012 | 9.3093 |
| 0.3078 | 77.0 | 11165 | 2.6797 | 0.4877 | 0.261 | 0.4321 | 0.4323 | 0.8724 | 0.8693 | 8.5796 | 16 | 3 | 13.2402 | 9.6096 |
| 0.3192 | 78.0 | 11310 | 2.6812 | 0.495 | 0.2677 | 0.4382 | 0.4383 | 0.8739 | 0.871 | 8.5706 | 18 | 3 | 13.2523 | 10.8108 |
| 0.3147 | 79.0 | 11455 | 2.6777 | 0.495 | 0.2693 | 0.4371 | 0.4374 | 0.874 | 0.8707 | 8.5015 | 16 | 3 | 13.1471 | 9.3093 |
| 0.3049 | 80.0 | 11600 | 2.6767 | 0.4917 | 0.2647 | 0.4344 | 0.4346 | 0.8723 | 0.8696 | 8.5616 | 16 | 3 | 13.2162 | 9.9099 |
| 0.3191 | 81.0 | 11745 | 2.6932 | 0.4929 | 0.2683 | 0.4392 | 0.4392 | 0.8737 | 0.8707 | 8.5676 | 16 | 3 | 13.2342 | 9.6096 |
| 0.3073 | 82.0 | 11890 | 2.7036 | 0.4959 | 0.2699 | 0.4389 | 0.4393 | 0.8738 | 0.8722 | 8.6547 | 17 | 3 | 13.3964 | 10.2102 |
| 0.3129 | 83.0 | 12035 | 2.6941 | 0.4918 | 0.2657 | 0.4341 | 0.434 | 0.8742 | 0.8703 | 8.4985 | 16 | 3 | 13.1411 | 9.3093 |
| 0.3308 | 84.0 | 12180 | 2.6968 | 0.4927 | 0.2659 | 0.4335 | 0.4337 | 0.8737 | 0.8698 | 8.4955 | 16 | 3 | 13.1652 | 9.3093 |
| 0.3221 | 85.0 | 12325 | 2.6966 | 0.4903 | 0.2606 | 0.4306 | 0.4306 | 0.8726 | 0.8698 | 8.5766 | 16 | 3 | 13.2823 | 9.6096 |
| 0.3085 | 86.0 | 12470 | 2.7123 | 0.4862 | 0.2608 | 0.4288 | 0.4286 | 0.8723 | 0.8688 | 8.4595 | 16 | 3 | 13.0901 | 8.7087 |
| 0.3281 | 87.0 | 12615 | 2.7101 | 0.4918 | 0.2638 | 0.4322 | 0.4328 | 0.8731 | 0.8695 | 8.4775 | 16 | 3 | 13.1291 | 9.009 |
| 0.3183 | 88.0 | 12760 | 2.7102 | 0.4902 | 0.2649 | 0.4294 | 0.4301 | 0.873 | 0.8688 | 8.4955 | 16 | 3 | 13.0901 | 9.6096 |
| 0.3063 | 89.0 | 12905 | 2.7198 | 0.4934 | 0.2676 | 0.4338 | 0.4344 | 0.8734 | 0.8692 | 8.4565 | 17 | 3 | 13.0751 | 9.009 |
| 0.3123 | 90.0 | 13050 | 2.7228 | 0.492 | 0.2676 | 0.4338 | 0.4343 | 0.8732 | 0.8692 | 8.4535 | 17 | 3 | 13.0931 | 9.3093 |
| 0.3163 | 91.0 | 13195 | 2.7264 | 0.4953 | 0.2702 | 0.4357 | 0.4358 | 0.874 | 0.8693 | 8.4625 | 17 | 3 | 13.033 | 9.3093 |
| 0.3085 | 92.0 | 13340 | 2.7236 | 0.4934 | 0.2702 | 0.4369 | 0.4369 | 0.8738 | 0.8695 | 8.4925 | 17 | 3 | 13.0721 | 9.9099 |
| 0.3257 | 93.0 | 13485 | 2.7202 | 0.4953 | 0.2706 | 0.4368 | 0.4368 | 0.8746 | 0.8699 | 8.4595 | 16 | 3 | 13.0571 | 10.2102 |
| 0.3092 | 94.0 | 13630 | 2.7261 | 0.4988 | 0.2748 | 0.4415 | 0.4419 | 0.8755 | 0.8708 | 8.4535 | 16 | 3 | 13.0751 | 9.9099 |
| 0.3187 | 95.0 | 13775 | 2.7248 | 0.4968 | 0.2727 | 0.4383 | 0.4389 | 0.8751 | 0.8709 | 8.5075 | 16 | 3 | 13.1321 | 9.9099 |
| 0.3155 | 96.0 | 13920 | 2.7335 | 0.4962 | 0.2686 | 0.4372 | 0.4373 | 0.8749 | 0.8712 | 8.5135 | 16 | 3 | 13.1772 | 10.2102 |
| 0.3271 | 97.0 | 14065 | 2.7384 | 0.4971 | 0.2721 | 0.4396 | 0.4397 | 0.8749 | 0.8711 | 8.5135 | 16 | 3 | 13.1832 | 10.5105 |
| 0.3096 | 98.0 | 14210 | 2.7400 | 0.496 | 0.2712 | 0.4386 | 0.4385 | 0.8748 | 0.8711 | 8.5225 | 16 | 3 | 13.1682 | 10.2102 |
| 0.3116 | 99.0 | 14355 | 2.7411 | 0.4961 | 0.2712 | 0.4388 | 0.4386 | 0.8749 | 0.8711 | 8.5135 | 16 | 3 | 13.1592 | 10.2102 |
| 0.3102 | 100.0 | 14500 | 2.7416 | 0.4961 | 0.2712 | 0.4388 | 0.4386 | 0.8749 | 0.8711 | 8.5135 | 16 | 3 | 13.1592 | 10.2102 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
michelecafagna26/vinvl-base-finetuned-hl-actions-image-captioning
|
michelecafagna26
| 2023-09-11T16:03:21Z | 9 | 0 |
pytorch
|
[
"pytorch",
"bert",
"image-to-text",
"en",
"dataset:michelecafagna26/hl",
"arxiv:2302.12189",
"arxiv:2107.12604",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2023-09-11T15:10:26Z |
---
license: apache-2.0
datasets:
- michelecafagna26/hl
language:
- en
metrics:
- sacrebleu
- rouge
- meteor
- spice
- cider
library_name: pytorch
tags:
- pytorch
- image-to-text
---
# Model Card: VinVL for Captioning 🖼️
[Microsoft's VinVL](https://github.com/microsoft/Oscar) base fine-tuned on the [HL dataset](https://arxiv.org/abs/2302.12189?context=cs.CL) for the **action description generation** downstream task.
# Model fine-tuning 🏋️
The model has been finetuned for 10 epochs on the action captions of the [HL dataset](https://arxiv.org/abs/2302.12189?context=cs.CL) (available on 🤗 HUB: [michelecafagna26/hl](https://huggingface.co/datasets/michelecafagna26/hl))
# Test set metrics 📈
Obtained with beam size 5 and max length 20
| Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | METEOR | ROUGE-L | CIDEr | SPICE |
|--------|--------|--------|--------|--------|---------|-------|-------|
| 0.74 | 0.62 | 0.50 | 0.40 | 0.31 | 0.65 | 1.73 | 0.21 |
# Usage and Installation:
More info about how to install and use this model can be found here: [michelecafagna26/VinVL](https://github.com/michelecafagna26/VinVL)
# Feature extraction ⛏️
This model relies on a separate visual backbone to extract the image features.
More info about:
- the model: [michelecafagna26/vinvl_vg_x152c4](https://huggingface.co/michelecafagna26/vinvl_vg_x152c4)
- the usage: [michelecafagna26/vinvl-visualbackbone](https://github.com/michelecafagna26/vinvl-visualbackbone)
# Quick start: 🚀
```python
import torch

from transformers.pytorch_transformers import BertConfig, BertTokenizer
from oscar.modeling.modeling_bert import BertForImageCaptioning
from oscar.wrappers import OscarTensorizer
ckpt = "path/to/the/checkpoint"
device = "cuda" if torch.cuda.is_available() else "cpu"
# original code
config = BertConfig.from_pretrained(ckpt)
tokenizer = BertTokenizer.from_pretrained(ckpt)
model = BertForImageCaptioning.from_pretrained(ckpt, config=config).to(device)
# This takes care of the preprocessing
tensorizer = OscarTensorizer(tokenizer=tokenizer, device=device)
# feat_obj: numpy array of visual features produced by the feature extractor;
# the batched tensor below has shape (1, num_boxes, feat_size), where feat_size is 2054 by default in VinVL
visual_features = torch.from_numpy(feat_obj).to(device).unsqueeze(0)
# labels are usually extracted by the features extractor
labels = [['boat', 'boat', 'boat', 'bottom', 'bush', 'coat', 'deck', 'deck', 'deck', 'dock', 'hair', 'jacket']]
inputs = tensorizer.encode(visual_features, labels=labels)
outputs = model(**inputs)
pred = tensorizer.decode(outputs)
# the output looks like this:
# pred = {0: [{'caption': 'He is sailing', 'conf': 0.7070220112800598}]}
```
# Citations 🧾
HL Dataset paper:
```BibTeX
@inproceedings{cafagna2023hl,
title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and
{R}ationales},
author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
address = {Prague, Czech Republic},
year={2023}
}
```
Please consider citing the original project and the VinVL paper
```BibTeX
@misc{han2021image,
title={Image Scene Graph Generation (SGG) Benchmark},
author={Xiaotian Han and Jianwei Yang and Houdong Hu and Lei Zhang and Jianfeng Gao and Pengchuan Zhang},
year={2021},
eprint={2107.12604},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@inproceedings{zhang2021vinvl,
title={Vinvl: Revisiting visual representations in vision-language models},
author={Zhang, Pengchuan and Li, Xiujun and Hu, Xiaowei and Yang, Jianwei and Zhang, Lei and Wang, Lijuan and Choi, Yejin and Gao, Jianfeng},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={5579--5588},
year={2021}
}
```
|
michelecafagna26/vinvl-base-finetuned-hl-rationales-image-captioning
|
michelecafagna26
| 2023-09-11T16:03:05Z | 8 | 0 |
pytorch
|
[
"pytorch",
"bert",
"image-to-text",
"en",
"dataset:michelecafagna26/hl",
"arxiv:2302.12189",
"arxiv:2107.12604",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2023-09-11T15:10:48Z |
---
license: apache-2.0
datasets:
- michelecafagna26/hl
language:
- en
metrics:
- sacrebleu
- rouge
- meteor
- spice
- cider
library_name: pytorch
tags:
- pytorch
- image-to-text
---
# Model Card: VinVL for Captioning 🖼️
[Microsoft's VinVL](https://github.com/microsoft/Oscar) base fine-tuned on the [HL dataset](https://arxiv.org/abs/2302.12189?context=cs.CL) for the **rationale description generation** downstream task.
# Model fine-tuning 🏋️
The model has been finetuned for 10 epochs on the rationale captions of the [HL dataset](https://arxiv.org/abs/2302.12189?context=cs.CL) (available on 🤗 HUB: [michelecafagna26/hl](https://huggingface.co/datasets/michelecafagna26/hl))
# Test set metrics 📈
Obtained with beam size 5 and max length 20
| Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | METEOR | ROUGE-L | CIDEr | SPICE |
|--------|--------|--------|--------|--------|---------|-------|-------|
| 0.55 | 0.38 | 0.23 | 0.15 | 0.17 | 0.44 | 0.44 | 0.10 |
# Usage and Installation:
More info about how to install and use this model can be found here: [michelecafagna26/VinVL](https://github.com/michelecafagna26/VinVL)
# Feature extraction ⛏️
This model relies on a separate visual backbone to extract the image features.
More info about:
- the model: [michelecafagna26/vinvl_vg_x152c4](https://huggingface.co/michelecafagna26/vinvl_vg_x152c4)
- the usage: [michelecafagna26/vinvl-visualbackbone](https://github.com/michelecafagna26/vinvl-visualbackbone)
# Quick start: 🚀
```python
import torch

from transformers.pytorch_transformers import BertConfig, BertTokenizer
from oscar.modeling.modeling_bert import BertForImageCaptioning
from oscar.wrappers import OscarTensorizer
ckpt = "path/to/the/checkpoint"
device = "cuda" if torch.cuda.is_available() else "cpu"
# original code
config = BertConfig.from_pretrained(ckpt)
tokenizer = BertTokenizer.from_pretrained(ckpt)
model = BertForImageCaptioning.from_pretrained(ckpt, config=config).to(device)
# This takes care of the preprocessing
tensorizer = OscarTensorizer(tokenizer=tokenizer, device=device)
# feat_obj: numpy array of visual features produced by the feature extractor;
# the batched tensor below has shape (1, num_boxes, feat_size), where feat_size is 2054 by default in VinVL
visual_features = torch.from_numpy(feat_obj).to(device).unsqueeze(0)
# labels are usually extracted by the features extractor
labels = [['boat', 'boat', 'boat', 'bottom', 'bush', 'coat', 'deck', 'deck', 'deck', 'dock', 'hair', 'jacket']]
inputs = tensorizer.encode(visual_features, labels=labels)
outputs = model(**inputs)
pred = tensorizer.decode(outputs)
# the output looks like this:
# pred = {0: [{'caption': 'he is on leisure', 'conf': 0.7070220112800598}]}
```
# Citations 🧾
HL Dataset paper:
```BibTeX
@inproceedings{cafagna2023hl,
title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and
{R}ationales},
author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
address = {Prague, Czech Republic},
year={2023}
}
```
Please consider citing the original project and the VinVL paper
```BibTeX
@misc{han2021image,
title={Image Scene Graph Generation (SGG) Benchmark},
author={Xiaotian Han and Jianwei Yang and Houdong Hu and Lei Zhang and Jianfeng Gao and Pengchuan Zhang},
year={2021},
eprint={2107.12604},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@inproceedings{zhang2021vinvl,
title={Vinvl: Revisiting visual representations in vision-language models},
author={Zhang, Pengchuan and Li, Xiujun and Hu, Xiaowei and Yang, Jianwei and Zhang, Lei and Wang, Lijuan and Choi, Yejin and Gao, Jianfeng},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={5579--5588},
year={2021}
}
```
|
Atulit23/flan-t5-base-indian-constitution
|
Atulit23
| 2023-09-11T15:55:07Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-11T15:54:25Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base-indian-constitution
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-indian-constitution
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0008
- Rouge1: 29.7093
- Rouge2: 28.4336
- Rougel: 29.6229
- Rougelsum: 29.5617
- Gen Len: 18.9651
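A minimal usage sketch with the standard Transformers seq2seq API is shown below; the question is only illustrative, since the exact input format used during fine-tuning is not documented here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Atulit23/flan-t5-base-indian-constitution")
model = AutoModelForSeq2SeqLM.from_pretrained("Atulit23/flan-t5-base-indian-constitution")

# Illustrative question; adapt the prompt to match the format used during fine-tuning
question = "What does Article 21 of the Indian Constitution guarantee?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```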
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 344 | 0.0009 | 29.7093 | 28.4336 | 29.6229 | 29.5617 | 18.9651 |
| 0.0021 | 2.0 | 688 | 0.0008 | 29.7093 | 28.4336 | 29.6229 | 29.5617 | 18.9651 |
| 0.0013 | 3.0 | 1032 | 0.0008 | 29.7093 | 28.4336 | 29.6229 | 29.5617 | 18.9651 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
emre/detr-resnet-50_finetuned_cppe5
|
emre
| 2023-09-11T15:52:00Z | 198 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-01-13T22:04:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cppe-5
base_model: facebook/detr-resnet-50
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
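A minimal usage sketch with the standard Transformers object-detection pipeline (the image path is a placeholder):

```python
from transformers import pipeline

detector = pipeline("object-detection", model="emre/detr-resnet-50_finetuned_cppe5")

# Replace with a local path or URL to an image
results = detector("path/to/image.jpg", threshold=0.5)
for r in results:
    print(r["label"], round(r["score"], 3), r["box"])
```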
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss |
|:----:|:-------------:|
| 300 | 2.162200 |
| 600 | 2.011000 |
| 1200 | 1.779500 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
geralt/MechDistilGPT2
|
geralt
| 2023-09-11T15:49:22Z | 137 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"Causal Language modeling",
"CLM",
"arxiv:2105.09680",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- Causal Language modeling
- text-generation
- CLM
model_index:
- name: MechDistilGPT2
results:
- task:
name: Causal Language modeling
type: Causal Language modeling
---
# MechDistilGPT2
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Environmental Impact](#environmental-impact)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
- **Model Description:**
This model is fine-tuned on text scraped from 100+ Mechanical/Automotive pdf books.
- **Developed by:** [Ashwin](https://huggingface.co/geralt)
- **Model Type:** Causal Language modeling
- **Language(s):** English
- **License:** [More Information Needed]
- **Parent Model:** See the [DistilGPT2 model](https://huggingface.co/distilgpt2) for more information about the distilled GPT-2 base model.
- **Resources for more information:**
- [Research Paper](https://arxiv.org/abs/2105.09680)
- [GitHub Repo](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb)
## Uses
#### Direct Use
The model can be used for tasks including topic classification, causal language modeling, and text generation.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Training
#### Training Data
This model is fine-tuned on text scraped from 100+ Mechanical/Automotive pdf books.
#### Training Procedure
###### Fine-Tuning
* Default Training Args
* Epochs = 3
* Training set = 200k sentences
* Validation set = 40k sentences
###### Framework versions
* Transformers 4.7.0.dev0
* Pytorch 1.8.1+cu111
* Datasets 1.6.2
* Tokenizers 0.10.2
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More information needed]
- **Hours used:** [More information needed]
- **Cloud Provider:** [More information needed]
- **Compute Region:** [More information needed]
- **Carbon Emitted:** [More information needed]
## How to Get Started With the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("geralt/MechDistilGPT2")
model = AutoModelForCausalLM.from_pretrained("geralt/MechDistilGPT2")
```
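Text can then be generated from a prompt, for example (illustrative prompt and sampling settings):

```python
inputs = tokenizer("The four-stroke engine cycle consists of", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```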
|
RyyyT/q-Taxi-v3
|
RyyyT
| 2023-09-11T15:39:52Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-11T15:38:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="RyyyT/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
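A minimal greedy-rollout sketch is shown below; it assumes a Gymnasium-style step API and that the pickled dictionary stores the Q-table under a `qtable` key, as in the Deep RL course notebooks:

```python
import numpy as np

state, _ = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```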
|
ProomptEngineer/pe-mugshot-concept
|
ProomptEngineer
| 2023-09-11T15:38:36Z | 38 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-11T15:38:31Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PEMugShot
widget:
- text: PEMugShot
---
# PE Mugshot [Concept]

<h2 id="heading-63">If you want to donate:</h2><h2 id="heading-64"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2><h2 id="heading-3">Simple lora. Who will go to jail?</h2><h2 id="heading-4">weights 0.8-1 as always</h2>
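A minimal Diffusers sketch for using this LoRA on top of the SDXL base model; the prompt is illustrative, and you may need to pass `weight_name=...` to `load_lora_weights` if the LoRA file in the repo is not auto-detected:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ProomptEngineer/pe-mugshot-concept")

# "PEMugShot" is the trigger word; a LoRA weight of 0.8-1 is recommended on the card
image = pipe(
    "PEMugShot, mugshot of a man holding a booking placard",
    cross_attention_kwargs={"scale": 0.9},
).images[0]
image.save("mugshot.png")
```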
## Image examples for the model:









|
ProomptEngineer/cute-animals-style
|
ProomptEngineer
| 2023-09-11T15:38:10Z | 48 | 4 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-11T15:38:06Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PE_CuteAnimals
widget:
- text: PE_CuteAnimals
---
# Cute Animals [Style]

<p>lora to make cute animal illustrations</p><p>Weights of 0.8-1</p><h2 id="heading-7">If you want to donate:</h2><h2 id="heading-8"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2><p></p>
## Image examples for the model:









|
Lethargus/Taxi-v3
|
Lethargus
| 2023-09-11T15:37:23Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-11T15:32:40Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Lethargus/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
fundrais123/bert-finetuned-ner
|
fundrais123
| 2023-09-11T15:36:35Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-11T15:26:01Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9325954072360813
- name: Recall
type: recall
value: 0.9500168293503871
- name: F1
type: f1
value: 0.9412255106294289
- name: Accuracy
type: accuracy
value: 0.986489668570083
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0591
- Precision: 0.9326
- Recall: 0.9500
- F1: 0.9412
- Accuracy: 0.9865
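A minimal usage sketch with the token-classification pipeline (the sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="fundrais123/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Wolfgang and I live in Berlin."))
```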
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0773 | 1.0 | 1756 | 0.0795 | 0.9096 | 0.9330 | 0.9212 | 0.9794 |
| 0.0414 | 2.0 | 3512 | 0.0585 | 0.9212 | 0.9465 | 0.9337 | 0.9855 |
| 0.0248 | 3.0 | 5268 | 0.0591 | 0.9326 | 0.9500 | 0.9412 | 0.9865 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ProomptEngineer/pe-habsburg-diffusion-style-big-chin
|
ProomptEngineer
| 2023-09-11T15:34:56Z | 17 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-11T15:34:53Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PEHabsburg
widget:
- text: PEHabsburg
---
# PE Habsburg Diffusion [Style] [Big Chin]

<p>Add some habsburg to your images!</p><p>weights 1-1.4</p><h2 id="heading-7">If you want to donate:</h2><h2 id="heading-8"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2>
## Image examples for the model:









|
ProomptEngineer/pe-colorportrait-cat-dog-style
|
ProomptEngineer
| 2023-09-11T15:32:35Z | 41 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-11T15:32:30Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: pecat
widget:
- text: pecat
---
# PE ColorPortrait Cat&Dog [Style]

<h2 id="heading-63">If you want to donate:</h2><h2 id="heading-64"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2><h2 id="heading-114">This Model generates colorful portraits of cats or dogs duh.</h2><h2 id="heading-115">If color effect fades add colorful to prompt.</h2><h2 id="heading-116">Weights of 0.8-1 </h2>
## Image examples for the model:









|
ProomptEngineer/pe-toonland-style-0
|
ProomptEngineer
| 2023-09-11T15:31:47Z | 39 | 4 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-11T15:31:44Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PEToonLand
widget:
- text: PEToonLand
---
# PE ToonLand [Style]

<h2 id="heading-63">If you want to donate:</h2><h2 id="heading-64"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2><h2 id="heading-3">Create beautiful Landscapes with one.</h2><h2 id="heading-4">Weights 0.8-1.</h2><p></p>
## Image examples for the model:









|
ProomptEngineer/pe-old-school-cartoon-style
|
ProomptEngineer
| 2023-09-11T15:31:26Z | 48 | 11 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-11T15:31:24Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PEOldCartoonStyle
widget:
- text: PEOldCartoonStyle
---
# PE Old School Cartoon [Style]

<h2 id="heading-63">If you want to donate:</h2><h2 id="heading-64"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2><h2 id="heading-3">Tried to make a lora that creates images in old school cartoon style like mickey mouse or cuphead.</h2><h2 id="heading-4">weight 0.8-1</h2>
## Image examples for the model:









|
Kanakmi/resume_sorter
|
Kanakmi
| 2023-09-11T15:28:35Z | 64 | 2 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-15T13:14:51Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
base_model: distilbert-base-uncased
model-index:
- name: resume_sorter
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# resume_sorter
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6000
- Train Accuracy: 0.9309
- Epoch: 6
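A minimal TensorFlow usage sketch (the input text is illustrative; the printed label depends on the `id2label` mapping saved with the model):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Kanakmi/resume_sorter")
model = TFAutoModelForSequenceClassification.from_pretrained("Kanakmi/resume_sorter")

text = "Experienced Python developer with a background in data engineering."
inputs = tokenizer(text, return_tensors="tf", truncation=True)
logits = model(**inputs).logits
pred_id = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label.get(pred_id, pred_id))
```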
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 225, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 3.0338 | 0.3025 | 0 |
| 2.5856 | 0.6257 | 1 |
| 2.1253 | 0.8646 | 2 |
| 1.7760 | 0.9144 | 3 |
| 1.6245 | 0.9309 | 4 |
| 1.5916 | 0.9309 | 5 |
| 1.6000 | 0.9309 | 6 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ProomptEngineer/pe-pencil-drawing-style
|
ProomptEngineer
| 2023-09-11T15:28:31Z | 130 | 7 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-11T15:28:27Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PEPencilDrawing
widget:
- text: PEPencilDrawing
---
# PE Pencil Drawing [Style]

<p>Pencil Style...</p><p>Weights 0.8-1</p><h2 id="heading-7">If you want to donate:</h2><h2 id="heading-8"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2>
## Image examples for the model:








|
ProomptEngineer/pe-carpet-rug-style
|
ProomptEngineer
| 2023-09-11T15:27:53Z | 23 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-11T15:27:50Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PE_CarpetRugStyle
widget:
- text: PE_CarpetRugStyle
---
# PE Carpet / Rug Style

<p>Trained to add carpet or rug texture to images. Mostly used cartoon characters as training images, so it might not work so well for realistic subjects.</p><p>Weights 0.8-1</p><h2 id="heading-7">If you want to donate:</h2><h2 id="heading-8"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2>
## Image examples for the model:









|
esperesa/xlm-roberta-base-finetuned-panx-all
|
esperesa
| 2023-09-11T15:23:31Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-11T15:03:15Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1828
- F1: 0.8519
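A minimal usage sketch with the token-classification pipeline (the sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="esperesa/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean ist ein Informatiker bei Google in Kalifornien."))
```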
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2947 | 1.0 | 739 | 0.1879 | 0.8175 |
| 0.152 | 2.0 | 1478 | 0.1853 | 0.8385 |
| 0.0974 | 3.0 | 2217 | 0.1828 | 0.8519 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.14.0
|
Prot10/swinv2-base-patch4-window8-256-for-pre_evaluation
|
Prot10
| 2023-09-11T15:22:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swinv2",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swinv2-base-patch4-window8-256",
"base_model:finetune:microsoft/swinv2-base-patch4-window8-256",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-30T11:21:06Z |
---
license: apache-2.0
base_model: microsoft/swinv2-base-patch4-window8-256
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swinv2-base-patch4-window8-256-for-pre_evaluation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-base-patch4-window8-256-for-pre_evaluation
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window8-256](https://huggingface.co/microsoft/swinv2-base-patch4-window8-256) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4873
- Accuracy: 0.4106
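A minimal usage sketch with the image-classification pipeline (the image path is a placeholder; the predicted labels depend on the label mapping saved with the model):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Prot10/swinv2-base-patch4-window8-256-for-pre_evaluation",
)
# Replace with a local path or URL to an image
print(classifier("path/to/image.jpg"))
```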
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6064 | 1.0 | 16 | 1.5189 | 0.3073 |
| 1.5058 | 2.0 | 32 | 1.5056 | 0.3073 |
| 1.5176 | 3.0 | 48 | 1.5176 | 0.2961 |
| 1.4883 | 4.0 | 64 | 1.5130 | 0.3073 |
| 1.4446 | 5.0 | 80 | 1.4540 | 0.3296 |
| 1.4568 | 6.0 | 96 | 1.5154 | 0.3156 |
| 1.4106 | 7.0 | 112 | 1.4272 | 0.3883 |
| 1.3804 | 8.0 | 128 | 1.4185 | 0.3743 |
| 1.3725 | 9.0 | 144 | 1.3943 | 0.3911 |
| 1.3441 | 10.0 | 160 | 1.4510 | 0.4022 |
| 1.3335 | 11.0 | 176 | 1.4337 | 0.3827 |
| 1.3055 | 12.0 | 192 | 1.4633 | 0.3855 |
| 1.3303 | 13.0 | 208 | 1.4674 | 0.3883 |
| 1.2882 | 14.0 | 224 | 1.4388 | 0.3911 |
| 1.2362 | 15.0 | 240 | 1.4676 | 0.3855 |
| 1.2572 | 16.0 | 256 | 1.4805 | 0.3799 |
| 1.2164 | 17.0 | 272 | 1.4717 | 0.3939 |
| 1.221 | 18.0 | 288 | 1.4354 | 0.4078 |
| 1.1713 | 19.0 | 304 | 1.4836 | 0.4078 |
| 1.18 | 20.0 | 320 | 1.4873 | 0.4106 |
| 1.1349 | 21.0 | 336 | 1.4853 | 0.3855 |
| 1.1138 | 22.0 | 352 | 1.4927 | 0.3966 |
| 1.1402 | 23.0 | 368 | 1.4672 | 0.3994 |
| 1.1183 | 24.0 | 384 | 1.5033 | 0.4022 |
| 1.0834 | 25.0 | 400 | 1.5448 | 0.3855 |
| 1.0515 | 26.0 | 416 | 1.5131 | 0.3939 |
| 1.0745 | 27.0 | 432 | 1.5314 | 0.3827 |
| 1.0332 | 28.0 | 448 | 1.5474 | 0.3939 |
| 1.0679 | 29.0 | 464 | 1.5327 | 0.3855 |
| 1.0295 | 30.0 | 480 | 1.5402 | 0.3855 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ProomptEngineer/pe-neon-sign-style
|
ProomptEngineer
| 2023-09-11T15:21:13Z | 587 | 7 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-11T15:21:08Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PENeonSign
widget:
- text: PENeonSign
---
# PE Neon Sign [Style]

<p>your favorite character as a neon sign...</p><p>weights 0.8-1</p><h2 id="heading-63">If you want to donate:</h2><h2 id="heading-64"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2>
## Image examples for the model:









|
Lethargus/q-FrozenLake-v1-4x4-noSlippery
|
Lethargus
| 2023-09-11T15:20:55Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-11T15:17:03Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Lethargus/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
nikzarifie/wafer_technology
|
nikzarifie
| 2023-09-11T15:19:20Z | 192 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-11T15:17:57Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: wafer_technology
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# wafer_technology
## Example Images
#### A

#### B

#### C

#### D

#### E

#### F

|
TheAutonomous/HumorGPT
|
TheAutonomous
| 2023-09-11T15:12:50Z | 0 | 1 | null |
[
"license:gpl",
"region:us"
] | null | 2023-09-11T15:07:20Z |
---
license: gpl
---
I trained a distilgpt2 model (on CPU, since that was the easiest option for me) on a bunch of 5-minute improv sketches in an attempt to generalize the data.
Example Results (Temperature 0.8):
Scene: Sunshine
Person1: Ahh beautiful sunshine
Person2: I love the way it bounces off your - beautiful face
Person1: You were going to say something else there
Person2: I resisted the bald joke because it's
Scene: Body Cancel: It's okay, pal.
Person1: Hey it's okay
Person2:I just um... I just need this time.
Person1: You want us to leave?
Person2: Yeah
Person: It's okay, pal.
Person1: Hey it's okay
Person2:I just um... I just need this time.
Person1: You want us to leave?
Person2: Yeah
how are you feeling bad for doing you's's?
Person2: I know you might not be as bad as I thought.
Person1: I just um... I just need this time.
Person2: You want us to leave for doing you's's?
Person2: I know you might not be as bad as I thought.
Person1: I just um... I just need this time.
Person2: You want us to leave?
Memes: I ain't saying we're gonna not have a fight we lost, is it?
Person1: We ain't saying we're gonna have a fight we lost, is it?
**End**
Scene: Digging
I ain't saying we're gonna not have a fight we lost, is it?
Person1: We ain't saying we're gonna have a fight we lost, is it?
**End**
Scene: Digging
Person
|
saattrupdan/employment-contract-ner-da
|
saattrupdan
| 2023-09-11T15:12:48Z | 17 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"da",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- da
license: mit
widget:
- Medarbejderen starter arbejdet den 1. januar 2020 og afslutter arbejdet den 21.
januar 2020. Den ugentlige arbejdstid er 37 timer, og medarbejderen bliver aflønnet
med 23.000,00 kr. om måneden. Arbejdsstedet er Supervej 21, 2000 Frederiksberg.
inference:
parameters:
aggregation_strategy: first
base_model: xlm-roberta-base
model-index:
- name: contract-ner-model-da
results: []
---
# contract-ner-model-da
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on a custom contracts dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0026
- Micro F1: 0.9297
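A minimal usage sketch mirroring the `aggregation_strategy: first` inference setting in the card metadata, using the widget example from this card:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="saattrupdan/employment-contract-ner-da",
    aggregation_strategy="first",
)
contract = (
    "Medarbejderen starter arbejdet den 1. januar 2020 og afslutter arbejdet den 21. "
    "januar 2020. Den ugentlige arbejdstid er 37 timer, og medarbejderen bliver aflønnet "
    "med 23.000,00 kr. om måneden. Arbejdsstedet er Supervej 21, 2000 Frederiksberg."
)
print(ner(contract))
```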
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 919
- num_epochs: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8971 | 0.24 | 200 | 0.0205 | 0.0 |
| 0.0173 | 0.48 | 400 | 0.0100 | 0.2921 |
| 0.0092 | 0.73 | 600 | 0.0065 | 0.7147 |
| 0.0063 | 0.97 | 800 | 0.0046 | 0.8332 |
| 0.0047 | 1.21 | 1000 | 0.0047 | 0.8459 |
| 0.0042 | 1.45 | 1200 | 0.0039 | 0.8694 |
| 0.0037 | 1.69 | 1400 | 0.0035 | 0.8888 |
| 0.0032 | 1.93 | 1600 | 0.0035 | 0.8840 |
| 0.0025 | 2.18 | 1800 | 0.0029 | 0.8943 |
| 0.0023 | 2.42 | 2000 | 0.0024 | 0.9104 |
| 0.0023 | 2.66 | 2200 | 0.0032 | 0.8808 |
| 0.0021 | 2.9 | 2400 | 0.0022 | 0.9338 |
| 0.0018 | 3.14 | 2600 | 0.0020 | 0.9315 |
| 0.0015 | 3.39 | 2800 | 0.0026 | 0.9297 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3
|
esperesa/xlm-roberta-base-finetuned-panx-it
|
esperesa
| 2023-09-11T15:08:31Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-11T15:03:02Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8218390804597702
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2503
- F1: 0.8218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8253 | 1.0 | 70 | 0.3503 | 0.7160 |
| 0.2781 | 2.0 | 140 | 0.2643 | 0.8148 |
| 0.1871 | 3.0 | 210 | 0.2503 | 0.8218 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.14.0
|
intellectusartificialis/controlnetv11
|
intellectusartificialis
| 2023-09-11T15:08:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-11T12:52:54Z |
---
license: creativeml-openrail-m
---
|
moonlightnexus/realize
|
moonlightnexus
| 2023-09-11T15:07:50Z | 37 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-11T09:26:08Z |
---
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
|
esperesa/xlm-roberta-base-finetuned-panx-fr
|
esperesa
| 2023-09-11T15:06:05Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-11T15:02:52Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8115649689023365
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3184
- F1: 0.8116
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7671 | 1.0 | 96 | 0.3643 | 0.7537 |
| 0.325 | 2.0 | 192 | 0.3360 | 0.7977 |
| 0.2209 | 3.0 | 288 | 0.3184 | 0.8116 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.14.0
|
ldos/text_shortening_model_v30
|
ldos
| 2023-09-11T15:05:21Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-11T14:06:20Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_shortening_model_v30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v30
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6784
- Rouge1: 0.4871
- Rouge2: 0.2579
- Rougel: 0.428
- Rougelsum: 0.4272
- Bert precision: 0.8743
- Bert recall: 0.8706
- Average word count: 8.4775
- Max word count: 17
- Min word count: 3
- Average token count: 12.9249
- % shortened texts with length > 12: 9.3093
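A minimal usage sketch with the standard seq2seq API; the input sentence is illustrative, and the exact prompt format used during fine-tuning is not documented here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ldos/text_shortening_model_v30")
model = AutoModelForSeq2SeqLM.from_pretrained("ldos/text_shortening_model_v30")

text = "The company announced today that it will be releasing its new product early next year."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=20, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```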
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 1.2044 | 1.0 | 145 | 1.6064 | 0.5052 | 0.2865 | 0.4472 | 0.448 | 0.8751 | 0.8756 | 8.8979 | 17 | 3 | 13.4024 | 12.6126 |
| 1.0041 | 2.0 | 290 | 1.4900 | 0.5154 | 0.2921 | 0.4554 | 0.4542 | 0.8735 | 0.878 | 9.3724 | 17 | 3 | 13.8529 | 17.7177 |
| 0.8935 | 3.0 | 435 | 1.4617 | 0.5181 | 0.2968 | 0.4607 | 0.4622 | 0.8751 | 0.8818 | 9.4024 | 16 | 4 | 14.1171 | 17.1171 |
| 0.8028 | 4.0 | 580 | 1.4744 | 0.5103 | 0.2966 | 0.4497 | 0.4496 | 0.8797 | 0.8725 | 8.1982 | 17 | 4 | 12.5706 | 8.1081 |
| 0.7395 | 5.0 | 725 | 1.4797 | 0.5121 | 0.3016 | 0.4548 | 0.4554 | 0.8796 | 0.8761 | 8.4985 | 16 | 3 | 12.985 | 10.8108 |
| 0.6986 | 6.0 | 870 | 1.5154 | 0.5218 | 0.2987 | 0.4554 | 0.4542 | 0.8808 | 0.879 | 8.7297 | 16 | 4 | 13.0691 | 14.1141 |
| 0.6527 | 7.0 | 1015 | 1.5347 | 0.5083 | 0.2876 | 0.4494 | 0.4485 | 0.8797 | 0.8763 | 8.5526 | 16 | 4 | 13.012 | 11.4114 |
| 0.588 | 8.0 | 1160 | 1.5578 | 0.4984 | 0.2752 | 0.4403 | 0.4399 | 0.8746 | 0.8728 | 8.6336 | 17 | 4 | 13.006 | 10.8108 |
| 0.5705 | 9.0 | 1305 | 1.6569 | 0.5152 | 0.2902 | 0.4544 | 0.454 | 0.8803 | 0.8764 | 8.5135 | 16 | 4 | 13.1592 | 9.9099 |
| 0.5601 | 10.0 | 1450 | 1.6651 | 0.5246 | 0.2837 | 0.4572 | 0.4579 | 0.8777 | 0.8807 | 8.979 | 16 | 4 | 13.6607 | 12.012 |
| 0.523 | 11.0 | 1595 | 1.7085 | 0.5149 | 0.2854 | 0.4508 | 0.4507 | 0.879 | 0.8789 | 8.7718 | 17 | 4 | 13.2613 | 10.8108 |
| 0.5032 | 12.0 | 1740 | 1.7886 | 0.5107 | 0.2817 | 0.4457 | 0.4457 | 0.8778 | 0.8772 | 8.8378 | 17 | 4 | 13.4204 | 11.7117 |
| 0.4872 | 13.0 | 1885 | 1.8073 | 0.5097 | 0.2808 | 0.4439 | 0.4441 | 0.8786 | 0.8758 | 8.6306 | 16 | 4 | 13.1562 | 9.6096 |
| 0.4703 | 14.0 | 2030 | 1.8436 | 0.5059 | 0.2754 | 0.4456 | 0.4457 | 0.8769 | 0.8756 | 8.6817 | 17 | 4 | 13.1471 | 9.9099 |
| 0.4598 | 15.0 | 2175 | 1.9150 | 0.5148 | 0.2794 | 0.4532 | 0.4532 | 0.8798 | 0.8775 | 8.6907 | 18 | 4 | 13.1021 | 11.4114 |
| 0.4385 | 16.0 | 2320 | 1.9319 | 0.4966 | 0.2666 | 0.4402 | 0.4406 | 0.8771 | 0.8724 | 8.2703 | 16 | 4 | 12.7237 | 7.8078 |
| 0.4306 | 17.0 | 2465 | 1.9821 | 0.5041 | 0.2763 | 0.4449 | 0.4448 | 0.8788 | 0.8752 | 8.5105 | 16 | 4 | 13.0541 | 9.3093 |
| 0.4154 | 18.0 | 2610 | 2.0345 | 0.5066 | 0.2746 | 0.4467 | 0.4461 | 0.8796 | 0.8732 | 8.1922 | 16 | 3 | 12.6186 | 7.8078 |
| 0.3995 | 19.0 | 2755 | 2.0671 | 0.4954 | 0.2707 | 0.4411 | 0.4416 | 0.8773 | 0.8721 | 8.4505 | 17 | 4 | 12.8468 | 8.7087 |
| 0.4053 | 20.0 | 2900 | 2.1265 | 0.4975 | 0.2704 | 0.4365 | 0.4364 | 0.8767 | 0.873 | 8.5075 | 17 | 3 | 13.0571 | 9.009 |
| 0.3812 | 21.0 | 3045 | 2.2077 | 0.5011 | 0.2733 | 0.4406 | 0.4411 | 0.8764 | 0.8756 | 8.7958 | 17 | 3 | 13.4084 | 12.012 |
| 0.3856 | 22.0 | 3190 | 2.2043 | 0.4956 | 0.2603 | 0.4358 | 0.4361 | 0.8775 | 0.8729 | 8.2913 | 17 | 3 | 12.8078 | 8.7087 |
| 0.3805 | 23.0 | 3335 | 2.2201 | 0.5015 | 0.2698 | 0.4421 | 0.4427 | 0.8789 | 0.8728 | 8.2402 | 17 | 3 | 12.5856 | 8.1081 |
| 0.3741 | 24.0 | 3480 | 2.2269 | 0.5029 | 0.2652 | 0.4412 | 0.4413 | 0.8767 | 0.8743 | 8.5856 | 16 | 4 | 13.039 | 10.2102 |
| 0.3697 | 25.0 | 3625 | 2.2596 | 0.4956 | 0.2674 | 0.436 | 0.4359 | 0.8765 | 0.8728 | 8.4895 | 17 | 4 | 12.9129 | 9.9099 |
| 0.3663 | 26.0 | 3770 | 2.2506 | 0.4891 | 0.2572 | 0.432 | 0.432 | 0.8749 | 0.8716 | 8.4865 | 17 | 4 | 12.8498 | 6.9069 |
| 0.3409 | 27.0 | 3915 | 2.2893 | 0.4958 | 0.2635 | 0.4328 | 0.4327 | 0.8772 | 0.8727 | 8.3994 | 17 | 3 | 12.8228 | 9.6096 |
| 0.3524 | 28.0 | 4060 | 2.3127 | 0.4907 | 0.2597 | 0.4322 | 0.4329 | 0.8751 | 0.8712 | 8.4084 | 16 | 4 | 12.7718 | 8.1081 |
| 0.3379 | 29.0 | 4205 | 2.3167 | 0.4958 | 0.2674 | 0.4374 | 0.4368 | 0.8772 | 0.8737 | 8.4234 | 16 | 4 | 12.8138 | 7.2072 |
| 0.3472 | 30.0 | 4350 | 2.3157 | 0.4987 | 0.2713 | 0.4415 | 0.4403 | 0.8788 | 0.8736 | 8.3634 | 17 | 3 | 12.6517 | 7.2072 |
| 0.3353 | 31.0 | 4495 | 2.3506 | 0.4991 | 0.2631 | 0.4375 | 0.436 | 0.8764 | 0.8744 | 8.6396 | 17 | 4 | 13.1502 | 9.6096 |
| 0.3466 | 32.0 | 4640 | 2.3594 | 0.4897 | 0.2593 | 0.4307 | 0.4301 | 0.8777 | 0.8711 | 8.1712 | 16 | 4 | 12.6126 | 5.4054 |
| 0.3406 | 33.0 | 4785 | 2.3632 | 0.495 | 0.2746 | 0.4401 | 0.4397 | 0.8772 | 0.8732 | 8.5556 | 16 | 4 | 13.027 | 8.4084 |
| 0.3382 | 34.0 | 4930 | 2.3505 | 0.4856 | 0.261 | 0.4306 | 0.4295 | 0.8758 | 0.8693 | 8.2733 | 17 | 3 | 12.6366 | 7.5075 |
| 0.3392 | 35.0 | 5075 | 2.3665 | 0.4972 | 0.2719 | 0.4376 | 0.4372 | 0.8764 | 0.8741 | 8.6847 | 17 | 4 | 13.1532 | 9.3093 |
| 0.3465 | 36.0 | 5220 | 2.3837 | 0.4981 | 0.2722 | 0.441 | 0.4411 | 0.876 | 0.8738 | 8.6607 | 17 | 4 | 13.1982 | 12.3123 |
| 0.3377 | 37.0 | 5365 | 2.3984 | 0.4832 | 0.2623 | 0.4294 | 0.4285 | 0.8737 | 0.8697 | 8.5225 | 17 | 4 | 12.9399 | 10.5105 |
| 0.3523 | 38.0 | 5510 | 2.3843 | 0.495 | 0.2671 | 0.438 | 0.4368 | 0.8754 | 0.873 | 8.5886 | 17 | 3 | 13.1111 | 7.2072 |
| 0.3261 | 39.0 | 5655 | 2.4337 | 0.4948 | 0.2666 | 0.4378 | 0.4369 | 0.8771 | 0.8726 | 8.4655 | 17 | 4 | 12.8919 | 9.009 |
| 0.3262 | 40.0 | 5800 | 2.4149 | 0.4971 | 0.2691 | 0.438 | 0.4375 | 0.8772 | 0.8717 | 8.4505 | 16 | 4 | 12.9249 | 8.1081 |
| 0.3307 | 41.0 | 5945 | 2.4352 | 0.4834 | 0.2585 | 0.4261 | 0.4256 | 0.8746 | 0.8697 | 8.4024 | 17 | 3 | 12.8859 | 9.6096 |
| 0.3226 | 42.0 | 6090 | 2.4241 | 0.488 | 0.2584 | 0.4318 | 0.4315 | 0.8756 | 0.8706 | 8.4444 | 17 | 3 | 12.8288 | 8.7087 |
| 0.34 | 43.0 | 6235 | 2.4485 | 0.4891 | 0.2589 | 0.4326 | 0.432 | 0.8758 | 0.8705 | 8.3243 | 17 | 4 | 12.7898 | 6.6066 |
| 0.3425 | 44.0 | 6380 | 2.4457 | 0.4865 | 0.26 | 0.4293 | 0.4287 | 0.8733 | 0.8713 | 8.6336 | 16 | 3 | 13.1922 | 9.6096 |
| 0.3201 | 45.0 | 6525 | 2.4535 | 0.4811 | 0.2473 | 0.4243 | 0.4237 | 0.8751 | 0.8697 | 8.3093 | 17 | 3 | 12.7748 | 8.4084 |
| 0.3094 | 46.0 | 6670 | 2.4918 | 0.4916 | 0.2614 | 0.4351 | 0.4342 | 0.8758 | 0.8726 | 8.5706 | 17 | 3 | 13.039 | 10.2102 |
| 0.3262 | 47.0 | 6815 | 2.4839 | 0.4822 | 0.255 | 0.425 | 0.4237 | 0.8719 | 0.869 | 8.5375 | 17 | 4 | 12.976 | 9.009 |
| 0.3186 | 48.0 | 6960 | 2.4966 | 0.486 | 0.2492 | 0.4276 | 0.4264 | 0.8738 | 0.8707 | 8.4745 | 17 | 3 | 12.955 | 6.6066 |
| 0.3231 | 49.0 | 7105 | 2.4978 | 0.4889 | 0.2661 | 0.4343 | 0.434 | 0.8767 | 0.871 | 8.4505 | 17 | 3 | 12.8468 | 9.009 |
| 0.3294 | 50.0 | 7250 | 2.4731 | 0.4916 | 0.2683 | 0.4374 | 0.4373 | 0.877 | 0.8726 | 8.4955 | 17 | 4 | 12.9369 | 9.3093 |
| 0.3172 | 51.0 | 7395 | 2.4922 | 0.4861 | 0.2573 | 0.4314 | 0.431 | 0.8759 | 0.87 | 8.3003 | 17 | 4 | 12.6907 | 7.8078 |
| 0.3247 | 52.0 | 7540 | 2.5044 | 0.4802 | 0.2495 | 0.4281 | 0.4282 | 0.8737 | 0.8698 | 8.4715 | 17 | 4 | 12.9009 | 8.1081 |
| 0.3132 | 53.0 | 7685 | 2.5168 | 0.4832 | 0.2558 | 0.4273 | 0.4268 | 0.8736 | 0.8703 | 8.5706 | 17 | 3 | 12.967 | 9.3093 |
| 0.3285 | 54.0 | 7830 | 2.5296 | 0.4882 | 0.26 | 0.4323 | 0.4319 | 0.8754 | 0.8724 | 8.5495 | 17 | 3 | 13.0541 | 8.7087 |
| 0.3111 | 55.0 | 7975 | 2.5529 | 0.4829 | 0.2561 | 0.4268 | 0.4262 | 0.874 | 0.8694 | 8.4474 | 17 | 3 | 12.9339 | 7.2072 |
| 0.3194 | 56.0 | 8120 | 2.5903 | 0.49 | 0.2614 | 0.4337 | 0.4329 | 0.8747 | 0.8719 | 8.5946 | 17 | 3 | 13.0931 | 8.1081 |
| 0.3144 | 57.0 | 8265 | 2.5787 | 0.4859 | 0.2593 | 0.4315 | 0.4303 | 0.8739 | 0.8698 | 8.5195 | 17 | 4 | 12.8679 | 8.4084 |
| 0.2972 | 58.0 | 8410 | 2.5759 | 0.4848 | 0.2565 | 0.4291 | 0.4279 | 0.8738 | 0.8697 | 8.5165 | 17 | 3 | 12.9219 | 8.1081 |
| 0.3209 | 59.0 | 8555 | 2.5609 | 0.4792 | 0.246 | 0.4212 | 0.4201 | 0.8723 | 0.8678 | 8.4114 | 17 | 3 | 12.8799 | 6.9069 |
| 0.3148 | 60.0 | 8700 | 2.5758 | 0.481 | 0.2454 | 0.4243 | 0.4231 | 0.874 | 0.8688 | 8.3664 | 16 | 3 | 12.7628 | 7.5075 |
| 0.3026 | 61.0 | 8845 | 2.5819 | 0.4804 | 0.2555 | 0.4231 | 0.4231 | 0.8738 | 0.8689 | 8.4204 | 17 | 3 | 12.7628 | 8.4084 |
| 0.3074 | 62.0 | 8990 | 2.5882 | 0.4893 | 0.2627 | 0.431 | 0.4303 | 0.8753 | 0.8715 | 8.4895 | 17 | 3 | 12.8889 | 8.7087 |
| 0.3013 | 63.0 | 9135 | 2.5865 | 0.4835 | 0.2599 | 0.426 | 0.4251 | 0.8743 | 0.8707 | 8.4865 | 17 | 4 | 12.964 | 8.7087 |
| 0.3274 | 64.0 | 9280 | 2.5957 | 0.4928 | 0.2649 | 0.436 | 0.4353 | 0.8738 | 0.8734 | 8.8018 | 17 | 3 | 13.2823 | 11.4114 |
| 0.2928 | 65.0 | 9425 | 2.5846 | 0.4888 | 0.2653 | 0.4365 | 0.4356 | 0.8763 | 0.8713 | 8.2973 | 17 | 3 | 12.6637 | 8.1081 |
| 0.3261 | 66.0 | 9570 | 2.5704 | 0.4901 | 0.267 | 0.4386 | 0.4374 | 0.8759 | 0.871 | 8.3303 | 17 | 4 | 12.7838 | 6.6066 |
| 0.3153 | 67.0 | 9715 | 2.6023 | 0.4897 | 0.2611 | 0.4311 | 0.4301 | 0.8749 | 0.872 | 8.6426 | 17 | 3 | 13.0691 | 10.8108 |
| 0.3185 | 68.0 | 9860 | 2.5831 | 0.4862 | 0.2579 | 0.4257 | 0.4247 | 0.8735 | 0.8718 | 8.6486 | 17 | 4 | 13.1441 | 12.012 |
| 0.3054 | 69.0 | 10005 | 2.5949 | 0.4831 | 0.2575 | 0.4247 | 0.4239 | 0.8728 | 0.87 | 8.5405 | 17 | 4 | 13.036 | 9.9099 |
| 0.3006 | 70.0 | 10150 | 2.5822 | 0.4853 | 0.252 | 0.4255 | 0.4243 | 0.8735 | 0.87 | 8.5495 | 17 | 3 | 13.0 | 10.5105 |
| 0.3092 | 71.0 | 10295 | 2.5743 | 0.4903 | 0.2595 | 0.432 | 0.4315 | 0.8759 | 0.8719 | 8.4474 | 17 | 3 | 12.8559 | 8.7087 |
| 0.2928 | 72.0 | 10440 | 2.5905 | 0.4918 | 0.2665 | 0.4356 | 0.4347 | 0.876 | 0.8724 | 8.4474 | 17 | 4 | 12.8679 | 8.4084 |
| 0.3021 | 73.0 | 10585 | 2.6171 | 0.4957 | 0.266 | 0.4368 | 0.4354 | 0.8764 | 0.873 | 8.5676 | 17 | 3 | 12.964 | 11.1111 |
| 0.3047 | 74.0 | 10730 | 2.6233 | 0.492 | 0.2655 | 0.4341 | 0.4328 | 0.8753 | 0.8715 | 8.5736 | 17 | 3 | 12.952 | 10.5105 |
| 0.3043 | 75.0 | 10875 | 2.6405 | 0.4887 | 0.2623 | 0.4318 | 0.4309 | 0.8756 | 0.8704 | 8.4895 | 17 | 3 | 12.8679 | 9.9099 |
| 0.305 | 76.0 | 11020 | 2.6171 | 0.4942 | 0.2687 | 0.4381 | 0.4372 | 0.8766 | 0.8724 | 8.5586 | 17 | 3 | 12.9369 | 10.8108 |
| 0.3127 | 77.0 | 11165 | 2.6289 | 0.4959 | 0.2646 | 0.4366 | 0.4357 | 0.8767 | 0.8731 | 8.5766 | 17 | 3 | 13.006 | 12.012 |
| 0.2945 | 78.0 | 11310 | 2.6453 | 0.4881 | 0.2589 | 0.4272 | 0.4261 | 0.8753 | 0.8711 | 8.5375 | 17 | 3 | 12.8739 | 9.3093 |
| 0.2844 | 79.0 | 11455 | 2.6543 | 0.4895 | 0.2565 | 0.4294 | 0.4288 | 0.8753 | 0.8718 | 8.5616 | 17 | 3 | 12.997 | 11.7117 |
| 0.3188 | 80.0 | 11600 | 2.6556 | 0.4919 | 0.2677 | 0.4328 | 0.4318 | 0.8756 | 0.8712 | 8.5345 | 17 | 3 | 12.973 | 9.9099 |
| 0.2857 | 81.0 | 11745 | 2.6696 | 0.4914 | 0.2666 | 0.434 | 0.4332 | 0.8761 | 0.8717 | 8.4595 | 17 | 3 | 12.8829 | 10.5105 |
| 0.3091 | 82.0 | 11890 | 2.6577 | 0.4986 | 0.2718 | 0.4397 | 0.4388 | 0.8766 | 0.8741 | 8.6276 | 17 | 3 | 13.1441 | 10.8108 |
| 0.3115 | 83.0 | 12035 | 2.6720 | 0.4944 | 0.266 | 0.4364 | 0.4351 | 0.8766 | 0.8725 | 8.4925 | 17 | 3 | 12.9309 | 9.3093 |
| 0.2947 | 84.0 | 12180 | 2.6490 | 0.4955 | 0.2628 | 0.4347 | 0.4343 | 0.8767 | 0.873 | 8.4985 | 17 | 3 | 13.018 | 7.5075 |
| 0.312 | 85.0 | 12325 | 2.6425 | 0.4928 | 0.2689 | 0.4364 | 0.4358 | 0.8763 | 0.8728 | 8.5766 | 17 | 3 | 13.0631 | 9.9099 |
| 0.3081 | 86.0 | 12470 | 2.6314 | 0.4904 | 0.2648 | 0.4327 | 0.432 | 0.875 | 0.8722 | 8.6246 | 17 | 3 | 13.1411 | 10.5105 |
| 0.3043 | 87.0 | 12615 | 2.6485 | 0.4863 | 0.259 | 0.4273 | 0.4259 | 0.8736 | 0.8709 | 8.5736 | 17 | 3 | 13.0901 | 9.6096 |
| 0.3034 | 88.0 | 12760 | 2.6402 | 0.4867 | 0.2604 | 0.4279 | 0.4274 | 0.8739 | 0.871 | 8.5706 | 17 | 3 | 13.0751 | 8.1081 |
| 0.3058 | 89.0 | 12905 | 2.6573 | 0.4926 | 0.2638 | 0.4348 | 0.4339 | 0.8762 | 0.872 | 8.4805 | 17 | 3 | 12.955 | 7.8078 |
| 0.2909 | 90.0 | 13050 | 2.6654 | 0.4955 | 0.2679 | 0.4357 | 0.4342 | 0.8756 | 0.8729 | 8.6817 | 17 | 3 | 13.1802 | 10.2102 |
| 0.3082 | 91.0 | 13195 | 2.6757 | 0.4942 | 0.2671 | 0.4362 | 0.4349 | 0.8756 | 0.8724 | 8.5796 | 17 | 3 | 13.0721 | 9.6096 |
| 0.3016 | 92.0 | 13340 | 2.6791 | 0.4933 | 0.2657 | 0.4351 | 0.4345 | 0.875 | 0.8722 | 8.6336 | 17 | 3 | 13.1441 | 9.9099 |
| 0.2993 | 93.0 | 13485 | 2.6814 | 0.493 | 0.2658 | 0.433 | 0.4318 | 0.8747 | 0.8726 | 8.6997 | 17 | 3 | 13.2462 | 11.1111 |
| 0.3022 | 94.0 | 13630 | 2.6698 | 0.4929 | 0.2638 | 0.4334 | 0.4324 | 0.8751 | 0.8723 | 8.5976 | 17 | 3 | 13.0961 | 9.3093 |
| 0.2921 | 95.0 | 13775 | 2.6665 | 0.4867 | 0.2586 | 0.4294 | 0.4284 | 0.8744 | 0.8709 | 8.4955 | 17 | 3 | 12.988 | 8.4084 |
| 0.3034 | 96.0 | 13920 | 2.6704 | 0.4854 | 0.2574 | 0.4275 | 0.4266 | 0.8742 | 0.8704 | 8.4805 | 17 | 3 | 12.9429 | 8.7087 |
| 0.3063 | 97.0 | 14065 | 2.6749 | 0.4863 | 0.2576 | 0.4275 | 0.4266 | 0.8743 | 0.8707 | 8.4805 | 17 | 3 | 12.9369 | 8.7087 |
| 0.2984 | 98.0 | 14210 | 2.6772 | 0.4858 | 0.258 | 0.4274 | 0.4264 | 0.8739 | 0.8704 | 8.5105 | 17 | 3 | 12.97 | 9.6096 |
| 0.2942 | 99.0 | 14355 | 2.6784 | 0.4872 | 0.2595 | 0.4279 | 0.427 | 0.874 | 0.8704 | 8.5075 | 17 | 3 | 12.967 | 9.6096 |
| 0.2866 | 100.0 | 14500 | 2.6784 | 0.4871 | 0.2579 | 0.428 | 0.4272 | 0.8743 | 0.8706 | 8.4775 | 17 | 3 | 12.9249 | 9.3093 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
hanlforever/xlm-roberta-base-finetuned-panx-de-fr
|
hanlforever
| 2023-09-11T15:00:13Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-11T13:40:18Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1650
- F1: 0.8562
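
The snippet below is a minimal usage sketch added for illustration (it is not part of the original card). It assumes the standard token-classification `pipeline()` and that the PAN-X entity labels (PER/ORG/LOC) are stored in the model's own config:

```python
# Minimal inference sketch (assumption: the checkpoint loads with the standard Auto classes)
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hanlforever/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

# German example sentence; the model was fine-tuned on German and French PAN-X data
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```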
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2884 | 1.0 | 715 | 0.1855 | 0.8234 |
| 0.1452 | 2.0 | 1430 | 0.1642 | 0.8458 |
| 0.094 | 3.0 | 2145 | 0.1650 | 0.8562 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.11.0
|
Pablo94/racism-finetuned-detests
|
Pablo94
| 2023-09-11T14:58:52Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:davidmasip/racism",
"base_model:finetune:davidmasip/racism",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-14T04:56:57Z |
---
license: cc
base_model: davidmasip/racism
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: racism-finetuned-detests
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# racism-finetuned-detests
This model is a fine-tuned version of [davidmasip/racism](https://huggingface.co/davidmasip/racism) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1626
- Accuracy: 0.8331
- F1-score: 0.7625
- Precision: 0.7625
- Recall: 0.7625
- Auc: 0.7625
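
As a usage illustration only (not from the original card), the model should work with the standard text-classification `pipeline()`; the label names and their meaning are defined in the model's `id2label` config rather than documented in this card:

```python
# Minimal usage sketch; the example sentence is a placeholder, not taken from the training data
from transformers import pipeline

classifier = pipeline("text-classification", model="Pablo94/racism-finetuned-detests")
print(classifier("Texto de ejemplo para clasificar."))  # -> [{'label': ..., 'score': ...}]
```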
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-score | Precision | Recall | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:------:|:------:|
| 0.2554 | 1.0 | 174 | 0.3618 | 0.8380 | 0.7340 | 0.7901 | 0.7073 | 0.7073 |
| 0.0488 | 2.0 | 348 | 0.7445 | 0.8282 | 0.7549 | 0.7556 | 0.7543 | 0.7543 |
| 0.0005 | 3.0 | 522 | 0.9204 | 0.8429 | 0.7681 | 0.7794 | 0.7587 | 0.7587 |
| 0.0001 | 4.0 | 696 | 1.0194 | 0.8462 | 0.7741 | 0.7838 | 0.7659 | 0.7659 |
| 0.0001 | 5.0 | 870 | 1.0721 | 0.8363 | 0.7648 | 0.7676 | 0.7621 | 0.7621 |
| 0.0001 | 6.0 | 1044 | 1.1081 | 0.8331 | 0.7625 | 0.7625 | 0.7625 | 0.7625 |
| 0.0 | 7.0 | 1218 | 1.1324 | 0.8331 | 0.7625 | 0.7625 | 0.7625 | 0.7625 |
| 0.0 | 8.0 | 1392 | 1.1492 | 0.8331 | 0.7625 | 0.7625 | 0.7625 | 0.7625 |
| 0.0 | 9.0 | 1566 | 1.1592 | 0.8331 | 0.7625 | 0.7625 | 0.7625 | 0.7625 |
| 0.0 | 10.0 | 1740 | 1.1626 | 0.8331 | 0.7625 | 0.7625 | 0.7625 | 0.7625 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
esperesa/xlm-roberta-base-finetuned-panx-de-fr
|
esperesa
| 2023-09-11T14:56:02Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-11T14:44:12Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1606
- F1: 0.8620
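
For illustration (not part of the original card), a minimal inference sketch assuming the checkpoint loads with the standard Auto classes:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "esperesa/xlm-roberta-base-finetuned-panx-de-fr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# group sub-word tokens into complete named entities
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Emmanuel Macron a rencontré Angela Merkel à Paris."))
```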
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2873 | 1.0 | 715 | 0.1802 | 0.8245 |
| 0.1446 | 2.0 | 1430 | 0.1601 | 0.8512 |
| 0.0925 | 3.0 | 2145 | 0.1606 | 0.8620 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.14.0
|
sanchit-gandhi/whisper-small-hi-flax
|
sanchit-gandhi
| 2023-09-11T14:46:41Z | 11 | 1 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"ar",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-11T08:36:19Z |
---
language:
- ar
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
pipeline_tag: automatic-speech-recognition
base_model: openai/whisper-small
model-index:
- name: whisper_small_hi_flax
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 13.0
type: mozilla-foundation/common_voice_13_0
config: hi
split: test
metrics:
- type: wer
value: 33.96828
name: Wer
---
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13.0 dataset in Flax.
It is trained using the Transformers **Flax** examples script, and achieves the following results on the evaluation set:
- Loss: 0.02091
- Wer: 33.96828
The training run can be reproduced in approximately 25 minutes by executing the script [`run.sh`](https://huggingface.co/sanchit-gandhi/whisper-small-hi-flax/blob/main/run.sh).
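
A minimal inference sketch is shown below for illustration; it is not part of the original card and assumes the repository's Flax weights load with `FlaxWhisperForConditionalGeneration` and that the input audio is 16 kHz mono:

```python
# Minimal Flax inference sketch (assumptions noted above)
import numpy as np
from transformers import WhisperProcessor, FlaxWhisperForConditionalGeneration

model_id = "sanchit-gandhi/whisper-small-hi-flax"
processor = WhisperProcessor.from_pretrained(model_id)
model = FlaxWhisperForConditionalGeneration.from_pretrained(model_id)

# replace this silent dummy signal with a real 16 kHz mono waveform
audio_array = np.zeros(16000, dtype=np.float32)

inputs = processor(audio_array, sampling_rate=16000, return_tensors="np")
pred_ids = model.generate(inputs.input_features).sequences
print(processor.batch_decode(pred_ids, skip_special_tokens=True)[0])
```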
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-04
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_train_epochs: 10
### Training results
See [Tensorboard logs](https://huggingface.co/sanchit-gandhi/whisper-small-hi-flax/tensorboard) for details.
|
gyesibiney/Distilbert-movie-review-sentiment-classifier-2
|
gyesibiney
| 2023-09-11T14:45:58Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-10T18:57:28Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Distilbert-capstone_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distilbert-capstone_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4272
- Accuracy: 0.9251
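
As an illustration only (not part of the original card), the model should be usable with the standard text-classification `pipeline()`; the label names come from the model's `id2label` config:

```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="gyesibiney/Distilbert-movie-review-sentiment-classifier-2",
)
print(sentiment("A beautifully shot film with a script that never quite lands."))
```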
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2895 | 1.0 | 623 | 0.2569 | 0.8930 |
| 0.1635 | 2.0 | 1246 | 0.2479 | 0.9171 |
| 0.0911 | 3.0 | 1869 | 0.3438 | 0.9207 |
| 0.053 | 4.0 | 2492 | 0.3986 | 0.9223 |
| 0.011 | 5.0 | 3115 | 0.4272 | 0.9251 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
AIYIYA/my_tt
|
AIYIYA
| 2023-09-11T14:42:38Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-11T14:04:56Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: AIYIYA/my_tt
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AIYIYA/my_tt
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0110
- Validation Loss: 1.1941
- Train Accuracy: 0.5185
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 20, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.8538 | 1.2004 | 0.5185 | 0 |
| 1.0820 | 1.1683 | 0.5185 | 1 |
| 1.0110 | 1.1941 | 0.5185 | 2 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jncraton/LaMini-GPT-774M-ct2-int8
|
jncraton
| 2023-09-11T14:38:50Z | 13 | 0 |
transformers
|
[
"transformers",
"text-generation",
"en",
"arxiv:2304.14402",
"base_model:openai-community/gpt2-large",
"base_model:finetune:openai-community/gpt2-large",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-24T21:16:48Z |
---
language:
- en
license: cc-by-nc-4.0
pipeline_tag: text-generation
widget:
- text: 'Below is an instruction that describes a task.
Write a response that appropriately completes the request.
### Instruction:
how can I become more healthy?
### Response:'
example_title: example
base_model: gpt2-large
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
# LaMini-GPT-774M
[]()
This model is one of our LaMini-LM model series, presented in the paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)".
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction), which contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/).
You can view the other models of the LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper.
<table>
<thead>
<tr>
<th>Base model</th>
<th colspan="4">LaMini-LM series (#parameters)</th>
</tr>
</thead>
<tbody>
<tr>
<td>T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td>
<td></td>
</tr>
<tr>
<td>Flan-T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td>
<td></td>
</tr>
<tr>
<td>Cerebras-GPT</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td>
</tr>
<tr>
<td>GPT-2</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td>
<td></td>
</tr>
<tr>
<td>GPT-Neo</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GPT-J</td>
<td colspan="4">coming soon</td>
</tr>
<tr>
<td>LLaMA</td>
<td colspan="4">coming soon</td>
</tr>
</tbody>
</table>
## Use
### Intended use
We recommend using the model to respond to human instructions written in natural language.
Since this decoder-only model is fine-tuned with wrapper text, we suggest using the same wrapper text to achieve the best performance.
See the example on the right or the code below.
We now show you how to load and use our model using HuggingFace `pipeline()`.
```python
# pip install -q transformers
from transformers import pipeline
checkpoint = "{model_name}"  # replace with the repository id of the model you want to load
model = pipeline('text-generation', model = checkpoint)
instruction = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'
input_prompt = f"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text']
print("Response", generated_text)
```
## Training Procedure
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
</p>
We initialize with [gpt2-large](https://huggingface.co/gpt2-large) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 774M.
### Training Hyperparameters
## Evaluation
We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper]().
## Limitations
More information needed
# Citation
```bibtex
@article{lamini-lm,
author = {Minghao Wu and
Abdul Waheed and
Chiyu Zhang and
Muhammad Abdul-Mageed and
Alham Fikri Aji
},
title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
journal = {CoRR},
volume = {abs/2304.14402},
year = {2023},
url = {https://arxiv.org/abs/2304.14402},
eprinttype = {arXiv},
eprint = {2304.14402}
}
```
|
jncraton/LaMini-GPT-124M-ct2-int8
|
jncraton
| 2023-09-11T14:38:27Z | 563 | 0 |
transformers
|
[
"transformers",
"text-generation",
"en",
"arxiv:2304.14402",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-24T22:21:05Z |
---
language:
- en
license: cc-by-nc-4.0
pipeline_tag: text-generation
widget:
- text: 'Below is an instruction that describes a task.
Write a response that appropriately completes the request.
### Instruction:
how can I become more healthy?
### Response:'
example_title: example
base_model: gpt2
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
# LaMini-GPT-124M
[]()
This model is one of our LaMini-LM model series, presented in the paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)".
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction), which contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/).
You can view the other models of the LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper.
<table>
<thead>
<tr>
<th>Base model</th>
<th colspan="4">LaMini-LM series (#parameters)</th>
</tr>
</thead>
<tbody>
<tr>
<td>T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td>
<td></td>
</tr>
<tr>
<td>Flan-T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td>
<td></td>
</tr>
<tr>
<td>Cerebras-GPT</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td>
</tr>
<tr>
<td>GPT-2</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td>
<td></td>
</tr>
<tr>
<td>GPT-Neo</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GPT-J</td>
<td colspan="4">coming soon</td>
</tr>
<tr>
<td>LLaMA</td>
<td colspan="4">coming soon</td>
</tr>
</tbody>
</table>
## Use
### Intended use
We recommend using the model to respond to human instructions written in natural language.
Since this decoder-only model is fine-tuned with wrapper text, we suggest using the same wrapper text to achieve the best performance.
See the example on the right or the code below.
We now show you how to load and use our model using HuggingFace `pipeline()`.
```python
# pip install -q transformers
from transformers import pipeline
checkpoint = "{model_name}"  # replace with the repository id of the model you want to load
model = pipeline('text-generation', model = checkpoint)
instruction = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'
input_prompt = f"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text']
print("Response", generated_text)
```
## Training Procedure
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
</p>
We initialize with [gpt2](https://huggingface.co/gpt2) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 124M.
### Training Hyperparameters
## Evaluation
We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper]().
## Limitations
More information needed
# Citation
```bibtex
@article{lamini-lm,
author = {Minghao Wu and
Abdul Waheed and
Chiyu Zhang and
Muhammad Abdul-Mageed and
Alham Fikri Aji
},
title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
journal = {CoRR},
volume = {abs/2304.14402},
year = {2023},
url = {https://arxiv.org/abs/2304.14402},
eprinttype = {arXiv},
eprint = {2304.14402}
}
```
|
Pablo94/bert-base-uncased-finetuned-detests
|
Pablo94
| 2023-09-11T14:38:05Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-21T15:48:17Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: bert-base-uncased-finetuned-detests
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-detests
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5026
- Accuracy: 0.7856
- F1-score: 0.7175
- Precision: 0.7058
- Recall: 0.7369
- Auc: 0.7369
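
For illustration only (not from the original card), a minimal sketch that loads the checkpoint with the standard Auto classes and turns the logits into class probabilities; the class meanings are defined by the model's `id2label` config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Pablo94/bert-base-uncased-finetuned-detests"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Texto de ejemplo para clasificar.", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

for label_id, p in enumerate(probs):
    print(model.config.id2label[label_id], round(p.item(), 4))
```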
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-score | Precision | Recall | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:------:|:------:|
| 0.271 | 1.0 | 174 | 0.4648 | 0.7954 | 0.7005 | 0.7070 | 0.6950 | 0.6950 |
| 0.2246 | 2.0 | 348 | 0.5229 | 0.7987 | 0.7053 | 0.7119 | 0.6997 | 0.6997 |
| 0.3814 | 3.0 | 522 | 0.7043 | 0.7676 | 0.7018 | 0.6896 | 0.7278 | 0.7278 |
| 0.1343 | 4.0 | 696 | 0.8843 | 0.7938 | 0.7217 | 0.7124 | 0.7346 | 0.7346 |
| 0.0063 | 5.0 | 870 | 1.0890 | 0.7807 | 0.7040 | 0.6955 | 0.7159 | 0.7159 |
| 0.063 | 6.0 | 1044 | 1.1208 | 0.8101 | 0.7378 | 0.7316 | 0.7452 | 0.7452 |
| 0.0022 | 7.0 | 1218 | 1.1989 | 0.8249 | 0.7318 | 0.7543 | 0.7166 | 0.7166 |
| 0.0356 | 8.0 | 1392 | 1.5295 | 0.7758 | 0.7151 | 0.7016 | 0.7457 | 0.7457 |
| 0.0002 | 9.0 | 1566 | 1.4269 | 0.8003 | 0.7202 | 0.7171 | 0.7236 | 0.7236 |
| 0.0004 | 10.0 | 1740 | 1.5026 | 0.7856 | 0.7175 | 0.7058 | 0.7369 | 0.7369 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jncraton/LaMini-Flan-T5-77M-ct2-int8
|
jncraton
| 2023-09-11T14:37:59Z | 4 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"instruction fine-tuning",
"text2text-generation",
"en",
"arxiv:2304.14402",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-05T13:24:22Z |
---
language:
- en
license: cc-by-nc-4.0
tags:
- generated_from_trainer
- instruction fine-tuning
pipeline_tag: text2text-generation
widget:
- text: how can I become more healthy?
example_title: example
base_model: google/flan-t5-small
model-index:
- name: flan-t5-small-distil-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
# LaMini-Flan-T5-77M
[]()
This model is one of our LaMini-LM model series, presented in the paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)". This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction), which contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/).
You can view the other models of the LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper.
<table>
<thead>
<tr>
<th>Base model</th>
<th colspan="4">LaMini-LM series (#parameters)</th>
</tr>
</thead>
<tbody>
<tr>
<td>T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td>
<td></td>
</tr>
<tr>
<td>Flan-T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td>
<td></td>
</tr>
<tr>
<td>Cerebras-GPT</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td>
</tr>
<tr>
<td>GPT-2</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td>
<td></td>
</tr>
<tr>
<td>GPT-Neo</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GPT-J</td>
<td colspan="4">coming soon</td>
</tr>
<tr>
<td>LLaMA</td>
<td colspan="4">coming soon</td>
</tr>
</tbody>
</table>
## Use
### Intended use
We recommend using the model to respond to human instructions written in natural language.
We now show you how to load and use our model using HuggingFace `pipeline()`.
```python
# pip install -q transformers
from transformers import pipeline
checkpoint = "{model_name}"  # replace with the repository id of the model you want to load
model = pipeline('text2text-generation', model = checkpoint)
input_prompt = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'
generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text']
print("Response", generated_text)
```
## Training Procedure
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
</p>
We initialize with [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 77M.
### Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
## Evaluation
We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper]().
## Limitations
More information needed
# Citation
```bibtex
@article{lamini-lm,
author = {Minghao Wu and
Abdul Waheed and
Chiyu Zhang and
Muhammad Abdul-Mageed and
Alham Fikri Aji
},
title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
journal = {CoRR},
volume = {abs/2304.14402},
year = {2023},
url = {https://arxiv.org/abs/2304.14402},
eprinttype = {arXiv},
eprint = {2304.14402}
}
```
|
jncraton/LaMini-Flan-T5-248M-ct2-int8
|
jncraton
| 2023-09-11T14:37:41Z | 232 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"instruction fine-tuning",
"text2text-generation",
"en",
"arxiv:2304.14402",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-04T21:36:33Z |
---
language:
- en
license: cc-by-nc-4.0
tags:
- generated_from_trainer
- instruction fine-tuning
pipeline_tag: text2text-generation
widget:
- text: how can I become more healthy?
example_title: example
base_model: google/flan-t5-base
model-index:
- name: flan-t5-small-distil-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
# LaMini-Flan-T5-248M
[]()
This model is one of our LaMini-LM model series, presented in the paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)". This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction), which contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/).
You can view the other models of the LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper.
<table>
<thead>
<tr>
<th>Base model</th>
<th colspan="4">LaMini-LM series (#parameters)</th>
</tr>
</thead>
<tbody>
<tr>
<td>T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td>
<td></td>
</tr>
<tr>
<td>Flan-T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td>
<td></td>
</tr>
<tr>
<td>Cerebras-GPT</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td>
</tr>
<tr>
<td>GPT-2</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td>
<td></td>
</tr>
<tr>
<td>GPT-Neo</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GPT-J</td>
<td colspan="4">coming soon</td>
</tr>
<tr>
<td>LLaMA</td>
<td colspan="4">coming soon</td>
</tr>
</tbody>
</table>
## Use
### Intended use
We recommend using the model to respond to human instructions written in natural language.
We now show you how to load and use our model using HuggingFace `pipeline()`.
```python
# pip install -q transformers
from transformers import pipeline
checkpoint = "{model_name}"  # replace with the repository id of the model you want to load
model = pipeline('text2text-generation', model = checkpoint)
input_prompt = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'
generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text']
print("Response", generated_text)
```
## Training Procedure
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
</p>
We initialize with [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 248M.
### Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
## Evaluation
We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper]().
## Limitations
More information needed
# Citation
```bibtex
@article{lamini-lm,
author = {Minghao Wu and
Abdul Waheed and
Chiyu Zhang and
Muhammad Abdul-Mageed and
Alham Fikri Aji
},
title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
journal = {CoRR},
volume = {abs/2304.14402},
year = {2023},
url = {https://arxiv.org/abs/2304.14402},
eprinttype = {arXiv},
eprint = {2304.14402}
}
```
|
Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc
|
Jzuluaga
| 2023-09-11T14:30:11Z | 96 | 3 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"en-atc",
"en",
"generated_from_trainer",
"dataset:Jzuluaga/uwb_atcc",
"arxiv:2203.16822",
"arxiv:2211.04054",
"base_model:facebook/wav2vec2-large-960h-lv60-self",
"base_model:finetune:facebook/wav2vec2-large-960h-lv60-self",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-30T07:59:57Z |
---
language: en
license: apache-2.0
tags:
- audio
- automatic-speech-recognition
- en-atc
- en
- generated_from_trainer
datasets:
- Jzuluaga/uwb_atcc
metrics:
- wer
base_model: facebook/wav2vec2-large-960h-lv60-self
model-index:
- name: wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
name: UWB-ATCC dataset (Air Traffic Control Communications)
type: Jzuluaga/uwb_atcc
config: test
split: test
metrics:
- type: wer
value: 17.2
name: TEST WER
verified: false
- type: wer
value: 13.72
name: TEST WER (+LM)
verified: false
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
name: ATCOSIM corpus (Air Traffic Control Communications)
type: Jzuluaga/atcosim_corpus
config: test
split: test
metrics:
- type: wer
value: 15.31
name: TEST WER
verified: false
- type: wer
value: 11.88
name: TEST WER (+LM)
verified: false
---
# wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the [UWB-ATCC corpus](https://huggingface.co/datasets/Jzuluaga/uwb_atcc).
<a href="https://colab.research.google.com/github/idiap/w2v2-air-traffic/blob/main/src/eval_xlsr_atc_model.ipynb">
<img alt="Colab" src="https://colab.research.google.com/assets/colab-badge.svg">
</a>
<a href="https://github.com/idiap/w2v2-air-traffic">
<img alt="GitHub" src="https://img.shields.io/badge/GitHub-Open%20source-green">
</a>
It achieves the following results on the evaluation set:
- Loss: 0.7287
- Wer: 0.1756
Paper: [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822).
Authors: Juan Zuluaga-Gomez, Amrutha Prasad, Iuliia Nigmatulina, Saeed Sarfjoo, Petr Motlicek, Matthias Kleinert, Hartmut Helmke, Oliver Ohneiser, Qingran Zhan
Abstract: Recent work on self-supervised pre-training focuses on leveraging large-scale unlabeled speech data to build robust end-to-end (E2E) acoustic models (AM) that can later be fine-tuned on downstream tasks, e.g., automatic speech recognition (ASR). Yet, few works have investigated the impact on performance when the data properties substantially differ between the pre-training and fine-tuning phases, termed domain shift. We target this scenario by analyzing the robustness of Wav2Vec 2.0 and XLS-R models on downstream ASR for a completely unseen domain, air traffic control (ATC) communications. We benchmark these two models on several open-source and challenging ATC databases with signal-to-noise ratios between 5 and 20 dB. Relative word error rate (WER) reductions between 20% and 40% are obtained in comparison to hybrid-based ASR baselines by only fine-tuning E2E acoustic models with a smaller fraction of labeled data. We analyze WERs in the low-resource scenario and the gender bias carried by one ATC dataset.
Code — GitHub repository: https://github.com/idiap/w2v2-air-traffic
## Usage
You can use our Google Colab notebook to run and evaluate our model: https://github.com/idiap/w2v2-air-traffic/blob/master/src/eval_xlsr_atc_model.ipynb
## Intended uses & limitations
This model was fine-tuned on air traffic control data. We do not expect it to maintain the same performance on other datasets, e.g., LibriSpeech or CommonVoice.
## Training and evaluation data
See Table 1 (page 3) in our paper: [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822), where we describe the data partitions used for fine-tuning and evaluating our model.
- We use the UWB-ATCC corpus to fine-tune this model. You can download the raw data here: https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0
- However, do not worry: we have prepared the database in `Datasets` format here: [UWB-ATCC corpus on HuggingFace](https://huggingface.co/datasets/Jzuluaga/uwb_atcc). You can scroll through the train/test partitions and even listen to some of the audio.
- If you want to prepare a database in HuggingFace format, you can follow the data loader script in: [data_loader_atc.py](https://huggingface.co/datasets/Jzuluaga/uwb_atcc/blob/main/atc_data_loader.py).
## Writing your own inference script
If you use the language model, you need to install the KenLM bindings with:
```bash
conda activate your_environment
pip install https://github.com/kpu/kenlm/archive/master.zip
```
The snippet of code:
```python
from datasets import load_dataset, load_metric, Audio
import torch
from transformers import AutoModelForCTC, Wav2Vec2Processor, Wav2Vec2ProcessorWithLM
import torchaudio.functional as F
USE_LM = False
DATASET_ID = "Jzuluaga/uwb_atcc"
MODEL_ID = "Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc"
# 1. Load the dataset
# we only load the 'test' partition, however, if you want to load the 'train' partition, you can change it accordingly
uwb_atcc_corpus_test = load_dataset(DATASET_ID, "test", split="test")
# 2. Load the model
model = AutoModelForCTC.from_pretrained(MODEL_ID)
# 3. Load the processor; we offer support with LM, which should yield better results
if USE_LM:
processor = Wav2Vec2ProcessorWithLM.from_pretrained(MODEL_ID)
else:
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
# 4. Format the test sample
sample = next(iter(uwb_atcc_corpus_test))
file_sampling_rate = sample['audio']['sampling_rate']
# resample if necessary
if file_sampling_rate != 16000:
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), file_sampling_rate, 16000).numpy()
else:
resampled_audio = torch.tensor(sample["audio"]["array"]).numpy()
input_values = processor(resampled_audio, return_tensors="pt").input_values
# 5. Run the forward pass in the model
with torch.no_grad():
logits = model(input_values).logits
# get the transcription with processor
if USE_LM:
transcription = processor.batch_decode(logits.numpy()).text
else:
pred_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(pred_ids)
# print the output
print(transcription)
```
# Cite us
If you use this code for your research, please cite our paper with:
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 1.06 | 500 | 2.9016 | 0.9995 |
| 2.877 | 2.12 | 1000 | 0.9812 | 0.3485 |
| 2.877 | 3.18 | 1500 | 0.7842 | 0.2732 |
| 0.7834 | 4.25 | 2000 | 0.6962 | 0.2192 |
| 0.7834 | 5.31 | 2500 | 0.6527 | 0.2042 |
| 0.6084 | 6.37 | 3000 | 0.6220 | 0.1972 |
| 0.6084 | 7.43 | 3500 | 0.6442 | 0.1934 |
| 0.5147 | 8.49 | 4000 | 0.6793 | 0.1950 |
| 0.5147 | 9.55 | 4500 | 0.6432 | 0.1920 |
| 0.4566 | 10.62 | 5000 | 0.6605 | 0.1853 |
| 0.4566 | 11.68 | 5500 | 0.6393 | 0.1866 |
| 0.4155 | 12.74 | 6000 | 0.6918 | 0.1803 |
| 0.4155 | 13.8 | 6500 | 0.6514 | 0.1791 |
| 0.372 | 14.86 | 7000 | 0.7010 | 0.1851 |
| 0.372 | 15.92 | 7500 | 0.6824 | 0.1786 |
| 0.3368 | 16.99 | 8000 | 0.6895 | 0.1780 |
| 0.3368 | 18.05 | 8500 | 0.7150 | 0.1759 |
| 0.3244 | 19.11 | 9000 | 0.7141 | 0.1759 |
| 0.3244 | 20.17 | 9500 | 0.7225 | 0.1756 |
| 0.2981 | 21.23 | 10000 | 0.7287 | 0.1756 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2
|
MaxKazak/ruBert-base-russian-emotion-detection
|
MaxKazak
| 2023-09-11T14:27:43Z | 13,789 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sentiment",
"emotion-classification",
"multilabel",
"multiclass",
"ru",
"dataset:Djacon/ru_goemotions",
"base_model:ai-forever/ruBert-base",
"base_model:finetune:ai-forever/ruBert-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-28T15:25:35Z |
---
language:
- ru
license: apache-2.0
tags:
- sentiment
- emotion-classification
- multilabel
- multiclass
datasets:
- Djacon/ru_goemotions
metrics:
- accuracy
widget:
- text: Очень рад тебя видеть!
- text: Как дела?
- text: Мне немного отвратно это делать
- text: Я испытал мурашки от страха
- text: Нет ничего радостного в этих горьких новостях
- text: Ого, неожидал тебя здесь увидеть!
- text: Фу ну и мерзость
- text: Мне неприятно общение с тобой
base_model: ai-forever/ruBert-base
model-index:
- name: ruBert-base-russian-emotions-classifier-goEmotions
results:
- task:
type: multilabel-text-classification
name: Multilabel Text Classification
dataset:
name: ru_goemotions
type: Djacon/ru_goemotions
args: ru
metrics:
- type: roc_auc
value: 92%
name: multilabel ROC AUC
---
# ruBert-base-russian-emotions-classifier-goEmotions
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on [Djacon/ru_goemotions](https://huggingface.co/datasets/Djacon/ru_goemotions).
It achieves the following results on the evaluation set (2nd epoch):
- Loss: 0.2088
- AUC: 0.9240
The quality of the predicted probabilities on the test dataset is as follows:
| label | joy | interest | surprise | sadness | anger | disgust | fear | guilt | neutral | average |
|----------|--------|----------|---------|---------|--------|---------|--------|--------|---------|---------|
| AUC | 0.9369 | 0.9213 | 0.9325 | 0.8791 | 0.8374 | 0.9041 | 0.9470 | 0.9758 | 0.8518 | 0.9095 |
| F1-micro | 0.9528 | 0.9157 | 0.9697 | 0.9284 | 0.8690 | 0.9658 | 0.9851 | 0.9875 | 0.7654 | 0.9266 |
| F1-macro | 0.8369 | 0.7922 | 0.7561 | 0.7392 | 0.7351 | 0.7356 | 0.8176 | 0.8247 | 0.7650 | 0.7781 |
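## How to use
A minimal multilabel inference sketch (based on the standard `transformers` API; the 0.5 threshold below is an illustrative assumption, not taken from the original training setup):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "MaxKazak/ruBert-base-russian-emotion-detection"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Очень рад тебя видеть!"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# multilabel head: apply a sigmoid and keep every label above the chosen threshold
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```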
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | AUC |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1755 | 1.0 | 1685 | 0.1717 | 0.9220 |
| 0.1391 | 2.0 | 3370 | 0.1757 | 0.9240 |
| 0.0899 | 3.0 | 5055 | 0.2088 | 0.9106 |
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
Jukaboo/Llama2_7B_chat_dialogsum_ft_adapters_v12100
|
Jukaboo
| 2023-09-11T14:25:04Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-09-11T12:57:18Z |
---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: Llama2_7B_chat_dialogsum_ft_adapters_v12100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama2_7B_chat_dialogsum_ft_adapters_v12100
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
alk/distilbert-base-uncased-finetuned-header-classifier
|
alk
| 2023-09-11T14:24:19Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-24T14:26:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-header-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-header-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
alk/roberta-large-mnli-finetuned-header-classifier
|
alk
| 2023-09-11T14:24:14Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-27T19:21:00Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-large-mnli-finetuned-header-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-mnli-finetuned-header-classifier
This model is a fine-tuned version of [roberta-large-mnli](https://huggingface.co/roberta-large-mnli) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
AbdelKarim95/Reinforce-0
|
AbdelKarim95
| 2023-09-11T14:23:47Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-11T13:04:20Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 445.40 +/- 73.45
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nightdude/config_8113571
|
nightdude
| 2023-09-11T14:23:28Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-10T14:08:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
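For reference, a minimal sketch of how the equivalent `BitsAndBytesConfig` could be rebuilt when loading the adapter; the base model is not named on this card, so `BASE_MODEL_ID` is a hypothetical placeholder:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# mirrors the bitsandbytes settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

BASE_MODEL_ID = "..."  # hypothetical placeholder: the base model is not stated on this card
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL_ID, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "nightdude/config_8113571")
```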
### Framework versions
- PEFT 0.5.0.dev0
|
sanjeevnara/stablethumbs-dreambooth-multiconcept
|
sanjeevnara
| 2023-09-11T14:15:09Z | 33 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-10T22:48:48Z |
---
license: apache-2.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
---
Stable Diffusion v1.5 fine-tuned using the DreamBooth approach to generate 'thumbs-up' style images. It is also trained to generate professional soccer player Vinicius Jr.'s face. <br>
Prompt Guide:
- For a thumbs up style, add 'with a thumbs up' or 'thumbs up gesture' to your prompt e.g. `'photo of Messi with a thumbs up gesture, high quality'.`
- For Vinicius Jr, add the rare token 'xjy' e.g. `'photo of xjy with a thumbs up gesture, high quality'`.
Uses Diffusers library / StableDiffusionPipeline.
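A minimal generation sketch with the `StableDiffusionPipeline` API (the precision and sampling settings below are illustrative assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sanjeevnara/stablethumbs-dreambooth-multiconcept",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# 'xjy' is the rare token for Vinicius Jr., as described in the prompt guide above
prompt = "photo of xjy with a thumbs up gesture, high quality"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("thumbs_up.png")
```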
|
ldos/text_shortening_model_v29
|
ldos
| 2023-09-11T14:05:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-11T13:17:46Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_shortening_model_v29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v29
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6052
- Rouge1: 0.5112
- Rouge2: 0.2802
- Rougel: 0.4539
- Rougelsum: 0.4538
- Bert precision: 0.8765
- Bert recall: 0.8742
- Average word count: 8.8438
- Max word count: 16
- Min word count: 4
- Average token count: 13.4174
- % shortened texts with length > 12: 8.7087
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 1.9361 | 1.0 | 145 | 1.4858 | 0.4996 | 0.2801 | 0.4497 | 0.4507 | 0.8753 | 0.8723 | 8.7808 | 16 | 3 | 13.2372 | 7.2072 |
| 1.4692 | 2.0 | 290 | 1.3868 | 0.5013 | 0.2812 | 0.4477 | 0.4485 | 0.8736 | 0.8731 | 9.0601 | 16 | 3 | 13.7147 | 13.2132 |
| 1.2301 | 3.0 | 435 | 1.3641 | 0.5294 | 0.307 | 0.4735 | 0.474 | 0.8785 | 0.8799 | 9.0961 | 16 | 4 | 13.7327 | 16.8168 |
| 1.049 | 4.0 | 580 | 1.3702 | 0.524 | 0.2979 | 0.4705 | 0.4706 | 0.8782 | 0.8788 | 9.1081 | 16 | 4 | 13.6066 | 13.8138 |
| 0.9261 | 5.0 | 725 | 1.3843 | 0.5424 | 0.3166 | 0.489 | 0.4886 | 0.8829 | 0.8833 | 8.9219 | 17 | 4 | 13.6907 | 8.4084 |
| 0.8067 | 6.0 | 870 | 1.4039 | 0.5269 | 0.3011 | 0.4682 | 0.4684 | 0.8777 | 0.878 | 9.2252 | 17 | 4 | 13.973 | 13.2132 |
| 0.7133 | 7.0 | 1015 | 1.5083 | 0.5168 | 0.3022 | 0.4618 | 0.4613 | 0.8791 | 0.8758 | 8.7447 | 17 | 4 | 13.4655 | 10.2102 |
| 0.6428 | 8.0 | 1160 | 1.4856 | 0.5184 | 0.2907 | 0.4624 | 0.4617 | 0.8804 | 0.8754 | 8.5976 | 16 | 3 | 13.0571 | 9.009 |
| 0.5741 | 9.0 | 1305 | 1.5332 | 0.5231 | 0.3003 | 0.4669 | 0.4673 | 0.8809 | 0.8791 | 8.8829 | 17 | 4 | 13.5706 | 7.5075 |
| 0.5231 | 10.0 | 1450 | 1.5603 | 0.53 | 0.3032 | 0.4725 | 0.4727 | 0.8843 | 0.8775 | 8.4625 | 17 | 4 | 13.033 | 5.7057 |
| 0.4607 | 11.0 | 1595 | 1.6079 | 0.5118 | 0.2821 | 0.4583 | 0.4577 | 0.8777 | 0.8715 | 8.3453 | 16 | 4 | 13.012 | 6.9069 |
| 0.4136 | 12.0 | 1740 | 1.7147 | 0.5136 | 0.2849 | 0.4558 | 0.4556 | 0.8776 | 0.8734 | 8.7297 | 16 | 3 | 13.3874 | 9.3093 |
| 0.3829 | 13.0 | 1885 | 1.7425 | 0.5182 | 0.287 | 0.459 | 0.4591 | 0.8792 | 0.8746 | 8.7207 | 17 | 4 | 13.3934 | 8.1081 |
| 0.3366 | 14.0 | 2030 | 1.7518 | 0.5171 | 0.2871 | 0.4564 | 0.4557 | 0.8796 | 0.8735 | 8.5195 | 16 | 4 | 13.0811 | 5.4054 |
| 0.3076 | 15.0 | 2175 | 1.8555 | 0.5139 | 0.2891 | 0.4581 | 0.4581 | 0.879 | 0.8754 | 8.7658 | 16 | 4 | 13.2973 | 9.9099 |
| 0.2908 | 16.0 | 2320 | 1.8983 | 0.5239 | 0.3011 | 0.4654 | 0.4651 | 0.8799 | 0.8794 | 8.979 | 16 | 4 | 13.6547 | 12.012 |
| 0.2606 | 17.0 | 2465 | 1.9211 | 0.5158 | 0.2875 | 0.4538 | 0.4542 | 0.8774 | 0.8739 | 8.7868 | 17 | 2 | 13.5736 | 12.012 |
| 0.2477 | 18.0 | 2610 | 1.9208 | 0.51 | 0.2872 | 0.4515 | 0.4517 | 0.8774 | 0.8733 | 8.6577 | 17 | 4 | 13.3093 | 10.8108 |
| 0.2195 | 19.0 | 2755 | 1.9720 | 0.5112 | 0.2838 | 0.456 | 0.4559 | 0.8775 | 0.8754 | 8.8799 | 17 | 3 | 13.4835 | 10.8108 |
| 0.1998 | 20.0 | 2900 | 1.9987 | 0.511 | 0.2817 | 0.4526 | 0.4525 | 0.8783 | 0.8751 | 8.7838 | 17 | 3 | 13.4955 | 9.9099 |
| 0.1936 | 21.0 | 3045 | 2.0389 | 0.5066 | 0.2818 | 0.4482 | 0.4485 | 0.8762 | 0.8722 | 8.6186 | 17 | 4 | 13.1231 | 9.009 |
| 0.1813 | 22.0 | 3190 | 2.0735 | 0.5078 | 0.29 | 0.4556 | 0.4562 | 0.8772 | 0.8754 | 8.8198 | 17 | 4 | 13.4895 | 9.3093 |
| 0.1726 | 23.0 | 3335 | 2.0743 | 0.5108 | 0.2901 | 0.458 | 0.4581 | 0.8795 | 0.8736 | 8.4775 | 17 | 2 | 13.0931 | 9.009 |
| 0.164 | 24.0 | 3480 | 2.1380 | 0.5077 | 0.2887 | 0.4578 | 0.4565 | 0.878 | 0.8727 | 8.4474 | 17 | 4 | 13.003 | 5.7057 |
| 0.1506 | 25.0 | 3625 | 2.1435 | 0.5005 | 0.2725 | 0.4456 | 0.4452 | 0.8748 | 0.8717 | 8.6637 | 17 | 4 | 13.2943 | 6.6066 |
| 0.1402 | 26.0 | 3770 | 2.1956 | 0.5114 | 0.2899 | 0.4577 | 0.4571 | 0.8769 | 0.8753 | 8.8709 | 17 | 4 | 13.3544 | 9.3093 |
| 0.138 | 27.0 | 3915 | 2.2175 | 0.5079 | 0.2824 | 0.4544 | 0.4548 | 0.8772 | 0.8739 | 8.6847 | 17 | 4 | 13.3423 | 8.4084 |
| 0.1313 | 28.0 | 4060 | 2.2267 | 0.5048 | 0.2793 | 0.4483 | 0.448 | 0.8747 | 0.8717 | 8.6817 | 17 | 4 | 13.2733 | 9.009 |
| 0.122 | 29.0 | 4205 | 2.2464 | 0.5105 | 0.2813 | 0.4544 | 0.4548 | 0.8746 | 0.8736 | 8.9099 | 18 | 4 | 13.4595 | 10.5105 |
| 0.1195 | 30.0 | 4350 | 2.2419 | 0.5124 | 0.2922 | 0.461 | 0.4609 | 0.8768 | 0.8733 | 8.6637 | 16 | 4 | 13.2883 | 7.5075 |
| 0.1131 | 31.0 | 4495 | 2.2243 | 0.5215 | 0.3025 | 0.4702 | 0.4698 | 0.8802 | 0.878 | 8.7117 | 16 | 4 | 13.3814 | 9.3093 |
| 0.1102 | 32.0 | 4640 | 2.2847 | 0.5078 | 0.2826 | 0.4567 | 0.4559 | 0.8788 | 0.8729 | 8.3904 | 18 | 4 | 12.9099 | 6.3063 |
| 0.1105 | 33.0 | 4785 | 2.2545 | 0.5049 | 0.2759 | 0.4489 | 0.4484 | 0.8762 | 0.8729 | 8.6667 | 18 | 4 | 13.1952 | 9.009 |
| 0.099 | 34.0 | 4930 | 2.2819 | 0.5207 | 0.296 | 0.4662 | 0.4665 | 0.8814 | 0.8775 | 8.6186 | 17 | 4 | 13.1952 | 8.1081 |
| 0.1018 | 35.0 | 5075 | 2.2901 | 0.5133 | 0.2812 | 0.4597 | 0.4597 | 0.8777 | 0.8743 | 8.7237 | 17 | 4 | 13.3243 | 10.8108 |
| 0.0992 | 36.0 | 5220 | 2.3349 | 0.5011 | 0.272 | 0.4442 | 0.4439 | 0.8738 | 0.8722 | 8.9129 | 16 | 2 | 13.5856 | 11.1111 |
| 0.0921 | 37.0 | 5365 | 2.3193 | 0.506 | 0.2816 | 0.4539 | 0.4539 | 0.8776 | 0.8739 | 8.7658 | 16 | 4 | 13.3093 | 8.7087 |
| 0.0936 | 38.0 | 5510 | 2.3404 | 0.5101 | 0.2815 | 0.4565 | 0.4566 | 0.8768 | 0.8754 | 8.8168 | 16 | 4 | 13.4535 | 10.5105 |
| 0.0833 | 39.0 | 5655 | 2.3583 | 0.5026 | 0.2818 | 0.4512 | 0.4509 | 0.8749 | 0.8743 | 8.8709 | 16 | 3 | 13.4955 | 9.3093 |
| 0.0869 | 40.0 | 5800 | 2.3443 | 0.5091 | 0.2855 | 0.4521 | 0.4521 | 0.8769 | 0.8743 | 8.8378 | 16 | 4 | 13.4474 | 11.4114 |
| 0.0783 | 41.0 | 5945 | 2.3609 | 0.5045 | 0.2851 | 0.4519 | 0.4513 | 0.8784 | 0.8738 | 8.5946 | 16 | 4 | 13.1261 | 7.8078 |
| 0.08 | 42.0 | 6090 | 2.4229 | 0.5053 | 0.2774 | 0.4508 | 0.4506 | 0.8769 | 0.8743 | 8.6667 | 16 | 4 | 13.2853 | 8.4084 |
| 0.0792 | 43.0 | 6235 | 2.3731 | 0.5156 | 0.2877 | 0.4618 | 0.4619 | 0.8775 | 0.8771 | 8.955 | 16 | 4 | 13.6937 | 8.7087 |
| 0.075 | 44.0 | 6380 | 2.4058 | 0.5119 | 0.286 | 0.453 | 0.4535 | 0.8761 | 0.8762 | 8.976 | 17 | 3 | 13.7387 | 12.012 |
| 0.0754 | 45.0 | 6525 | 2.3808 | 0.5142 | 0.2894 | 0.4584 | 0.4583 | 0.8772 | 0.8765 | 8.967 | 16 | 4 | 13.6096 | 12.3123 |
| 0.0713 | 46.0 | 6670 | 2.3949 | 0.5093 | 0.2841 | 0.4566 | 0.4568 | 0.8758 | 0.8748 | 8.8559 | 16 | 4 | 13.4775 | 9.9099 |
| 0.066 | 47.0 | 6815 | 2.4103 | 0.5094 | 0.2798 | 0.4551 | 0.4553 | 0.8763 | 0.8753 | 8.9009 | 16 | 4 | 13.4655 | 10.2102 |
| 0.0684 | 48.0 | 6960 | 2.4284 | 0.5021 | 0.2763 | 0.4476 | 0.4465 | 0.8754 | 0.8733 | 8.6727 | 16 | 4 | 13.2162 | 8.7087 |
| 0.0656 | 49.0 | 7105 | 2.4512 | 0.5137 | 0.289 | 0.4584 | 0.4583 | 0.8763 | 0.8748 | 8.8378 | 16 | 4 | 13.4174 | 9.6096 |
| 0.0664 | 50.0 | 7250 | 2.4427 | 0.5106 | 0.2789 | 0.4507 | 0.4501 | 0.8761 | 0.8747 | 8.7327 | 16 | 4 | 13.5255 | 8.4084 |
| 0.0628 | 51.0 | 7395 | 2.4792 | 0.5069 | 0.2802 | 0.4527 | 0.453 | 0.8775 | 0.8751 | 8.7417 | 16 | 2 | 13.3063 | 8.7087 |
| 0.0662 | 52.0 | 7540 | 2.4619 | 0.5103 | 0.281 | 0.4567 | 0.4567 | 0.8776 | 0.874 | 8.6216 | 16 | 3 | 13.1772 | 9.009 |
| 0.0633 | 53.0 | 7685 | 2.4705 | 0.5053 | 0.2785 | 0.4489 | 0.449 | 0.8761 | 0.8735 | 8.7447 | 16 | 4 | 13.3874 | 8.7087 |
| 0.0592 | 54.0 | 7830 | 2.4978 | 0.5133 | 0.2813 | 0.452 | 0.4528 | 0.8769 | 0.8746 | 8.8438 | 16 | 4 | 13.4354 | 9.6096 |
| 0.0577 | 55.0 | 7975 | 2.4823 | 0.5063 | 0.2793 | 0.448 | 0.4488 | 0.8758 | 0.8721 | 8.6036 | 16 | 4 | 13.1111 | 6.9069 |
| 0.0609 | 56.0 | 8120 | 2.4779 | 0.5133 | 0.2797 | 0.4539 | 0.4544 | 0.8764 | 0.8756 | 8.97 | 16 | 3 | 13.5976 | 10.5105 |
| 0.0539 | 57.0 | 8265 | 2.5132 | 0.5096 | 0.2778 | 0.453 | 0.4536 | 0.877 | 0.8734 | 8.7117 | 16 | 4 | 13.3003 | 7.2072 |
| 0.0564 | 58.0 | 8410 | 2.4783 | 0.517 | 0.2872 | 0.4622 | 0.4625 | 0.8778 | 0.8759 | 8.9159 | 16 | 4 | 13.5556 | 11.4114 |
| 0.0543 | 59.0 | 8555 | 2.5184 | 0.5071 | 0.2788 | 0.4515 | 0.4513 | 0.8766 | 0.8734 | 8.7177 | 16 | 4 | 13.2583 | 9.009 |
| 0.0518 | 60.0 | 8700 | 2.4945 | 0.5049 | 0.2754 | 0.4529 | 0.4529 | 0.8755 | 0.8749 | 8.9459 | 16 | 4 | 13.6787 | 10.8108 |
| 0.0541 | 61.0 | 8845 | 2.5282 | 0.4983 | 0.2693 | 0.4414 | 0.4403 | 0.8723 | 0.8726 | 8.973 | 16 | 4 | 13.6667 | 11.1111 |
| 0.0532 | 62.0 | 8990 | 2.5237 | 0.5007 | 0.2712 | 0.4464 | 0.4456 | 0.8741 | 0.8744 | 9.0541 | 16 | 4 | 13.7477 | 11.1111 |
| 0.0514 | 63.0 | 9135 | 2.5247 | 0.5041 | 0.2784 | 0.4525 | 0.452 | 0.8768 | 0.8735 | 8.7898 | 16 | 4 | 13.4144 | 8.7087 |
| 0.0516 | 64.0 | 9280 | 2.5289 | 0.5065 | 0.2826 | 0.4517 | 0.4515 | 0.8753 | 0.8745 | 9.042 | 16 | 4 | 13.6907 | 11.1111 |
| 0.0504 | 65.0 | 9425 | 2.5002 | 0.5055 | 0.2826 | 0.4565 | 0.4562 | 0.877 | 0.8724 | 8.6727 | 16 | 4 | 13.3123 | 7.5075 |
| 0.0479 | 66.0 | 9570 | 2.5361 | 0.503 | 0.2783 | 0.4529 | 0.4532 | 0.8756 | 0.874 | 8.8529 | 16 | 4 | 13.4865 | 8.1081 |
| 0.0515 | 67.0 | 9715 | 2.5260 | 0.5043 | 0.2758 | 0.451 | 0.4512 | 0.874 | 0.8748 | 9.0661 | 17 | 4 | 13.7808 | 10.5105 |
| 0.0544 | 68.0 | 9860 | 2.5213 | 0.5051 | 0.2846 | 0.4543 | 0.4545 | 0.8754 | 0.8739 | 8.9219 | 16 | 3 | 13.5586 | 10.5105 |
| 0.0445 | 69.0 | 10005 | 2.5543 | 0.5097 | 0.2859 | 0.4573 | 0.4577 | 0.878 | 0.8748 | 8.6937 | 16 | 3 | 13.3363 | 9.009 |
| 0.0484 | 70.0 | 10150 | 2.5472 | 0.5028 | 0.2791 | 0.4502 | 0.4503 | 0.8757 | 0.8736 | 8.8078 | 16 | 3 | 13.4264 | 7.5075 |
| 0.0437 | 71.0 | 10295 | 2.5621 | 0.5089 | 0.2851 | 0.4553 | 0.4556 | 0.8765 | 0.8742 | 8.8408 | 16 | 4 | 13.5105 | 8.7087 |
| 0.0473 | 72.0 | 10440 | 2.5503 | 0.5087 | 0.2818 | 0.4558 | 0.4555 | 0.8771 | 0.8743 | 8.8559 | 16 | 4 | 13.4204 | 8.7087 |
| 0.0472 | 73.0 | 10585 | 2.5726 | 0.5168 | 0.2866 | 0.4571 | 0.4577 | 0.8775 | 0.8761 | 8.9039 | 17 | 4 | 13.5285 | 9.6096 |
| 0.041 | 74.0 | 10730 | 2.5982 | 0.5137 | 0.2895 | 0.4594 | 0.4601 | 0.8769 | 0.8757 | 8.8709 | 16 | 4 | 13.4805 | 9.3093 |
| 0.0409 | 75.0 | 10875 | 2.5589 | 0.5058 | 0.2824 | 0.4553 | 0.4554 | 0.8766 | 0.8746 | 8.7898 | 16 | 4 | 13.3033 | 8.7087 |
| 0.0441 | 76.0 | 11020 | 2.5642 | 0.501 | 0.2791 | 0.452 | 0.4521 | 0.8763 | 0.8717 | 8.5225 | 16 | 4 | 13.048 | 6.006 |
| 0.0427 | 77.0 | 11165 | 2.5522 | 0.5102 | 0.2864 | 0.4573 | 0.4579 | 0.8784 | 0.8749 | 8.7207 | 17 | 4 | 13.3183 | 7.5075 |
| 0.0449 | 78.0 | 11310 | 2.5454 | 0.5071 | 0.2846 | 0.4567 | 0.4561 | 0.8775 | 0.875 | 8.7658 | 16 | 4 | 13.2523 | 7.5075 |
| 0.0397 | 79.0 | 11455 | 2.5598 | 0.5111 | 0.2863 | 0.4566 | 0.4569 | 0.8781 | 0.8752 | 8.7267 | 16 | 4 | 13.2973 | 7.2072 |
| 0.046 | 80.0 | 11600 | 2.5171 | 0.5063 | 0.2838 | 0.4541 | 0.4541 | 0.8768 | 0.8734 | 8.6456 | 16 | 4 | 13.2492 | 6.6066 |
| 0.0403 | 81.0 | 11745 | 2.5398 | 0.5154 | 0.2872 | 0.4584 | 0.4584 | 0.8774 | 0.876 | 8.9489 | 18 | 4 | 13.4955 | 8.7087 |
| 0.0407 | 82.0 | 11890 | 2.5526 | 0.5178 | 0.2904 | 0.4631 | 0.4632 | 0.8789 | 0.8769 | 8.8589 | 18 | 4 | 13.4354 | 7.5075 |
| 0.0414 | 83.0 | 12035 | 2.5718 | 0.5154 | 0.2876 | 0.4604 | 0.4609 | 0.8783 | 0.8749 | 8.7808 | 17 | 4 | 13.3303 | 7.5075 |
| 0.0406 | 84.0 | 12180 | 2.5673 | 0.5138 | 0.2861 | 0.4581 | 0.4587 | 0.8773 | 0.8758 | 8.8949 | 17 | 4 | 13.4895 | 8.1081 |
| 0.037 | 85.0 | 12325 | 2.5770 | 0.511 | 0.2873 | 0.4575 | 0.4573 | 0.8775 | 0.876 | 8.8559 | 16 | 4 | 13.4384 | 8.4084 |
| 0.0404 | 86.0 | 12470 | 2.5786 | 0.5145 | 0.2848 | 0.4578 | 0.4581 | 0.8774 | 0.8754 | 8.8649 | 16 | 4 | 13.4865 | 8.7087 |
| 0.0364 | 87.0 | 12615 | 2.5822 | 0.5089 | 0.2791 | 0.454 | 0.4539 | 0.8761 | 0.8743 | 8.8288 | 17 | 4 | 13.4174 | 7.8078 |
| 0.0365 | 88.0 | 12760 | 2.5821 | 0.5105 | 0.2806 | 0.4555 | 0.4559 | 0.8779 | 0.8752 | 8.7838 | 16 | 4 | 13.3634 | 7.8078 |
| 0.0359 | 89.0 | 12905 | 2.5798 | 0.5121 | 0.2787 | 0.4546 | 0.4549 | 0.8771 | 0.8753 | 8.8799 | 16 | 4 | 13.4835 | 8.4084 |
| 0.0349 | 90.0 | 13050 | 2.5960 | 0.5109 | 0.2788 | 0.4533 | 0.454 | 0.8775 | 0.8747 | 8.8108 | 16 | 4 | 13.3874 | 9.009 |
| 0.035 | 91.0 | 13195 | 2.5979 | 0.5072 | 0.2778 | 0.454 | 0.4539 | 0.8764 | 0.8743 | 8.8589 | 16 | 4 | 13.3964 | 9.6096 |
| 0.0355 | 92.0 | 13340 | 2.6016 | 0.5101 | 0.2795 | 0.4544 | 0.4548 | 0.8767 | 0.8743 | 8.8589 | 16 | 4 | 13.4505 | 9.009 |
| 0.0352 | 93.0 | 13485 | 2.6036 | 0.5107 | 0.2814 | 0.455 | 0.4554 | 0.8772 | 0.8747 | 8.8619 | 16 | 4 | 13.4294 | 9.009 |
| 0.0338 | 94.0 | 13630 | 2.6016 | 0.5065 | 0.2771 | 0.4512 | 0.4514 | 0.8758 | 0.8741 | 8.9249 | 16 | 4 | 13.5165 | 9.3093 |
| 0.0359 | 95.0 | 13775 | 2.6044 | 0.5071 | 0.2761 | 0.4496 | 0.4501 | 0.8755 | 0.8733 | 8.8559 | 16 | 4 | 13.4264 | 9.6096 |
| 0.0349 | 96.0 | 13920 | 2.5986 | 0.5072 | 0.277 | 0.4523 | 0.4524 | 0.8756 | 0.8736 | 8.8679 | 16 | 4 | 13.4655 | 9.6096 |
| 0.0358 | 97.0 | 14065 | 2.5994 | 0.5068 | 0.276 | 0.4498 | 0.4502 | 0.8749 | 0.8733 | 8.8589 | 16 | 4 | 13.4685 | 8.7087 |
| 0.0338 | 98.0 | 14210 | 2.6041 | 0.5105 | 0.2805 | 0.4536 | 0.4535 | 0.8761 | 0.8741 | 8.8498 | 16 | 4 | 13.4444 | 8.7087 |
| 0.0359 | 99.0 | 14355 | 2.6051 | 0.5095 | 0.2774 | 0.452 | 0.4522 | 0.876 | 0.8738 | 8.8529 | 16 | 4 | 13.4174 | 9.009 |
| 0.0357 | 100.0 | 14500 | 2.6052 | 0.5112 | 0.2802 | 0.4539 | 0.4538 | 0.8765 | 0.8742 | 8.8438 | 16 | 4 | 13.4174 | 8.7087 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Tensoic/Llama-2-7B-alpaca-2k-test-merged
|
Tensoic
| 2023-09-11T13:52:02Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:mhenrichsen/alpaca_2k_test",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-07T17:32:33Z |
---
datasets:
- mhenrichsen/alpaca_2k_test
---
We fine-tune the base `Llama-2-7b-hf` model on the `mhenrichsen/alpaca_2k_test` dataset using PEFT LoRA.
Find adapters at: https://huggingface.co/Tensoic/Llama-2-7B-alpaca-2k-test
Visit us at: https://tensoic.com
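Since this repository contains the merged weights, they should load with plain `transformers`; a minimal sketch (the prompt and generation settings are illustrative, following the Alpaca format used by the dataset config below):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tensoic/Llama-2-7B-alpaca-2k-test-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Alpaca-style instruction prompt (the dataset above uses the alpaca format)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what LoRA fine-tuning is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```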
## Training Setup:
```
Number of GPUs: 8x NVIDIA V100 GPUs
GPU Memory: 32GB each (SXM2 form factor)
```
## Training Configuration:
```yaml
base_model: meta-llama/Llama-2-7b-hf
base_model_config: meta-llama/Llama-2-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: mhenrichsen/alpaca_2k_test
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
output_dir: ./lora-out
sequence_len: 4096
sample_packing: false
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: false
fp16: true
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention: true
flash_attention: false
warmup_steps: 10
eval_steps: 20
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
```
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
```
|
RickyIG/image_classification
|
RickyIG
| 2023-09-11T13:48:48Z | 215 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-11T13:39:57Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.886
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6283
- Accuracy: 0.886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7254 | 0.99 | 62 | 2.5418 | 0.819 |
| 1.8131 | 2.0 | 125 | 1.8025 | 0.852 |
| 1.5991 | 2.98 | 186 | 1.6367 | 0.889 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
facebook/mask2former-swin-base-ade-semantic
|
facebook
| 2023-09-11T13:46:21Z | 1,503 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mask2former",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2112.01527",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-01-05T12:23:05Z |
---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
example_title: Castle
---
# Mask2Former
Mask2Former model trained on ADE20k semantic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on ADE20k semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-ade-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-ade-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former).
|
facebook/mbart-large-en-ro
|
facebook
| 2023-09-11T13:45:59Z | 11,496 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"mbart",
"translation",
"en",
"ro",
"license:mit",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
tags:
- translation
language:
- en
- ro
license: mit
---
### mbart-large-en-ro
This is mbart-large-cc25, fine-tuned on wmt_en_ro.
It scores BLEU 28.1 without post-processing and BLEU 38 with post-processing. Instructions are in `romanian_postprocessing.md`.
Original Code: https://github.com/pytorch/fairseq/tree/master/examples/mbart
Docs: https://huggingface.co/transformers/master/model_doc/mbart.html
Finetuning Code: examples/seq2seq/finetune.py (as of Aug 20, 2020)
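A minimal translation sketch following the standard mBART usage pattern in `transformers` (the example sentence is illustrative):
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained(
    "facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO"
)
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")

article = "UN Chief Says There Is No Military Solution in Syria"
inputs = tokenizer(article, return_tensors="pt")

# force Romanian as the target language when decoding
translated = model.generate(
    **inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"]
)
print(tokenizer.batch_decode(translated, skip_special_tokens=True)[0])
```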
|
davanstrien/detr-resnet-50_find_tuned_beyond_words
|
davanstrien
| 2023-09-11T13:45:54Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:beyond_words_23",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-02-27T22:50:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beyond_words_23
base_model: facebook/detr-resnet-50
model-index:
- name: detr-resnet-50_find_tuned_beyond_words
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_find_tuned_beyond_words
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the beyond_words_23 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7439 | 0.56 | 100 | 2.2690 |
| 1.7644 | 1.12 | 200 | 1.5053 |
| 1.557 | 1.69 | 300 | 1.3136 |
| 1.3207 | 2.25 | 400 | 1.2063 |
| 1.3705 | 2.81 | 500 | 1.2007 |
| 1.1924 | 3.37 | 600 | 1.2704 |
| 1.2604 | 3.93 | 700 | 1.1784 |
| 1.1982 | 4.49 | 800 | 1.1167 |
| 1.1912 | 5.06 | 900 | 1.1562 |
| 1.1206 | 5.62 | 1000 | 1.2124 |
| 1.1344 | 6.18 | 1100 | 1.0622 |
| 1.1388 | 6.74 | 1200 | 1.0425 |
| 1.0124 | 7.3 | 1300 | 0.9908 |
| 1.0776 | 7.87 | 1400 | 1.1182 |
| 0.9614 | 8.43 | 1500 | 0.9967 |
| 1.0136 | 8.99 | 1600 | 0.8933 |
| 1.0206 | 9.55 | 1700 | 0.9354 |
| 0.9529 | 10.11 | 1800 | 0.9751 |
| 1.0126 | 10.67 | 1900 | 0.9310 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
flyswot/test2
|
flyswot
| 2023-09-11T13:45:47Z | 265 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:flyswot/convnext-tiny-224_flyswot",
"base_model:finetune:flyswot/convnext-tiny-224_flyswot",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-15T10:46:33Z |
---
tags:
- generated_from_trainer
base_model: flyswot/convnext-tiny-224_flyswot
model-index:
- name: test2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test2
This model is a fine-tuned version of [flyswot/convnext-tiny-224_flyswot](https://huggingface.co/flyswot/convnext-tiny-224_flyswot) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.1 | 23 | 0.1128 | 0.9787 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
davanstrien/convnext_flyswot
|
davanstrien
| 2023-09-11T13:44:59Z | 248 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"base_model:facebook/convnext-base-224-22k",
"base_model:finetune:facebook/convnext-base-224-22k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- f1
base_model: facebook/convnext-base-224-22k
model-index:
- name: convnext_flyswot
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- type: f1
value: 0.959245529738118
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext_flyswot
This model is a fine-tuned version of [facebook/convnext-base-224-22k](https://huggingface.co/facebook/convnext-base-224-22k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1441
- F1: 0.9592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 666
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 52 | 0.6833 | 0.7484 |
| No log | 2.0 | 104 | 0.3666 | 0.8750 |
| No log | 3.0 | 156 | 0.2090 | 0.9321 |
| No log | 4.0 | 208 | 0.1478 | 0.9449 |
| No log | 5.0 | 260 | 0.1002 | 0.9518 |
| No log | 6.0 | 312 | 0.1053 | 0.9506 |
| No log | 7.0 | 364 | 0.1182 | 0.9616 |
| No log | 8.0 | 416 | 0.1102 | 0.9592 |
| No log | 9.0 | 468 | 0.1262 | 0.9616 |
| 0.203 | 10.0 | 520 | 0.1286 | 0.9616 |
| 0.203 | 11.0 | 572 | 0.1355 | 0.9592 |
| 0.203 | 12.0 | 624 | 0.1299 | 0.9592 |
| 0.203 | 13.0 | 676 | 0.1154 | 0.9592 |
| 0.203 | 14.0 | 728 | 0.1385 | 0.9580 |
| 0.203 | 15.0 | 780 | 0.1330 | 0.9592 |
| 0.203 | 16.0 | 832 | 0.1390 | 0.9592 |
| 0.203 | 17.0 | 884 | 0.1386 | 0.9592 |
| 0.203 | 18.0 | 936 | 0.1390 | 0.9592 |
| 0.203 | 19.0 | 988 | 0.1409 | 0.9592 |
| 0.0006 | 20.0 | 1040 | 0.1411 | 0.9592 |
| 0.0006 | 21.0 | 1092 | 0.1413 | 0.9592 |
| 0.0006 | 22.0 | 1144 | 0.1415 | 0.9592 |
| 0.0006 | 23.0 | 1196 | 0.1426 | 0.9592 |
| 0.0006 | 24.0 | 1248 | 0.1435 | 0.9592 |
| 0.0006 | 25.0 | 1300 | 0.1438 | 0.9592 |
| 0.0006 | 26.0 | 1352 | 0.1434 | 0.9592 |
| 0.0006 | 27.0 | 1404 | 0.1437 | 0.9592 |
| 0.0006 | 28.0 | 1456 | 0.1441 | 0.9592 |
| 0.0002 | 29.0 | 1508 | 0.1440 | 0.9592 |
| 0.0002 | 30.0 | 1560 | 0.1441 | 0.9592 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
davanstrien/detr-resnet-50_fine_tuned_trade_dir
|
davanstrien
| 2023-09-11T13:44:46Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2022-12-07T16:09:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: facebook/detr-resnet-50
model-index:
- name: detr-resnet-50_fine_tuned_trade_dir
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_fine_tuned_trade_dir
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
davanstrien/convnext-tiny-224-wikiart
|
davanstrien
| 2023-09-11T13:44:37Z | 216 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"convnext",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:wiki_art",
"base_model:facebook/convnext-tiny-224",
"base_model:finetune:facebook/convnext-tiny-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-03-21T12:54:11Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- wiki_art
metrics:
- accuracy
base_model: facebook/convnext-tiny-224
model-index:
- name: convnext-tiny-224-wikiart
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: huggan/wikiart
type: wiki_art
config: default
split: train
args: default
metrics:
- type: accuracy
value: 0.7140050748956372
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-wikiart
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the huggan/wikiart dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8022
- Accuracy: 0.7140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9779 | 1.0 | 8654 | 0.9191 | 0.6743 |
| 0.9959 | 2.0 | 17308 | 0.8523 | 0.6941 |
| 1.0344 | 3.0 | 25962 | 0.8277 | 0.7023 |
| 0.8853 | 4.0 | 34616 | 0.8126 | 0.7100 |
| 0.9557 | 5.0 | 43270 | 0.8022 | 0.7140 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
davanstrien/vit-manuscripts
|
davanstrien
| 2023-09-11T13:44:14Z | 72 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit_mae",
"pretraining",
"masked-auto-encoding",
"generated_from_trainer",
"base_model:facebook/vit-mae-base",
"base_model:finetune:facebook/vit-mae-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- masked-auto-encoding
- generated_from_trainer
base_model: facebook/vit-mae-base
model-index:
- name: vit-manuscripts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-manuscripts
This model is a fine-tuned version of [facebook/vit-mae-base](https://huggingface.co/facebook/vit-mae-base) on the davanstrien/manuscript_iiif_test dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5303 | 1.0 | 34 | 0.5134 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
davanstrien/iiif_manuscript_vit
|
davanstrien
| 2023-09-11T13:44:01Z | 251 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: iiif_manuscript_vit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iiif_manuscript_vit
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5684
- F1: 0.5996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5639 | 1.0 | 2269 | 0.5822 | 0.5516 |
| 0.5834 | 2.0 | 4538 | 0.5825 | 0.5346 |
| 0.5778 | 3.0 | 6807 | 0.5794 | 0.6034 |
| 0.5735 | 4.0 | 9076 | 0.5742 | 0.5713 |
| 0.5731 | 5.0 | 11345 | 0.5745 | 0.6008 |
| 0.5701 | 6.0 | 13614 | 0.5729 | 0.5499 |
| 0.5696 | 7.0 | 15883 | 0.5717 | 0.5952 |
| 0.5683 | 8.0 | 18152 | 0.5680 | 0.6005 |
| 0.5648 | 9.0 | 20421 | 0.5679 | 0.5967 |
| 0.564 | 10.0 | 22690 | 0.5684 | 0.5996 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
davanstrien/dit-base-manuscripts
|
davanstrien
| 2023-09-11T13:43:46Z | 40 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deit",
"masked-image-modeling",
"generated_from_trainer",
"base_model:facebook/deit-base-distilled-patch16-224",
"base_model:finetune:facebook/deit-base-distilled-patch16-224",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-08T17:22:08Z |
---
license: apache-2.0
tags:
- masked-image-modeling
- generated_from_trainer
base_model: facebook/deit-base-distilled-patch16-224
model-index:
- name: dit-base-manuscripts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base-manuscripts
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the davanstrien/iiif_manuscripts_label_ge_50 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1333
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1396 | 1.0 | 32 | 1.1261 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
davanstrien/vit-base-patch16-224-in21k-base-manuscripts
|
davanstrien
| 2023-09-11T13:43:35Z | 34 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"masked-image-modeling",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-10T07:44:17Z |
---
license: apache-2.0
tags:
- masked-image-modeling
- generated_from_trainer
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: vit-base-patch16-224-in21k-base-manuscripts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-base-manuscripts
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the davanstrien/iiif_manuscripts_label_ge_50 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1333
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5198 | 1.0 | 32 | 0.5208 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
davanstrien/test_mae_flysheet
|
davanstrien
| 2023-09-11T13:43:28Z | 64 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit_mae",
"pretraining",
"masked-auto-encoding",
"generated_from_trainer",
"dataset:image_folder",
"base_model:facebook/vit-mae-base",
"base_model:finetune:facebook/vit-mae-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-13T15:30:34Z |
---
license: apache-2.0
tags:
- masked-auto-encoding
- generated_from_trainer
datasets:
- image_folder
base_model: facebook/vit-mae-base
model-index:
- name: test_mae_flysheet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_mae_flysheet
This model is a fine-tuned version of [facebook/vit-mae-base](https://huggingface.co/facebook/vit-mae-base) on the davanstrien/flysheet dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.75e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.284 | 1.0 | 28 | 2.2812 |
| 2.137 | 2.0 | 56 | 2.0288 |
| 1.6016 | 3.0 | 84 | 1.2437 |
| 0.8055 | 4.0 | 112 | 0.7419 |
| 0.5304 | 5.0 | 140 | 0.5151 |
| 0.4873 | 6.0 | 168 | 0.4884 |
| 0.442 | 7.0 | 196 | 0.4441 |
| 0.4039 | 8.0 | 224 | 0.4159 |
| 0.3866 | 9.0 | 252 | 0.3975 |
| 0.391 | 10.0 | 280 | 0.3869 |
| 0.3549 | 11.0 | 308 | 0.3801 |
| 0.3462 | 12.0 | 336 | 0.3577 |
| 0.3402 | 13.0 | 364 | 0.3519 |
| 0.3357 | 14.0 | 392 | 0.3447 |
| 0.3474 | 15.0 | 420 | 0.3369 |
| 0.3254 | 16.0 | 448 | 0.3386 |
| 0.3033 | 17.0 | 476 | 0.3294 |
| 0.3047 | 18.0 | 504 | 0.3274 |
| 0.3103 | 19.0 | 532 | 0.3209 |
| 0.3067 | 20.0 | 560 | 0.3186 |
| 0.2959 | 21.0 | 588 | 0.3190 |
| 0.2899 | 22.0 | 616 | 0.3147 |
| 0.2872 | 23.0 | 644 | 0.3082 |
| 0.2956 | 24.0 | 672 | 0.3070 |
| 0.2865 | 25.0 | 700 | 0.3072 |
| 0.2947 | 26.0 | 728 | 0.3072 |
| 0.2811 | 27.0 | 756 | 0.3131 |
| 0.2935 | 28.0 | 784 | 0.3069 |
| 0.2814 | 29.0 | 812 | 0.3043 |
| 0.2753 | 30.0 | 840 | 0.2984 |
| 0.2823 | 31.0 | 868 | 0.2995 |
| 0.2962 | 32.0 | 896 | 0.3012 |
| 0.2869 | 33.0 | 924 | 0.3050 |
| 0.2833 | 34.0 | 952 | 0.2960 |
| 0.2892 | 35.0 | 980 | 0.3039 |
| 0.2764 | 36.0 | 1008 | 0.3010 |
| 0.2807 | 37.0 | 1036 | 0.2998 |
| 0.2843 | 38.0 | 1064 | 0.2989 |
| 0.2808 | 39.0 | 1092 | 0.2970 |
| 0.2862 | 40.0 | 1120 | 0.2940 |
| 0.2601 | 41.0 | 1148 | 0.2952 |
| 0.2742 | 42.0 | 1176 | 0.2940 |
| 0.2791 | 43.0 | 1204 | 0.2997 |
| 0.2759 | 44.0 | 1232 | 0.2951 |
| 0.2819 | 45.0 | 1260 | 0.2896 |
| 0.287 | 46.0 | 1288 | 0.2938 |
| 0.2711 | 47.0 | 1316 | 0.2973 |
| 0.2782 | 48.0 | 1344 | 0.2946 |
| 0.2674 | 49.0 | 1372 | 0.2913 |
| 0.268 | 50.0 | 1400 | 0.2944 |
| 0.2624 | 51.0 | 1428 | 0.2940 |
| 0.2842 | 52.0 | 1456 | 0.2978 |
| 0.2753 | 53.0 | 1484 | 0.2951 |
| 0.2733 | 54.0 | 1512 | 0.2880 |
| 0.2782 | 55.0 | 1540 | 0.2969 |
| 0.2789 | 56.0 | 1568 | 0.2919 |
| 0.2815 | 57.0 | 1596 | 0.2916 |
| 0.2629 | 58.0 | 1624 | 0.2947 |
| 0.2716 | 59.0 | 1652 | 0.2828 |
| 0.2623 | 60.0 | 1680 | 0.2924 |
| 0.2773 | 61.0 | 1708 | 0.2765 |
| 0.268 | 62.0 | 1736 | 0.2754 |
| 0.2839 | 63.0 | 1764 | 0.2744 |
| 0.2684 | 64.0 | 1792 | 0.2744 |
| 0.2865 | 65.0 | 1820 | 0.2716 |
| 0.2845 | 66.0 | 1848 | 0.2769 |
| 0.2663 | 67.0 | 1876 | 0.2754 |
| 0.269 | 68.0 | 1904 | 0.2737 |
| 0.2681 | 69.0 | 1932 | 0.2697 |
| 0.2748 | 70.0 | 1960 | 0.2779 |
| 0.2769 | 71.0 | 1988 | 0.2728 |
| 0.2805 | 72.0 | 2016 | 0.2729 |
| 0.2771 | 73.0 | 2044 | 0.2728 |
| 0.2717 | 74.0 | 2072 | 0.2749 |
| 0.267 | 75.0 | 2100 | 0.2732 |
| 0.2812 | 76.0 | 2128 | 0.2743 |
| 0.2749 | 77.0 | 2156 | 0.2739 |
| 0.2746 | 78.0 | 2184 | 0.2730 |
| 0.2707 | 79.0 | 2212 | 0.2743 |
| 0.2644 | 80.0 | 2240 | 0.2740 |
| 0.2691 | 81.0 | 2268 | 0.2727 |
| 0.2679 | 82.0 | 2296 | 0.2771 |
| 0.2748 | 83.0 | 2324 | 0.2744 |
| 0.2744 | 84.0 | 2352 | 0.2703 |
| 0.2715 | 85.0 | 2380 | 0.2733 |
| 0.2682 | 86.0 | 2408 | 0.2715 |
| 0.2641 | 87.0 | 2436 | 0.2722 |
| 0.274 | 88.0 | 2464 | 0.2748 |
| 0.2669 | 89.0 | 2492 | 0.2753 |
| 0.2707 | 90.0 | 2520 | 0.2724 |
| 0.2755 | 91.0 | 2548 | 0.2703 |
| 0.2769 | 92.0 | 2576 | 0.2737 |
| 0.2659 | 93.0 | 2604 | 0.2721 |
| 0.2674 | 94.0 | 2632 | 0.2763 |
| 0.2723 | 95.0 | 2660 | 0.2723 |
| 0.2723 | 96.0 | 2688 | 0.2744 |
| 0.272 | 97.0 | 2716 | 0.2686 |
| 0.27 | 98.0 | 2744 | 0.2728 |
| 0.2721 | 99.0 | 2772 | 0.2743 |
| 0.2692 | 100.0 | 2800 | 0.2748 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
davanstrien/convnext-tiny-224-leicester_binary
|
davanstrien
| 2023-09-11T13:43:16Z | 190 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"convnext",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:facebook/convnext-tiny-224",
"base_model:finetune:facebook/convnext-tiny-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-06T16:45:11Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
base_model: facebook/convnext-tiny-224
model-index:
- name: convnext-tiny-224-leicester_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-leicester_binary
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the davanstrien/leicester_loaded_annotations_binary dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4213
- Precision: 0.4583
- Recall: 0.5
- F1: 0.4783
- Accuracy: 0.9167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 7 | 0.4213 | 0.4583 | 0.5 | 0.4783 | 0.9167 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
davanstrien/convnext-small-224-leicester_binary
|
davanstrien
| 2023-09-11T13:43:10Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:facebook/convnext-small-224",
"base_model:finetune:facebook/convnext-small-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-06T16:56:52Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- f1
base_model: facebook/convnext-small-224
model-index:
- name: convnext-small-224-leicester_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-small-224-leicester_binary
This model is a fine-tuned version of [facebook/convnext-small-224](https://huggingface.co/facebook/convnext-small-224) on the davanstrien/leicester_loaded_annotations_binary dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1283
- F1: 0.9620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 7 | 0.5143 | 0.8608 |
| 0.5872 | 2.0 | 14 | 0.4215 | 0.8608 |
| 0.3903 | 3.0 | 21 | 0.4127 | 0.8608 |
| 0.3903 | 4.0 | 28 | 0.3605 | 0.8608 |
| 0.3163 | 5.0 | 35 | 0.3152 | 0.8608 |
| 0.2942 | 6.0 | 42 | 0.2942 | 0.8608 |
| 0.2942 | 7.0 | 49 | 0.2669 | 0.8608 |
| 0.2755 | 8.0 | 56 | 0.2316 | 0.8608 |
| 0.2281 | 9.0 | 63 | 0.2104 | 0.8608 |
| 0.2076 | 10.0 | 70 | 0.1938 | 0.8608 |
| 0.2076 | 11.0 | 77 | 0.1803 | 0.8608 |
| 0.1832 | 12.0 | 84 | 0.1704 | 0.8608 |
| 0.1758 | 13.0 | 91 | 0.1650 | 0.8608 |
| 0.1758 | 14.0 | 98 | 0.1714 | 0.8608 |
| 0.167 | 15.0 | 105 | 0.1575 | 0.8608 |
| 0.1519 | 16.0 | 112 | 0.1549 | 0.8608 |
| 0.1519 | 17.0 | 119 | 0.1705 | 0.8608 |
| 0.1422 | 18.0 | 126 | 0.1478 | 0.8608 |
| 0.1444 | 19.0 | 133 | 0.1437 | 0.8608 |
| 0.1396 | 20.0 | 140 | 0.1398 | 0.8608 |
| 0.1396 | 21.0 | 147 | 0.1351 | 0.8608 |
| 0.1293 | 22.0 | 154 | 0.1370 | 0.8987 |
| 0.1361 | 23.0 | 161 | 0.1335 | 0.8987 |
| 0.1361 | 24.0 | 168 | 0.1311 | 0.9367 |
| 0.1246 | 25.0 | 175 | 0.1289 | 0.9620 |
| 0.1211 | 26.0 | 182 | 0.1283 | 0.9620 |
| 0.1211 | 27.0 | 189 | 0.1294 | 0.9620 |
| 0.1182 | 28.0 | 196 | 0.1306 | 0.9620 |
| 0.1172 | 29.0 | 203 | 0.1312 | 0.9620 |
| 0.1102 | 30.0 | 210 | 0.1318 | 0.9620 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
davanstrien/autotrain-dataset-mentions-3390592983
|
davanstrien
| 2023-09-11T13:42:56Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain",
"en",
"dataset:davanstrien/autotrain-data-dataset-mentions",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-10T11:19:48Z |
---
language:
- en
tags:
- autotrain
- text-classification
datasets:
- davanstrien/autotrain-data-dataset-mentions
widget:
- text: ' frases-bertimbau-v0.4 This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased)
on an unknown dataset.'
- text: Model description BERTa is a transformer-based masked language model for the
Catalan language. It is based on the [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta)
base model and has been trained on a medium-size corpus collected from publicly
available corpora and crawlers
- text: Model description More information needed
co2_eq_emissions:
emissions: 0.008999666562870793
base_model: neuralmind/bert-base-portuguese-cased
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 3390592983
- CO2 Emissions (in grams): 0.0090
## Validation Metrics
- Loss: 0.014
- Accuracy: 0.997
- Precision: 0.998
- Recall: 0.997
- AUC: 1.000
- F1: 0.998
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-dataset-mentions-3390592983
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classifier and its tokenizer from the Hub.
model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-dataset-mentions-3390592983", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-dataset-mentions-3390592983", use_auth_token=True)

# Tokenize an example and run a forward pass; outputs.logits holds the class scores.
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
kartiks26/Llama2-7B
|
kartiks26
| 2023-09-11T13:41:59Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-11T13:39:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
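For reference, the settings above correspond to a `BitsAndBytesConfig` roughly like the sketch below; this is an illustrative reconstruction, not code taken from this repository's training script.
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute,
# mirroring the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```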
### Framework versions
- PEFT 0.5.0
|
Zekrom997/image_classification
|
Zekrom997
| 2023-09-11T13:38:55Z | 216 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-11T13:10:30Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.883
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6302
- Accuracy: 0.883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7166 | 0.99 | 62 | 2.5345 | 0.842 |
| 1.7982 | 2.0 | 125 | 1.7848 | 0.876 |
| 1.5772 | 2.98 | 186 | 1.6252 | 0.894 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
HiTZ/A2T_RoBERTa_SMFA_WikiEvents-arg_ACE-arg
|
HiTZ
| 2023-09-11T13:36:11Z | 114 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"zero-shot-classification",
"dataset:snli",
"dataset:anli",
"dataset:multi_nli",
"dataset:multi_nli_mismatch",
"dataset:fever",
"arxiv:2104.14690",
"arxiv:2203.13602",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-05-02T12:08:43Z |
---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library but are also fully compatible with the `ZeroShotTextClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained with NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() or/and ANLI [(Nie et al., 2020)]() and then fine-tuned to specific tasks that were previously converted to textual entailment format.
For more information please, take a look to the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Stanford Natural Language Inference (SNLI) dataset.
- `M`: Multi Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine-tuning the entailment model. Note that for more than one dataset the training was performed sequentially. For example: ACE-arg.
Some models like `HiTZ/A2T_RoBERTa_SMFA_ACE-arg` have been trained marking some information between square brackets (`'[['` and `']]'`) like the event trigger span. Make sure you follow the same preprocessing in order to obtain the best results.
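As a rough illustration of the pipeline compatibility mentioned above, the model can be loaded as a zero-shot classifier; the sentence, labels and hypothesis template below are invented for the example and are not the verbalizations used in the papers.
```python
from transformers import pipeline

# Load the entailment checkpoint as a zero-shot text classifier.
classifier = pipeline(
    "zero-shot-classification",
    model="HiTZ/A2T_RoBERTa_SMFA_WikiEvents-arg_ACE-arg",
)

# Mark the event trigger with [[ ]] as described above (hypothetical example).
sentence = "The company [[acquired]] the startup for 2 billion dollars."
labels = ["buyer", "seller", "artifact"]  # hypothetical argument roles
result = classifier(
    sentence,
    candidate_labels=labels,
    hypothesis_template="This example is about the {}.",
)
print(result["labels"][0], result["scores"][0])
```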
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
```
|
bigmorning/whisper_4_with_init_sun_syl_wd_0__0085
|
bigmorning
| 2023-09-11T13:34:24Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-11T13:34:17Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_syl_wd_0__0085
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_syl_wd_0__0085
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2122
- Train Accuracy: 0.0345
- Train Wermet: 0.0284
- Train Wermet Syl: 0.0346
- Validation Loss: 1.2518
- Validation Accuracy: 0.0208
- Validation Wermet: 0.3241
- Validation Wermet Syl: 0.2884
- Epoch: 84
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 5.3409 | 0.0111 | 1.3547 | 1.2898 | 3.9789 | 0.0114 | 0.9710 | 0.9563 | 0 |
| 4.7143 | 0.0116 | 0.8622 | 0.8228 | 3.9404 | 0.0113 | 0.9823 | 0.9735 | 1 |
| 4.6752 | 0.0117 | 0.8472 | 0.8057 | 3.9081 | 0.0114 | 0.9579 | 0.9359 | 2 |
| 4.6500 | 0.0117 | 0.8382 | 0.7945 | 3.8820 | 0.0115 | 0.9213 | 0.8856 | 3 |
| 4.6282 | 0.0118 | 0.8286 | 0.7805 | 3.8738 | 0.0114 | 0.9433 | 0.9119 | 4 |
| 4.6095 | 0.0118 | 0.8190 | 0.7696 | 3.8630 | 0.0115 | 0.9117 | 0.8698 | 5 |
| 4.5875 | 0.0119 | 0.7976 | 0.7465 | 3.8341 | 0.0116 | 0.8976 | 0.8552 | 6 |
| 4.5682 | 0.0120 | 0.7753 | 0.7227 | 3.8277 | 0.0116 | 0.9014 | 0.8653 | 7 |
| 4.5376 | 0.0121 | 0.7528 | 0.7005 | 3.7844 | 0.0118 | 0.8332 | 0.7815 | 8 |
| 4.5060 | 0.0122 | 0.7392 | 0.6844 | 3.7537 | 0.0118 | 0.8578 | 0.8152 | 9 |
| 4.4580 | 0.0124 | 0.7221 | 0.6694 | 3.7038 | 0.0120 | 0.8190 | 0.7679 | 10 |
| 4.3989 | 0.0125 | 0.7156 | 0.6636 | 3.6169 | 0.0122 | 0.7979 | 0.7429 | 11 |
| 4.3056 | 0.0128 | 0.7069 | 0.6557 | 3.5098 | 0.0125 | 0.7924 | 0.7396 | 12 |
| 4.1673 | 0.0132 | 0.7054 | 0.6584 | 3.3542 | 0.0128 | 0.7759 | 0.7240 | 13 |
| 3.9762 | 0.0138 | 0.6987 | 0.6559 | 3.1318 | 0.0133 | 0.7644 | 0.7231 | 14 |
| 3.7385 | 0.0145 | 0.6835 | 0.6448 | 2.9144 | 0.0138 | 0.7392 | 0.6955 | 15 |
| 3.5040 | 0.0152 | 0.6644 | 0.6298 | 2.7413 | 0.0142 | 0.7019 | 0.6548 | 16 |
| 3.2728 | 0.0160 | 0.6408 | 0.6101 | 2.5183 | 0.0149 | 0.6798 | 0.6363 | 17 |
| 3.0657 | 0.0167 | 0.6188 | 0.5912 | 2.3594 | 0.0153 | 0.6528 | 0.6103 | 18 |
| 2.8703 | 0.0174 | 0.5936 | 0.5685 | 2.2644 | 0.0156 | 0.6310 | 0.5925 | 19 |
| 2.6850 | 0.0181 | 0.5680 | 0.5453 | 2.1296 | 0.0160 | 0.6040 | 0.5652 | 20 |
| 2.5227 | 0.0188 | 0.5423 | 0.5215 | 2.0019 | 0.0165 | 0.5793 | 0.5403 | 21 |
| 2.3878 | 0.0194 | 0.5199 | 0.5015 | 1.8996 | 0.0169 | 0.5592 | 0.5229 | 22 |
| 2.2437 | 0.0201 | 0.4959 | 0.4788 | 1.8141 | 0.0172 | 0.5414 | 0.5045 | 23 |
| 2.1205 | 0.0207 | 0.4752 | 0.4607 | 1.7245 | 0.0175 | 0.5208 | 0.4838 | 24 |
| 1.9919 | 0.0213 | 0.4533 | 0.4390 | 1.6673 | 0.0178 | 0.5026 | 0.4659 | 25 |
| 1.9140 | 0.0217 | 0.4355 | 0.4216 | 1.6041 | 0.0181 | 0.4873 | 0.4512 | 26 |
| 1.8225 | 0.0222 | 0.4184 | 0.4052 | 1.6271 | 0.0179 | 0.4852 | 0.4511 | 27 |
| 1.7265 | 0.0227 | 0.4016 | 0.3895 | 1.5219 | 0.0184 | 0.4635 | 0.4275 | 28 |
| 1.6240 | 0.0233 | 0.3833 | 0.3729 | 1.4718 | 0.0186 | 0.4515 | 0.4170 | 29 |
| 1.5610 | 0.0236 | 0.3697 | 0.3588 | 1.4404 | 0.0188 | 0.4407 | 0.4056 | 30 |
| 1.4719 | 0.0242 | 0.3540 | 0.3449 | 1.4125 | 0.0189 | 0.4310 | 0.3961 | 31 |
| 1.4152 | 0.0245 | 0.3421 | 0.3339 | 1.3655 | 0.0191 | 0.4234 | 0.3881 | 32 |
| 1.3546 | 0.0249 | 0.3277 | 0.3195 | 1.3419 | 0.0192 | 0.4156 | 0.3816 | 33 |
| 1.2565 | 0.0256 | 0.3135 | 0.3060 | 1.3172 | 0.0194 | 0.4065 | 0.3722 | 34 |
| 1.2135 | 0.0258 | 0.3026 | 0.2958 | 1.3019 | 0.0194 | 0.4006 | 0.3662 | 35 |
| 1.1739 | 0.0261 | 0.2923 | 0.2861 | 1.3843 | 0.0190 | 0.3951 | 0.3587 | 36 |
| 1.0950 | 0.0267 | 0.2782 | 0.2733 | 1.2665 | 0.0197 | 0.3883 | 0.3541 | 37 |
| 1.0435 | 0.0271 | 0.2673 | 0.2631 | 1.2567 | 0.0197 | 0.3837 | 0.3497 | 38 |
| 0.9922 | 0.0275 | 0.2580 | 0.2542 | 1.2566 | 0.0197 | 0.3801 | 0.3444 | 39 |
| 0.9387 | 0.0279 | 0.2464 | 0.2438 | 1.2441 | 0.0198 | 0.3767 | 0.3423 | 40 |
| 0.9345 | 0.0278 | 0.2393 | 0.2373 | 1.2221 | 0.0199 | 0.3682 | 0.3336 | 41 |
| 0.8574 | 0.0285 | 0.2268 | 0.2255 | 1.2258 | 0.0199 | 0.3680 | 0.3338 | 42 |
| 0.8275 | 0.0287 | 0.2183 | 0.2180 | 1.2044 | 0.0201 | 0.3628 | 0.3290 | 43 |
| 0.8201 | 0.0288 | 0.2114 | 0.2108 | 1.2056 | 0.0201 | 0.3601 | 0.3270 | 44 |
| 0.7684 | 0.0292 | 0.2020 | 0.2029 | 1.1879 | 0.0202 | 0.3553 | 0.3215 | 45 |
| 0.7262 | 0.0295 | 0.1938 | 0.1947 | 1.2263 | 0.0200 | 0.3537 | 0.3177 | 46 |
| 0.7286 | 0.0295 | 0.1876 | 0.1898 | 1.1772 | 0.0203 | 0.3485 | 0.3135 | 47 |
| 0.6807 | 0.0300 | 0.1775 | 0.1797 | 1.1761 | 0.0203 | 0.3490 | 0.3155 | 48 |
| 0.6609 | 0.0301 | 0.1713 | 0.1742 | 1.1853 | 0.0203 | 0.3484 | 0.3153 | 49 |
| 0.6062 | 0.0306 | 0.1615 | 0.1653 | 1.1660 | 0.0204 | 0.3432 | 0.3090 | 50 |
| 0.5755 | 0.0309 | 0.1547 | 0.1584 | 1.1698 | 0.0204 | 0.3428 | 0.3089 | 51 |
| 0.5600 | 0.0310 | 0.1482 | 0.1524 | 1.1667 | 0.0204 | 0.3398 | 0.3058 | 52 |
| 0.5715 | 0.0308 | 0.1449 | 0.1496 | 1.1614 | 0.0205 | 0.3381 | 0.3036 | 53 |
| 0.5247 | 0.0313 | 0.1358 | 0.1411 | 1.1639 | 0.0205 | 0.3359 | 0.3025 | 54 |
| 0.5085 | 0.0315 | 0.1301 | 0.1358 | 1.2420 | 0.0202 | 0.3412 | 0.3064 | 55 |
| 0.4827 | 0.0317 | 0.1239 | 0.1295 | 1.1677 | 0.0205 | 0.3349 | 0.3009 | 56 |
| 0.4848 | 0.0317 | 0.1207 | 0.1280 | 1.1653 | 0.0205 | 0.3326 | 0.2991 | 57 |
| 0.4323 | 0.0322 | 0.1109 | 0.1185 | 1.1602 | 0.0206 | 0.3299 | 0.2953 | 58 |
| 0.4183 | 0.0323 | 0.1057 | 0.1133 | 1.1622 | 0.0206 | 0.3307 | 0.2962 | 59 |
| 0.4329 | 0.0322 | 0.1028 | 0.1100 | 1.1714 | 0.0206 | 0.3300 | 0.2950 | 60 |
| 0.3962 | 0.0326 | 0.0964 | 0.1045 | 1.1726 | 0.0206 | 0.3311 | 0.2967 | 61 |
| 0.3642 | 0.0329 | 0.0898 | 0.0973 | 1.1699 | 0.0206 | 0.3289 | 0.2936 | 62 |
| 0.3786 | 0.0327 | 0.0884 | 0.0963 | 1.1734 | 0.0206 | 0.3279 | 0.2929 | 63 |
| 0.3698 | 0.0328 | 0.0842 | 0.0925 | 1.1728 | 0.0207 | 0.3282 | 0.2932 | 64 |
| 0.3219 | 0.0333 | 0.0765 | 0.0850 | 1.1830 | 0.0207 | 0.3258 | 0.2907 | 65 |
| 0.3035 | 0.0335 | 0.0725 | 0.0811 | 1.1840 | 0.0207 | 0.3261 | 0.2904 | 66 |
| 0.3522 | 0.0330 | 0.0745 | 0.0826 | 1.2107 | 0.0206 | 0.3299 | 0.2955 | 67 |
| 0.3001 | 0.0335 | 0.0663 | 0.0749 | 1.1810 | 0.0207 | 0.3264 | 0.2909 | 68 |
| 0.2729 | 0.0338 | 0.0595 | 0.0677 | 1.1911 | 0.0207 | 0.3247 | 0.2886 | 69 |
| 0.2696 | 0.0338 | 0.0572 | 0.0654 | 1.1950 | 0.0207 | 0.3260 | 0.2905 | 70 |
| 0.2840 | 0.0337 | 0.0563 | 0.0648 | 1.2094 | 0.0207 | 0.3250 | 0.2887 | 71 |
| 0.2319 | 0.0342 | 0.0484 | 0.0569 | 1.2107 | 0.0207 | 0.3250 | 0.2878 | 72 |
| 0.2371 | 0.0342 | 0.0464 | 0.0541 | 1.2059 | 0.0207 | 0.3240 | 0.2880 | 73 |
| 0.2666 | 0.0338 | 0.0486 | 0.0575 | 1.2036 | 0.0207 | 0.3241 | 0.2887 | 74 |
| 0.2443 | 0.0340 | 0.0442 | 0.0522 | 1.2106 | 0.0207 | 0.3241 | 0.2877 | 75 |
| 0.2118 | 0.0344 | 0.0380 | 0.0456 | 1.2172 | 0.0207 | 0.3240 | 0.2871 | 76 |
| 0.1997 | 0.0346 | 0.0354 | 0.0428 | 1.2247 | 0.0208 | 0.3219 | 0.2852 | 77 |
| 0.2461 | 0.0341 | 0.0386 | 0.0466 | 1.2257 | 0.0207 | 0.3240 | 0.2874 | 78 |
| 0.2367 | 0.0342 | 0.0364 | 0.0431 | 1.2173 | 0.0208 | 0.3234 | 0.2870 | 79 |
| 0.1857 | 0.0347 | 0.0294 | 0.0365 | 1.2287 | 0.0208 | 0.3244 | 0.2876 | 80 |
| 0.1504 | 0.0351 | 0.0244 | 0.0314 | 1.2425 | 0.0207 | 0.3238 | 0.2871 | 81 |
| 0.1438 | 0.0352 | 0.0227 | 0.0287 | 1.2495 | 0.0208 | 0.3222 | 0.2861 | 82 |
| 0.1545 | 0.0350 | 0.0232 | 0.0288 | 1.2612 | 0.0207 | 0.3257 | 0.2898 | 83 |
| 0.2122 | 0.0345 | 0.0284 | 0.0346 | 1.2518 | 0.0208 | 0.3241 | 0.2884 | 84 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
ixa-ehu/roberta-eus-cc100-base-cased
|
ixa-ehu
| 2023-09-11T13:33:41Z | 112 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"basque",
"eu",
"arxiv:2203.08111",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-16T09:47:37Z |
---
language: eu
license: cc-by-nc-4.0
tags:
- basque
- roberta
---
# Roberta-eus cc100 base cased
This is a RoBERTa model for Basque model presented in [Does corpus quality really matter for low-resource languages?](https://arxiv.org/abs/2203.08111). There are several models for Basque using the RoBERTa architecture, using different corpora:
- roberta-eus-euscrawl-base-cased: Basque RoBERTa model trained on Euscrawl, a corpus created using tailored crawling from Basque sites. EusCrawl contains 12,528k documents and 423M tokens.
- roberta-eus-euscrawl-large-cased: RoBERTa large trained on EusCrawl.
- roberta-eus-mC4-base-cased: Basque RoBERTa model trained on the Basque portion of the mC4 dataset.
- roberta-eus-CC100-base-cased: Basque RoBERTa model trained on the Basque portion of the CC100 dataset.
The models have been tested on five different downstream tasks for Basque: Topic classification, Sentiment analysis, Stance detection, Named Entity Recognition (NER), and Question Answering (refer to the [paper](https://arxiv.org/abs/2203.08111) for more details). See summary of results below:
| Model | Topic class. | Sentiment | Stance det. | NER | QA | Average |
|----------------------------------|--------------|-----------|-------------|----------|----------|----------|
| roberta-eus-euscrawl-base-cased | 76.2 | 77.7 | 57.4 | 86.8 | 34.6 | 66.5 |
| roberta-eus-euscrawl-large-cased | **77.6** | 78.8 | 62.9 | **87.2** | **38.3** | **69.0** |
| roberta-eus-mC4-base-cased | 75.3 | **80.4** | 59.1 | 86.0 | 35.2 | 67.2 |
| roberta-eus-CC100-base-cased | 76.2 | 78.8 | **63.4** | 85.2 | 35.8 | 67.9 |
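As a quick usage sketch (not part of the original card), the cc100 checkpoint can be queried with the fill-mask pipeline; the Basque example sentence is only illustrative.
```python
from transformers import pipeline

# Masked-token prediction with the Basque CC100 checkpoint.
unmasker = pipeline("fill-mask", model="ixa-ehu/roberta-eus-cc100-base-cased")

# RoBERTa-style models use <mask> as the mask token.
for prediction in unmasker("Euskal Herriko hiriburua <mask> da."):
    print(prediction["token_str"], round(prediction["score"], 3))
```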
If you use any of these models, please cite the following paper:
```
@misc{artetxe2022euscrawl,
title={Does corpus quality really matter for low-resource languages?},
author={Mikel Artetxe and Itziar Aldabe and Rodrigo Agerri and
Olatz Perez-de-Viñaspre and Aitor Soroa},
year={2022},
eprint={2203.08111},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
kensvin/audio_classification
|
kensvin
| 2023-09-11T13:31:00Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-11T13:27:41Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: audio_classification
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.07079646017699115
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio_classification
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6513
- Accuracy: 0.0708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6439 | 0.0531 |
| No log | 1.87 | 7 | 2.6446 | 0.0708 |
| 2.6349 | 2.93 | 11 | 2.6484 | 0.0885 |
| 2.6349 | 4.0 | 15 | 2.6497 | 0.0885 |
| 2.6349 | 4.8 | 18 | 2.6509 | 0.0796 |
| 2.6233 | 5.87 | 22 | 2.6513 | 0.0708 |
| 2.6233 | 6.93 | 26 | 2.6515 | 0.0708 |
| 2.612 | 8.0 | 30 | 2.6513 | 0.0708 |
### Framework versions
- Transformers 4.33.1
- Pytorch 1.13.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
sanchit-gandhi/whisper-medium-fleurs-lang-id
|
sanchit-gandhi
| 2023-09-11T13:25:16Z | 128,294 | 14 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:xtreme_s",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-02-23T13:37:22Z |
---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- accuracy
base_model: openai/whisper-medium
model-index:
- name: whisper-medium-fleurs-lang-id
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium FLEURS Language Identification
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the [FLEURS subset](https://huggingface.co/datasets/google/xtreme_s#language-identification---fleurs-langid) of the [google/xtreme_s](https://huggingface.co/google/xtreme_s) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8413
- Accuracy: 0.8805
To reproduce this run, execute the command in [`run.sh`](https://huggingface.co/sanchit-gandhi/whisper-medium-fleurs-lang-id/blob/main/run.sh).
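For a quick sanity check outside the training script, the checkpoint can also be called through the audio-classification pipeline — a minimal sketch assuming a local speech clip; the file name is a placeholder.
```python
from transformers import pipeline

# Language identification framed as audio classification.
lang_id = pipeline(
    "audio-classification",
    model="sanchit-gandhi/whisper-medium-fleurs-lang-id",
)

# "sample.wav" is a placeholder; pass any speech recording.
print(lang_id("sample.wav", top_k=5))
```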
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 0
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0152 | 1.0 | 8494 | 0.9087 | 0.8431 |
| 0.0003 | 2.0 | 16988 | 1.0059 | 0.8460 |
| 0.0 | 3.0 | 25482 | 0.8413 | 0.8805 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
SCUT-DLVCLab/lilt-infoxlm-base
|
SCUT-DLVCLab
| 2023-09-11T13:20:42Z | 828 | 5 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"lilt",
"feature-extraction",
"vision",
"arxiv:2202.13669",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-10-10T14:19:02Z |
---
license: mit
tags:
- vision
---
# LiLT-InfoXLM (base-sized model)
Language-Independent Layout Transformer - InfoXLM model, obtained by stitching a pre-trained InfoXLM and a pre-trained Language-Independent Layout Transformer (LiLT) together. It was introduced in the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Wang et al. and first released in [this repository](https://github.com/jpwang/lilt).
Disclaimer: The team releasing LiLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Language-Independent Layout Transformer (LiLT) allows combining any pre-trained RoBERTa encoder from the hub (hence, in any language) with a lightweight Layout Transformer to obtain a LayoutLM-like model for any language.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/lilt_architecture.jpg" alt="drawing" width="600"/>
## Intended uses & limitations
The model is meant to be fine-tuned on tasks like document image classification, document parsing and document QA. See the [model hub](https://huggingface.co/models?search=lilt) to look for fine-tuned versions on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/lilt.html).
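As a rough starting point while consulting the documentation above, the checkpoint loads through the Auto classes; the words and bounding boxes below are dummy values (boxes are expected on a 0-1000 scale), so treat this as a sketch rather than a full preprocessing pipeline.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-infoxlm-base")
model = AutoModel.from_pretrained("SCUT-DLVCLab/lilt-infoxlm-base")

# Dummy input: a short text plus one normalized (0-1000) box per token.
encoding = tokenizer("Hello world", return_tensors="pt")
seq_len = encoding.input_ids.shape[1]
bbox = torch.tensor([[[50, 60, 200, 80]] * seq_len])  # placeholder layout

outputs = model(
    input_ids=encoding.input_ids,
    attention_mask=encoding.attention_mask,
    bbox=bbox,
)
print(outputs.last_hidden_state.shape)
```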
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2202.13669,
doi = {10.48550/ARXIV.2202.13669},
url = {https://arxiv.org/abs/2202.13669},
author = {Wang, Jiapeng and Jin, Lianwen and Ding, Kai},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
clp/llama2-qlora-finetunined-french
|
clp
| 2023-09-11T13:20:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-11T13:20:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
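A loading sketch for this adapter is given below; the base checkpoint name is an assumption (the card does not say which Llama-2 variant was fine-tuned), so replace it with the correct one.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base model -- not stated in this card.
base_id = "meta-llama/Llama-2-7b-hf"

base = AutoModelForCausalLM.from_pretrained(base_id, load_in_4bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the QLoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base, "clp/llama2-qlora-finetunined-french")
```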
### Framework versions
- PEFT 0.6.0.dev0
|
tum-nlp/text2food
|
tum-nlp
| 2023-09-11T13:20:31Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-09-11T13:03:37Z |
---
license: openrail
---
This repository contains the LoRA weights needed to generate high-quality food images from text. All the details and code can be found [here](https://github.com/yusufani/text2food/tree/main).
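One possible way to apply the weights with `diffusers` is sketched below; the base Stable Diffusion checkpoint is an assumption here, and the linked repository documents the exact setup.
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base checkpoint -- see the linked repository for the actual setup.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the food LoRA weights from this repository on top of the base pipeline.
pipe.load_lora_weights("tum-nlp/text2food")

image = pipe("a plate of freshly made ravioli with sage butter").images[0]
image.save("ravioli.png")
```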
|