modelId (string, 4-81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars)
---|---|---|---|---|---|---|
Davlan/mt5_base_eng_yor_mt | [
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: ALP-GMM_SAC_fish_s44
results:
- metrics:
- type: mean_reward
value: 268.93 +/- 94.84
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds, along with the standard deviation, for each morphology, as well as the aggregation of all 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'ALP-GMM',
 'morphology': 'fish'}
```
|
Declan/CNN_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: GoalGAN_SAC_bipedal_s1
results:
- metrics:
- type: mean_reward
value: 252.41 +/- 126.06
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds, along with the standard deviation, for each morphology, as well as the aggregation of all 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'GoalGAN',
 'morphology': 'old_classic_bipedal'}
```
|
Declan/NPR_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: Random_SAC_bipedal_s5
results:
- metrics:
- type: mean_reward
value: 188.35 +/- 145.54
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds, along with the standard deviation, for each morphology, as well as the aggregation of all 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'Random',
 'morphology': 'old_classic_bipedal'}
```
|
Declan/NPR_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: Random_SAC_bipedal_s15
results:
- metrics:
- type: mean_reward
value: 181.30 +/- 140.51
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds, along with the standard deviation, for each morphology, as well as the aggregation of all 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'Random',
 'morphology': 'old_classic_bipedal'}
```
|
Declan/NewYorkTimes_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 194.68 +/- 76.58
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub
repo_id = "yogeshkulkarni/ppo-LunarLander-v2" # The repo_id
filename = "ppo-LunarLander-v2.zip" # The model filename.zip
# When the model was trained with Python 3.8, the pickle protocol is 5,
# but Python 3.6 and 3.7 use protocol 4.
# In order to get compatibility we need to:
# 1. Install pickle5 (done at the beginning of the colab)
# 2. Create custom objects that we pass as a parameter to PPO.load()
custom_objects = {
"learning_rate": 0.0,
"lr_schedule": lambda _: 0.0,
"clip_range": lambda _: 0.0,
}
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)
# Evaluate this model
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```
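Beyond the aggregate evaluation above, a short rollout loop lets you watch the agent episode by episode. The sketch below is illustrative: it reuses the `model` and the classic Gym API from the snippet above, and the episode count and the `render()` call are arbitrary choices.
```python
# Minimal sketch: roll the trained agent out for a few episodes and render them.
env = gym.make("LunarLander-v2")
for episode in range(3):  # episode count is arbitrary
    obs = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        action, _states = model.predict(obs, deterministic=True)
        obs, reward, done, info = env.step(action)
        total_reward += reward
        env.render()  # opens a window; skip this line on headless machines
    print(f"episode {episode}: reward={total_reward:.2f}")
env.close()
```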
|
Declan/NewYorkTimes_model_v3 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v2_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v2_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article500v2_wikigold_split
type: article500v2_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7113220815752461
- name: Recall
type: recall
value: 0.7526041666666666
- name: F1
type: f1
value: 0.7313810556760665
- name: Accuracy
type: accuracy
value: 0.9410548086866598
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_500v2_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v2_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2086
- Precision: 0.7113
- Recall: 0.7526
- F1: 0.7314
- Accuracy: 0.9411
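As a quick sanity check, the fine-tuned checkpoint can be loaded with the `transformers` token-classification pipeline. The sketch below assumes the checkpoint is hosted under a placeholder repository id, and the example sentence is purely illustrative.
```python
from transformers import pipeline

# Placeholder repo id: replace with the actual location of this checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/Article_500v2_NER_Model_3Epochs_AUGMENTED",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

print(ner("Barack Obama visited the headquarters of Google in California."))
```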
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 185 | 0.1795 | 0.6982 | 0.7530 | 0.7245 | 0.9412 |
| No log | 2.0 | 370 | 0.2018 | 0.7218 | 0.7537 | 0.7374 | 0.9403 |
| 0.1342 | 3.0 | 555 | 0.2086 | 0.7113 | 0.7526 | 0.7314 | 0.9411 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Declan/Politico_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v3_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v3_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article500v3_wikigold_split
type: article500v3_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7293136626042335
- name: Recall
type: recall
value: 0.7574950033311126
- name: F1
type: f1
value: 0.7431372549019608
- name: Accuracy
type: accuracy
value: 0.9403332402494647
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_500v3_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v3_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2187
- Precision: 0.7293
- Recall: 0.7575
- F1: 0.7431
- Accuracy: 0.9403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
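As a rough guide, the hyperparameters listed above map onto a `TrainingArguments` object along the lines of the sketch below; the output directory is a placeholder and this is not the exact training script.
```python
from transformers import TrainingArguments

# Sketch of the configuration listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="Article_500v3_NER_Model_3Epochs_AUGMENTED",
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```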
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 187 | 0.2080 | 0.6933 | 0.7109 | 0.7020 | 0.9363 |
| No log | 2.0 | 374 | 0.2159 | 0.7244 | 0.7338 | 0.7291 | 0.9379 |
| 0.1349 | 3.0 | 561 | 0.2187 | 0.7293 | 0.7575 | 0.7431 | 0.9403 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Declan/Politico_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v5_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v5_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article500v5_wikigold_split
type: article500v5_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7302452316076294
- name: Recall
type: recall
value: 0.7657142857142857
- name: F1
type: f1
value: 0.7475592747559274
- name: Accuracy
type: accuracy
value: 0.9453822040028936
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_500v5_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v5_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1848
- Precision: 0.7302
- Recall: 0.7657
- F1: 0.7476
- Accuracy: 0.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 172 | 0.1781 | 0.7013 | 0.7396 | 0.7200 | 0.9403 |
| No log | 2.0 | 344 | 0.1904 | 0.7203 | 0.7421 | 0.7310 | 0.9396 |
| 0.1436 | 3.0 | 516 | 0.1848 | 0.7302 | 0.7657 | 0.7476 | 0.9454 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Declan/Politico_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v6_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v6_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article500v6_wikigold_split
type: article500v6_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7276069518716578
- name: Recall
type: recall
value: 0.7654711673699015
- name: F1
type: f1
value: 0.7460589444825222
- name: Accuracy
type: accuracy
value: 0.944971237119919
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_500v6_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v6_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2052
- Precision: 0.7276
- Recall: 0.7655
- F1: 0.7461
- Accuracy: 0.9450
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 209 | 0.1846 | 0.7211 | 0.7472 | 0.7339 | 0.9434 |
| No log | 2.0 | 418 | 0.2111 | 0.7114 | 0.7384 | 0.7246 | 0.9410 |
| 0.1368 | 3.0 | 627 | 0.2052 | 0.7276 | 0.7655 | 0.7461 | 0.9450 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Declan/Politico_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: sentiment-10Epochs-2-work-please
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-10Epochs-2-work-please
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7450
- Accuracy: 0.8549
- F1: 0.8516
- Precision: 0.8714
- Recall: 0.8327
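For inference, the checkpoint can be loaded with the text-classification pipeline; the repository id below is a placeholder and the example sentence is illustrative (the label names returned depend on how the labels were configured during fine-tuning).
```python
from transformers import pipeline

# Placeholder repo id: replace with the actual location of this checkpoint.
classifier = pipeline(
    "text-classification",
    model="your-username/sentiment-10Epochs-2-work-please",
)

print(classifier("I really enjoyed this film, the acting was superb."))
```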
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3685 | 1.0 | 7088 | 0.4334 | 0.8590 | 0.8463 | 0.9304 | 0.7762 |
| 0.3721 | 2.0 | 14176 | 0.3822 | 0.8673 | 0.8575 | 0.9257 | 0.7987 |
| 0.3393 | 3.0 | 21264 | 0.4634 | 0.8705 | 0.8619 | 0.9228 | 0.8086 |
| 0.3017 | 4.0 | 28352 | 0.4806 | 0.8708 | 0.8630 | 0.9186 | 0.8137 |
| 0.3072 | 5.0 | 35440 | 0.4509 | 0.87 | 0.8648 | 0.9009 | 0.8314 |
| 0.2833 | 6.0 | 42528 | 0.5339 | 0.8627 | 0.8581 | 0.8879 | 0.8302 |
| 0.2633 | 7.0 | 49616 | 0.5457 | 0.8637 | 0.8614 | 0.8759 | 0.8473 |
| 0.2418 | 8.0 | 56704 | 0.6408 | 0.8589 | 0.8563 | 0.8722 | 0.8410 |
| 0.1999 | 9.0 | 63792 | 0.7404 | 0.8530 | 0.8485 | 0.8752 | 0.8235 |
| 0.1809 | 10.0 | 70880 | 0.7450 | 0.8549 | 0.8516 | 0.8714 | 0.8327 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Declan/Politico_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: train
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8615332274892267
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1375
- F1: 0.8615
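The F1 reported here is an entity-level score of the kind `seqeval` computes in the standard token-classification examples; the sketch below shows that call on toy tag sequences (the sequences themselves are made up purely for illustration).
```python
from seqeval.metrics import f1_score

# Toy gold/predicted tag sequences, only to illustrate the entity-level F1 call.
y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "I-ORG"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "O"]]

print(f1_score(y_true, y_pred))  # entity-level F1 over the two toy sentences
```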
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 525 | 0.1795 | 0.8092 |
| No log | 2.0 | 1050 | 0.1360 | 0.8490 |
| No log | 3.0 | 1575 | 0.1375 | 0.8615 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.13.0.dev20220808
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Declan/Reuters_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v7_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v7_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article500v7_wikigold_split
type: article500v7_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7235232436211115
- name: Recall
type: recall
value: 0.7613093048915043
- name: F1
type: f1
value: 0.7419354838709679
- name: Accuracy
type: accuracy
value: 0.9419641450581304
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_500v7_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v7_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1961
- Precision: 0.7235
- Recall: 0.7613
- F1: 0.7419
- Accuracy: 0.9420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 162 | 0.1924 | 0.6942 | 0.7087 | 0.7014 | 0.9358 |
| No log | 2.0 | 324 | 0.1958 | 0.7165 | 0.7540 | 0.7348 | 0.9403 |
| No log | 3.0 | 486 | 0.1961 | 0.7235 | 0.7613 | 0.7419 | 0.9420 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Declan/Reuters_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
*This policy was not part of TeachMyAgent's benchmark.*
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds, along with the standard deviation, for each morphology, as well as the aggregation of all 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'ALP-GMM',
 'morphology': 'spider'}
```
|
Declan/Reuters_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- article500v8_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Article_500v8_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: article500v8_wikigold_split
type: article500v8_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7349189934505344
- name: Recall
type: recall
value: 0.7560283687943262
- name: F1
type: f1
value: 0.7453242440132843
- name: Accuracy
type: accuracy
value: 0.9421215763172877
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_500v8_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2113
- Precision: 0.7349
- Recall: 0.7560
- F1: 0.7453
- Accuracy: 0.9421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 191 | 0.1914 | 0.7105 | 0.7181 | 0.7143 | 0.9382 |
| No log | 2.0 | 382 | 0.2045 | 0.7283 | 0.7574 | 0.7426 | 0.9408 |
| 0.1441 | 3.0 | 573 | 0.2113 | 0.7349 | 0.7560 | 0.7453 | 0.9421 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Declan/Reuters_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- bg
- mk
- multilingual
license: cc0-1.0
tags:
- BERTovski
- MaCoCu
---
# Model description
**XLMR-BERTovski** is a large pre-trained language model trained on Bulgarian and Macedonian texts. It was created by continuing training from the [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) model. It was developed as part of the [MaCoCu](https://macocu.eu/) project. The main developer is [Rik van Noord](https://www.rikvannoord.nl/) from the University of Groningen.
XLMR-BERTovski was trained on 74GB of Bulgarian and Macedonian text, equal to just over 7 billion tokens. It was trained for 67,500 steps with a batch size of 1,024, approximately 2.5 epochs. It uses the same vocabulary as the original XLMR-large model. The model was trained on the same data as [BERTovski](https://huggingface.co/RVN/BERTovski), though that model was trained from scratch using the RoBERTa architecture.
The training and fine-tuning procedures are described in detail on our [Github repo](https://github.com/macocu/LanguageModels).
# How to use
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("RVN/XLMR-BERTovski")
model = AutoModel.from_pretrained("RVN/XLMR-BERTovski") # PyTorch
model = TFAutoModel.from_pretrained("RVN/XLMR-BERTovski") # Tensorflow
```
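Because the model is a masked language model built on XLM-R, it can also be queried through the fill-mask pipeline; the sketch below is illustrative, and the Bulgarian prompt is just an example (XLM-R tokenizers use the `<mask>` token).
```python
from transformers import pipeline

# Fill-mask sketch; XLM-R-based tokenizers use <mask> as the mask token.
fill_mask = pipeline("fill-mask", model="RVN/XLMR-BERTovski")
print(fill_mask("София е столицата на <mask>."))  # "Sofia is the capital of <mask>."
```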
# Data
For training, we used all Bulgarian and Macedonian data that was present in the [MaCoCu](https://macocu.eu/), Oscar, mc4 and Wikipedia corpora. In a manual analysis we found that for Oscar and mc4, if the data did not come from the corresponding domain (.bg or .mk), it was often (badly) machine translated. Therefore, we opted to only use data that originally came from a .bg or .mk domain.
After de-duplicating the data, we were left with a total of 54.5 GB of Bulgarian and 9 GB of Macedonian text. Since there was quite a bit more Bulgarian data, we simply doubled the Macedonian data during training.
# Benchmark performance
We tested the performance of XLMR-BERTovski on XPOS, UPOS and NER benchmarks. For Bulgarian, we used data from the [Universal Dependencies](https://universaldependencies.org/) project. For Macedonian, we used the data sets created in the [babushka-bench](https://github.com/clarinsi/babushka-bench/) project. We also tested on a Google-translated (Bulgarian) and human-translated (Macedonian) version of the COPA data set (for details see our [Github repo](https://github.com/RikVN/COPA)). We compare performance to [BERTovski](https://huggingface.co/RVN/BERTovski) and the strong multilingual models XLMR-base and XLMR-large. For details regarding the fine-tuning procedure you can check out our [Github](https://github.com/macocu/LanguageModels).
Scores are averages of three runs, except for COPA, for which we use 10 runs. We use the same hyperparameter settings for all models for UPOS/XPOS/NER; for COPA we optimized the learning rate on the dev set.
## Bulgarian
| | **UPOS** | **UPOS** | **XPOS** | **XPOS** | **NER** | **NER** | **COPA** |
|-----------------|:--------:|:--------:|:--------:|:--------:|:-------:|:--------:|:--------:|
| | **Dev** | **Test** | **Dev** | **Test** | **Dev** | **Test** | **Test** |
| **XLM-R-base** | 99.2 | 99.4 | 98.0 | 98.3 | 93.2 | 92.9 | 56.9 |
| **XLM-R-large** | 99.3 | 99.4 | 97.4 | 97.7 | 93.7 | 93.5 | 53.1 |
| **BERTovski** | 98.8 | 99.1 | 97.6 | 97.8 | 93.5 | 93.3 | 51.7 |
| **XLMR-BERTovski** | 99.3 | 99.5 | 98.5 | 98.8 | 94.4 | 94.3 | 54.6 |
## Macedonian
| | **UPOS** | **UPOS** | **XPOS** | **XPOS** | **NER** | **NER** | **COPA** |
|-----------------|:--------:|:--------:|:--------:|:--------:|:-------:|:--------:|:--------:|
| | **Dev** | **Test** | **Dev** | **Test** | **Dev** | **Test** | **Test** |
| **XLM-R-base** | 98.3 | 98.6 | 97.3 | 97.1 | 92.8 | 94.8 | 55.3 |
| **XLM-R-large** | 98.3 | 98.7 | 97.7 | 97.5 | 93.3 | 95.1 | 52.5 |
| **BERTovski** | 97.8 | 98.1 | 96.4 | 96.0 | 92.8 | 94.6 | 51.8 |
| **XLMR-BERTovski** | 98.6 | 98.8 | 98.0 | 97.7 | 94.4 | 96.3 | 55.6|
# Acknowledgements
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). The authors received funding from the European Union's Connecting Europe Facility 2014-2020 - CEF Telecom, under Grant Agreement No. INEA/CEF/ICT/A2020/2278341 (MaCoCu).
# Citation
If you use this model, please cite the following paper:
```bibtex
@inproceedings{non-etal-2022-macocu,
title = "{M}a{C}o{C}u: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages",
author = "Ba{\~n}{\'o}n, Marta and
Espl{\`a}-Gomis, Miquel and
Forcada, Mikel L. and
Garc{\'\i}a-Romero, Cristian and
Kuzman, Taja and
Ljube{\v{s}}i{\'c}, Nikola and
van Noord, Rik and
Sempere, Leopoldo Pla and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Rupnik, Peter and
Suchomel, V{\'\i}t and
Toral, Antonio and
van der Werff, Tobias and
Zaragoza, Jaume",
booktitle = "Proceedings of the 23rd Annual Conference of the European Association for Machine Translation",
month = jun,
year = "2022",
address = "Ghent, Belgium",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2022.eamt-1.41",
pages = "303--304"
}
``` |
Declan/WallStreetJournal_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
*This policy was not part of TeachMyAgent's benchmark.*
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds, along with the standard deviation, for each morphology, as well as the aggregation of all 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'ALP-GMM',
 'morphology': 'spider'}
```
|
Declan/test_push | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one50v0_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_50v0_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one50v0_wikigold_split
type: tagged_one50v0_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.023255813953488372
- name: Recall
type: recall
value: 0.000244081034903588
- name: F1
type: f1
value: 0.00048309178743961357
- name: Accuracy
type: accuracy
value: 0.780812356464178
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_50v0_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v0_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6550
- Precision: 0.0233
- Recall: 0.0002
- F1: 0.0005
- Accuracy: 0.7808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 13 | 0.7660 | 0.0 | 0.0 | 0.0 | 0.7803 |
| No log | 2.0 | 26 | 0.6781 | 0.0 | 0.0 | 0.0 | 0.7803 |
| No log | 3.0 | 39 | 0.6550 | 0.0233 | 0.0002 | 0.0005 | 0.7808 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DeepBasak/Slack | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo).
*This policy was not part of TeachMyAgent's benchmark. It was trained on the easy task space of the Parkour environment with water removed.*
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds, along with the standard deviation, for each morphology, as well as the aggregation of all 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
 'environment': 'parkour (easy + no water)',
 'training_steps': 20000000,
 'n_evaluation_tasks': 100,
 'teacher': 'ALP-GMM',
 'morphology': 'climbing_profile_chimpanzee'}
```
|
DeepESP/gpt2-spanish-medium | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"es",
"dataset:ebooks",
"transformers",
"GPT-2",
"Spanish",
"ebooks",
"nlg",
"license:mit"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 340 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Worm
library_name: ml-agents
---
# **ppo** Agent playing **Worm**
This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm
2. Step 1: Write your model_id: mrm8488/Worm_v2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DeepESP/gpt2-spanish | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"es",
"dataset:ebooks",
"transformers",
"GPT-2",
"Spanish",
"ebooks",
"nlg",
"license:mit",
"has_space"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,463 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one50v5_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_50v5_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one50v5_wikigold_split
type: tagged_one50v5_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.11643835616438356
- name: Recall
type: recall
value: 0.008254430687059966
- name: F1
type: f1
value: 0.015416005440943096
- name: Accuracy
type: accuracy
value: 0.7840127288617977
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_50v5_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v5_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6440
- Precision: 0.1164
- Recall: 0.0083
- F1: 0.0154
- Accuracy: 0.7840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 26 | 0.6934 | 0.0 | 0.0 | 0.0 | 0.7768 |
| No log | 2.0 | 52 | 0.6426 | 0.0855 | 0.0024 | 0.0047 | 0.7799 |
| No log | 3.0 | 78 | 0.6440 | 0.1164 | 0.0083 | 0.0154 | 0.7840 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DeepPavlov/distilrubert-tiny-cased-conversational | [
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"transformers"
] | null | {
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5,993 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one50v8_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_50v8_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one50v8_wikigold_split
type: tagged_one50v8_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.09166666666666666
- name: Recall
type: recall
value: 0.0053868756121449556
- name: F1
type: f1
value: 0.010175763182238666
- name: Accuracy
type: accuracy
value: 0.7848874958020822
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_50v8_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5935
- Precision: 0.0917
- Recall: 0.0054
- F1: 0.0102
- Accuracy: 0.7849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 19 | 0.7198 | 0.0 | 0.0 | 0.0 | 0.7786 |
| No log | 2.0 | 38 | 0.6263 | 0.0727 | 0.0010 | 0.0019 | 0.7798 |
| No log | 3.0 | 57 | 0.5935 | 0.0917 | 0.0054 | 0.0102 | 0.7849 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DeepPavlov/xlm-roberta-large-en-ru-mnli | [
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"ru",
"dataset:glue",
"dataset:mnli",
"transformers",
"xlm-roberta-large",
"xlm-roberta-large-en-ru",
"xlm-roberta-large-en-ru-mnli",
"has_space"
] | text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 227 | null | ---
license: apache-2.0
---
# OFA-Base-VQA
This is the official checkpoint of OFA-Base finetuned on VQA 2.0 (compatible with the official OFA code rather than Hugging Face Transformers).
For more information, please refer to the official github ([https://github.com/OFA-Sys/OFA](https://github.com/OFA-Sys/OFA))
For now, we only provide the finetuned checkpoints compatible with the official code. |
DeepPavlov/xlm-roberta-large-en-ru | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"en",
"ru",
"transformers"
] | feature-extraction | {
"architectures": [
"XLMRobertaModel"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 190 | 2022-08-11T14:08:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one100v1_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_100v1_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one100v1_wikigold_split
type: tagged_one100v1_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.23249893932965635
- name: Recall
type: recall
value: 0.14241164241164242
- name: F1
type: f1
value: 0.17663174858984693
- name: Accuracy
type: accuracy
value: 0.8347454643603164
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_100v1_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v1_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4613
- Precision: 0.2325
- Recall: 0.1424
- F1: 0.1766
- Accuracy: 0.8347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 39 | 0.5179 | 0.1311 | 0.0398 | 0.0610 | 0.8044 |
| No log | 2.0 | 78 | 0.4609 | 0.2297 | 0.1351 | 0.1702 | 0.8327 |
| No log | 3.0 | 117 | 0.4613 | 0.2325 | 0.1424 | 0.1766 | 0.8347 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DeltaHub/adapter_t5-3b_cola | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one100v2_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_100v2_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one100v2_wikigold_split
type: tagged_one100v2_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.29022988505747127
- name: Recall
type: recall
value: 0.12856415478615071
- name: F1
type: f1
value: 0.17819336626676077
- name: Accuracy
type: accuracy
value: 0.833149450650485
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_100v2_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v2_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4407
- Precision: 0.2902
- Recall: 0.1286
- F1: 0.1782
- Accuracy: 0.8331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 40 | 0.5318 | 0.2817 | 0.0204 | 0.0380 | 0.7978 |
| No log | 2.0 | 80 | 0.4431 | 0.2932 | 0.1146 | 0.1647 | 0.8291 |
| No log | 3.0 | 120 | 0.4407 | 0.2902 | 0.1286 | 0.1782 | 0.8331 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DeltaHub/lora_t5-base_mrpc | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one100v4_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_100v4_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one100v4_wikigold_split
type: tagged_one100v4_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.16494312306101344
- name: Recall
type: recall
value: 0.08177390412714688
- name: F1
type: f1
value: 0.10934018851756641
- name: Accuracy
type: accuracy
value: 0.8299042951592769
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_100v4_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v4_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4506
- Precision: 0.1649
- Recall: 0.0818
- F1: 0.1093
- Accuracy: 0.8299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 34 | 0.5649 | 0.0 | 0.0 | 0.0 | 0.7875 |
| No log | 2.0 | 68 | 0.4687 | 0.1197 | 0.0400 | 0.0600 | 0.8147 |
| No log | 3.0 | 102 | 0.4506 | 0.1649 | 0.0818 | 0.1093 | 0.8299 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Deniskin/essays_small_2000i | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one100v8_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_100v8_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one100v8_wikigold_split
type: tagged_one100v8_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.18848653667595172
- name: Recall
type: recall
value: 0.0498159509202454
- name: F1
type: f1
value: 0.07880434782608696
- name: Accuracy
type: accuracy
value: 0.8035317050796927
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_100v8_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5649
- Precision: 0.1885
- Recall: 0.0498
- F1: 0.0788
- Accuracy: 0.8035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 37 | 0.7042 | 0.0 | 0.0 | 0.0 | 0.7750 |
| No log | 2.0 | 74 | 0.5744 | 0.1628 | 0.0243 | 0.0423 | 0.7930 |
| No log | 3.0 | 111 | 0.5649 | 0.1885 | 0.0498 | 0.0788 | 0.8035 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Dev-DGT/food-dbert-multiling | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 17 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one250v3_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_250v3_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one250v3_wikigold_split
type: tagged_one250v3_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5783339046966061
- name: Recall
type: recall
value: 0.4806267806267806
- name: F1
type: f1
value: 0.5249727711218297
- name: Accuracy
type: accuracy
value: 0.8981560947699669
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_250v3_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v3_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3179
- Precision: 0.5783
- Recall: 0.4806
- F1: 0.5250
- Accuracy: 0.8982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
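
For reference, the hyperparameters above roughly correspond to the following Hugging Face `TrainingArguments` (a sketch only; the output directory is a placeholder, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tagged_one_250v3_ner",  # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",  # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
)
```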
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 81 | 0.3974 | 0.2778 | 0.1869 | 0.2235 | 0.8530 |
| No log | 2.0 | 162 | 0.3095 | 0.5594 | 0.4470 | 0.4969 | 0.8944 |
| No log | 3.0 | 243 | 0.3179 | 0.5783 | 0.4806 | 0.5250 | 0.8982 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Dhito/am | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
widget:
- text: "Excited for the mint"
- text: "lfg"
- text: "no wl"
---
# Discord Sentiment Analysis - (Context: NFTs)
This model is derived from the Twitter-roBERTa-base model, further trained on ~10K Discord messages from NFT-focused Discord servers and fine-tuned for sentiment analysis with manually labelled data.
The original Twitter-roBERTa-base model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest). This model is suitable for English.
- Git Repo: [BVK project repository](https://github.com/BVK23/Discord-NLP).
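
For reference, the model can be queried through a standard text-classification pipeline; this is a sketch only, the repository id below is a placeholder (see the BVK project repository above for the actual checkpoint), and the returned labels follow the mapping listed below:

```python
from transformers import pipeline

# Placeholder repository id; substitute the actual checkpoint from the project repo.
classifier = pipeline("text-classification", model="BVK23/discord-nft-sentiment")

print(classifier("Excited for the mint"))  # expected: positive (label 2)
print(classifier("no wl"))                 # expected: negative (label 0)
```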
<b>Labels</b>:
0 -> Negative;
1 -> Neutral;
2 -> Positive |
DicoTiar/wisdomfiy | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one250v7_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_250v7_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one250v7_wikigold_split
type: tagged_one250v7_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5509259259259259
- name: Recall
type: recall
value: 0.4675834970530452
- name: F1
type: f1
value: 0.5058448459086079
- name: Accuracy
type: accuracy
value: 0.8893517705222476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_250v7_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v7_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3809
- Precision: 0.5509
- Recall: 0.4676
- F1: 0.5058
- Accuracy: 0.8894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 87 | 0.4450 | 0.1912 | 0.1047 | 0.1353 | 0.8278 |
| No log | 2.0 | 174 | 0.3903 | 0.4992 | 0.4176 | 0.4548 | 0.8820 |
| No log | 3.0 | 261 | 0.3809 | 0.5509 | 0.4676 | 0.5058 | 0.8894 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Digakive/Hsgshs | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one250v8_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_250v8_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one250v8_wikigold_split
type: tagged_one250v8_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5351851851851852
- name: Recall
type: recall
value: 0.4795353982300885
- name: F1
type: f1
value: 0.5058343057176197
- name: Accuracy
type: accuracy
value: 0.8947195053970506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_250v8_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3389
- Precision: 0.5352
- Recall: 0.4795
- F1: 0.5058
- Accuracy: 0.8947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 95 | 0.4305 | 0.3497 | 0.1814 | 0.2389 | 0.8488 |
| No log | 2.0 | 190 | 0.3469 | 0.4995 | 0.4281 | 0.4611 | 0.8875 |
| No log | 3.0 | 285 | 0.3389 | 0.5352 | 0.4795 | 0.5058 | 0.8947 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Dimedrolza/DialoGPT-small-cyberpunk | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ccsobral/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ccsobral/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1454
- Validation Loss: 0.5483
- Train Matthews Correlation: 0.5419
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.1417 | 0.5483 | 0.5419 | 0 |
| 0.1466 | 0.5483 | 0.5419 | 1 |
| 0.1454 | 0.5483 | 0.5419 | 2 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DivyanshuSheth/T5-Seq2Seq-Final | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | --alpha_ce 5.0 --alpha_mlm 2.0 --alpha_cos 1.0 --alpha_act 0.0 --alpha_clm 0.0 --mlm \ |
Dongjae/mrc2reader | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-gc-art2e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-gc-art2e
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0863
- Accuracy: 0.982
- F1: 0.9731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0875 | 1.0 | 32 | 0.0874 | 0.982 | 0.9731 |
| 0.0711 | 2.0 | 64 | 0.0863 | 0.982 | 0.9731 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Waynehillsdev/waynehills_sentimental_kor | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"ElectraForSequenceClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_one500v9_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_One_500v9_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_one500v9_wikigold_split
type: tagged_one500v9_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.7016183412002697
- name: Recall
type: recall
value: 0.7011455525606469
- name: F1
type: f1
value: 0.7013818672059319
- name: Accuracy
type: accuracy
value: 0.9284582154955403
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_500v9_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v9_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2469
- Precision: 0.7016
- Recall: 0.7011
- F1: 0.7014
- Accuracy: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 170 | 0.2908 | 0.5414 | 0.4538 | 0.4938 | 0.9011 |
| No log | 2.0 | 340 | 0.2680 | 0.6629 | 0.6253 | 0.6436 | 0.9172 |
| 0.1121 | 3.0 | 510 | 0.2469 | 0.7016 | 0.7011 | 0.7014 | 0.9285 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Doohae/q_encoder | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni50v0_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_50v0_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni50v0_wikigold_split
type: tagged_uni50v0_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.10632183908045977
- name: Recall
type: recall
value: 0.009030998291432755
- name: F1
type: f1
value: 0.016647919010123732
- name: Accuracy
type: accuracy
value: 0.7870040978069978
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_50v0_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v0_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6180
- Precision: 0.1063
- Recall: 0.0090
- F1: 0.0166
- Accuracy: 0.7870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 14 | 0.7325 | 0.0 | 0.0 | 0.0 | 0.7803 |
| No log | 2.0 | 28 | 0.6458 | 0.0860 | 0.0039 | 0.0075 | 0.7838 |
| No log | 3.0 | 42 | 0.6180 | 0.1063 | 0.0090 | 0.0166 | 0.7870 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Doohae/roberta | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-gc-art1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-gc-art1e
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0928
- Accuracy: 0.982
- F1: 0.9763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0226 | 1.0 | 32 | 0.0928 | 0.982 | 0.9763 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Doquey/DialoGPT-small-Michaelbot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5303243504311796
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8278
- Matthews Correlation: 0.5303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5225 | 1.0 | 535 | 0.5299 | 0.3973 |
| 0.3485 | 2.0 | 1070 | 0.5279 | 0.4975 |
| 0.2375 | 3.0 | 1605 | 0.5637 | 0.5275 |
| 0.1832 | 4.0 | 2140 | 0.7995 | 0.5249 |
| 0.1301 | 5.0 | 2675 | 0.8278 | 0.5303 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-50 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni50v6_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_50v6_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni50v6_wikigold_split
type: tagged_uni50v6_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.0
- name: Recall
type: recall
value: 0.0
- name: F1
type: f1
value: 0.0
- name: Accuracy
type: accuracy
value: 0.7775983130313839
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_50v6_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v6_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6142
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.7776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 17 | 0.7369 | 0.0 | 0.0 | 0.0 | 0.7773 |
| No log | 2.0 | 34 | 0.6359 | 0.0 | 0.0 | 0.0 | 0.7773 |
| No log | 3.0 | 51 | 0.6142 | 0.0 | 0.0 | 0.0 | 0.7776 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
DoyyingFace/bert-asian-hate-tweets-concat-clean | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni50v8_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_50v8_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni50v8_wikigold_split
type: tagged_uni50v8_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.15460526315789475
- name: Recall
type: recall
value: 0.023016650342801176
- name: F1
type: f1
value: 0.04006820119352089
- name: Accuracy
type: accuracy
value: 0.7925892757192432
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_50v8_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5527
- Precision: 0.1546
- Recall: 0.0230
- F1: 0.0401
- Accuracy: 0.7926
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 19 | 0.6981 | 0.0 | 0.0 | 0.0 | 0.7786 |
| No log | 2.0 | 38 | 0.5851 | 0.1290 | 0.0049 | 0.0094 | 0.7832 |
| No log | 3.0 | 57 | 0.5527 | 0.1546 | 0.0230 | 0.0401 | 0.7926 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
albert-large-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26,792 | 2022-08-11T17:47:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni50v9_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_50v9_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni50v9_wikigold_split
type: tagged_uni50v9_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.5
- name: Recall
type: recall
value: 0.000243605359317905
- name: F1
type: f1
value: 0.00048697345994643296
- name: Accuracy
type: accuracy
value: 0.7843220814175171
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_50v9_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v9_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6233
- Precision: 0.5
- Recall: 0.0002
- F1: 0.0005
- Accuracy: 0.7843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 16 | 0.7531 | 0.0 | 0.0 | 0.0 | 0.7788 |
| No log | 2.0 | 32 | 0.6599 | 0.5 | 0.0002 | 0.0005 | 0.7823 |
| No log | 3.0 | 48 | 0.6233 | 0.5 | 0.0002 | 0.0005 | 0.7843 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
bert-base-german-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"exbert",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 175,983 | 2022-08-11T17:59:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni100v1_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_100v1_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni100v1_wikigold_split
type: tagged_uni100v1_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.23641213737912636
- name: Recall
type: recall
value: 0.18425155925155925
- name: F1
type: f1
value: 0.20709799912370383
- name: Accuracy
type: accuracy
value: 0.8493674748280798
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_100v1_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v1_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4031
- Precision: 0.2364
- Recall: 0.1843
- F1: 0.2071
- Accuracy: 0.8494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 39 | 0.4906 | 0.1526 | 0.0580 | 0.0840 | 0.8187 |
| No log | 2.0 | 78 | 0.4213 | 0.2321 | 0.1736 | 0.1986 | 0.8456 |
| No log | 3.0 | 117 | 0.4031 | 0.2364 | 0.1843 | 0.2071 | 0.8494 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
bert-base-multilingual-uncased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 328,585 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tagged_uni100v2_wikigold_split
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Tagged_Uni_100v2_NER_Model_3Epochs_AUGMENTED
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tagged_uni100v2_wikigold_split
type: tagged_uni100v2_wikigold_split
args: default
metrics:
- name: Precision
type: precision
value: 0.2783229259589652
- name: Recall
type: recall
value: 0.15885947046843177
- name: F1
type: f1
value: 0.20226904376012964
- name: Accuracy
type: accuracy
value: 0.8411943180251
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_100v2_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v2_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4048
- Precision: 0.2783
- Recall: 0.1589
- F1: 0.2023
- Accuracy: 0.8412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 39 | 0.4802 | 0.3667 | 0.0784 | 0.1292 | 0.8125 |
| No log | 2.0 | 78 | 0.4028 | 0.2745 | 0.1540 | 0.1973 | 0.8412 |
| No log | 3.0 | 117 | 0.4048 | 0.2783 | 0.1589 | 0.2023 | 0.8412 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
t5-11b | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:c4",
"arxiv:1805.12471",
"arxiv:1708.00055",
"arxiv:1704.05426",
"arxiv:1606.05250",
"arxiv:1808.09121",
"arxiv:1810.12885",
"arxiv:1905.10044",
"arxiv:1910.09700",
"transformers",
"summarization",
"translation",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | translation | {
"architectures": [
"T5WithLMHeadModel"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 37,600 | 2022-08-11T18:44:40Z | ---
inference: false
co2_eq_emissions:
emissions: 7540
source: MLCo2 Machine Learning Impact calculator
geographical_location: East USA
hardware_used: Tesla V100-SXM2 GPU
tags:
- segmentation
license: gpl-3.0
language: en
model-index:
- name: SpecLab
results: []
---
# SpecLab Model Card
This model card focuses on the model associated with the SpecLab space on Hugging Face. For now, please [contact me](https://haoliyin.me) for access to the demo.
## Model Details
* **Developed by:** Haoli Yin
* **Model type:** Atrous Spatial Pyramid Pooling (ASPP) model for Specular Reflection Segmentation in Endoscopic Images
* **Language(s):** English
* **License:** GPL 3.0
* **Model Description:** This is a model that can be used to create dense pixel-wise segmentation masks of detected specular reflections from an endoscopy image.
* **Cite as:**
```bibtex
@misc{Yin_SpecLab_2022,
author = {Yin, Haoli},
doi = {TBD},
month = {8},
title = {SpecLab},
url = {https://github.com/Nano1337/SpecLab},
year = {2022}
}
```
## Uses
### Direct Use
The model is intended to be used to generate dense pixel-wise segmentation maps of specular reflection regions found in endoscopy images. Intended uses exclude those described in the [Misuse and Out-of-Scope Use](#misuse-malicious-use-and-out-of-scope-use) section.
### Downstream Use
The model could also be used for downstream use cases, including further research efforts, such as detecting specular reflection in other real-world scenarios. This application would require fine-tuning the model with domain-specific datasets.
## Limitations and Bias
### Limitations
The performance of the model may degrade when applied to non-biological tissue images. There may also be edge cases in which the model fails to detect specular reflection, especially when the reflection is a color other than white.
### Bias
The model is trained on endoscopy video data, so it has a bias towards detecting specular reflection better on biological tissue backgrounds.
### Limitations and Bias Recommendations
* Users (both direct and downstream) should be made aware of the biases and limitations.
* Further work on this model should include methods for balanced representations of different types of specular reflections.
## Training
### Training Data
The GLENDA "no pathology" dataset was used to train the model:
* [GLENDA Dataset](http://ftp.itec.aau.at/datasets/GLENDA/), which contains ~12k image frames.
* Masks (to be released) were generated using the specular reflection detection pipeline described in this paper (to be released).
* Train/Val/Test was split randomly based on a 60/20/20 distribution.
### Training and Evaluation Procedure & Results
You can view the training logs [here at Weights and Biases](https://wandb.ai/nano-1337/Predict/reports/SpecLab-Training-for-10-Epochs--VmlldzoyNDYyNDIz?accessToken=xfjtfgb5szvsk08luvmwinjl6y2kvp1vl1eax52kbxgwgbwjqv29yed9elzgbju1)
During training, input images pass through the system as follows:
* Images are transformed by albumentations with horizontal/vertical flips to augment the data, normalized to [0, 1], and converted to a tensor.
* A forward pass is run through the model and the logits are output
* Loss is the "Binary Cross Entropy with Logits Loss" between the model prediction logits and the ground truth masks
* The logits are passed through a sigmoid activation and thresholded at 0.5 to binarize the output (sketched below).
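A rough PyTorch sketch of that thresholding step (illustration only: `model` stands in for the trained SpecLab ASPP network, whose actual class and weights are distributed via the GitHub repository):
```python
import torch

def binarize_specular_mask(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """image: normalized float tensor of shape (1, 3, H, W) with values in [0, 1]."""
    model.eval()
    with torch.no_grad():
        logits = model(image)                  # raw per-pixel logits
        probs = torch.sigmoid(logits)          # sigmoid -> probabilities
        return (probs > 0.5).to(torch.uint8)   # threshold at 0.5 -> binary mask
```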
The simplified training procedure for SpecLab is as follows:
* **Hardware:** One 16GB NVIDIA Tesla V100-SXM2
* **Optimizer:** Adam
* **Batch:** 4 samples
* **Learning rate:** initialized at 0.001 then CosineAnnealingLR with a T_max of 20.
* **Epochs:** 10 epochs
* **Steps:** 18k
## Environmental Impact
### SpecLab Estimated Emissions
We estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type, runtime, cloud provider, and compute region listed below were used to estimate the carbon impact.
* **Hardware Type:** Tesla V100-SXM2
* **Hours used:** 6
* **Cloud Provider:** Google Colab
* **Compute Region:** us-south1
* **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 0.7146 kg CO2 eq.
## Citation
```bibtex
@misc{Yin_SpecLab_2022,
author = {Yin, Haoli},
doi = {TBD},
month = {8},
title = {SpecLab},
url = {https://github.com/Nano1337/SpecLab},
year = {2022}
}
```
*This model card was written by: Haoli Yin* |
1Basco/DialoGPT-small-jake | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-08-11T19:58:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: story_spanish_gpt2_by_category
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# story_spanish_gpt2_by_category
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on the None dataset.
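Since the card does not include a usage snippet, here is a minimal sketch (the repo id is assumed, and the prompt is just an illustrative Spanish opening line):
```python
from transformers import pipeline

# Assumed repo id; adjust to the actual namespace/model name on the Hub.
generator = pipeline("text-generation", model="story_spanish_gpt2_by_category")
story = generator("Había una vez un dragón que", max_length=60, do_sample=True)
print(story[0]["generated_text"])
```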
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Abobus/Fu | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="marii/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
AdapterHub/bert-base-uncased-pf-rotten_tomatoes | [
"bert",
"en",
"dataset:rotten_tomatoes",
"arxiv:2104.08247",
"adapter-transformers",
"text-classification",
"adapterhub:sentiment/rotten_tomatoes"
] | text-classification | {
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9811111111111112
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0483
- Accuracy: 0.9811
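A minimal inference sketch (the repo id and the image path are assumptions; any RGB image works as input):
```python
from transformers import pipeline

# Assumed repo id; replace with the actual namespace/model name on the Hub.
classifier = pipeline("image-classification", model="swin-tiny-patch4-window7-224-finetuned-eurosat")
print(classifier("satellite_tile.png"))  # hypothetical local image file
```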
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2623 | 1.0 | 379 | 0.1006 | 0.9674 |
| 0.1712 | 2.0 | 758 | 0.0620 | 0.9804 |
| 0.1206 | 3.0 | 1137 | 0.0483 | 0.9811 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.8.1+cu101
- Datasets 2.4.1.dev0
- Tokenizers 0.12.0
|
AdapterHub/bert-base-uncased-pf-rte | [
"bert",
"en",
"arxiv:2104.08247",
"adapter-transformers",
"text-classification",
"adapterhub:nli/rte"
] | text-classification | {
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.936
- name: F1
type: f1
value: 0.9362059325922715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Accuracy: 0.936
- F1: 0.9362
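A minimal usage sketch (the repo id is assumed; the emotion dataset distinguishes sadness, joy, love, anger, fear and surprise):
```python
from transformers import pipeline

# Assumed repo id; adjust the namespace as needed.
classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe how lucky I am today!"))
```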
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1111 | 1.0 | 250 | 0.1854 | 0.9295 | 0.9295 |
| 0.1079 | 2.0 | 500 | 0.1582 | 0.936 | 0.9362 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AdapterHub/roberta-base-pf-cola | [
"roberta",
"en",
"arxiv:2104.08247",
"adapter-transformers",
"text-classification",
"adapterhub:lingaccept/cola"
] | text-classification | {
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- bg
- fi
- hr
- sl
- sr
language_bcp47:
- sr_Cyrl
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-fi-zls
results:
- task:
name: Translation fin-bul
type: translation
args: fin-bul
dataset:
name: flores101-devtest
type: flores_101
args: fin bul devtest
metrics:
- name: BLEU
type: bleu
value: 26.2
- name: chr-F
type: chrf
value: 0.54912
- task:
name: Translation fin-hrv
type: translation
args: fin-hrv
dataset:
name: flores101-devtest
type: flores_101
args: fin hrv devtest
metrics:
- name: BLEU
type: bleu
value: 21.3
- name: chr-F
type: chrf
value: 0.51468
- task:
name: Translation fin-slv
type: translation
args: fin-slv
dataset:
name: flores101-devtest
type: flores_101
args: fin slv devtest
metrics:
- name: BLEU
type: bleu
value: 22.3
- name: chr-F
type: chrf
value: 0.51226
- task:
name: Translation fin-srp_Cyrl
type: translation
args: fin-srp_Cyrl
dataset:
name: flores101-devtest
type: flores_101
args: fin srp_Cyrl devtest
metrics:
- name: BLEU
type: bleu
value: 21.8
- name: chr-F
type: chrf
value: 0.50774
---
# opus-mt-tc-big-fi-zls
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from Finnish (fi) to South Slavic languages (zls).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages of the world. All models were originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT framework written in pure C++, and then converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/), and the training pipelines follow the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2022-07-23
- **License:** CC-BY-4.0
- **Language(s):**
- Source Language(s): fin
- Target Language(s): bul hrv slv srp_Cyrl
- Language Pair(s): fin-bul fin-hrv fin-slv fin-srp_Cyrl
- Valid Target Language Labels: >>bos<< >>bos_Cyrl<< >>bos_Latn<< >>bul<< >>chu<< >>hbs<< >>hbs_Cyrl<< >>hrv<< >>kjv<< >>mkd<< >>slv<< >>srp<< >>srp_Cyrl<< >>srp_Latn<< >>svm<<
- **Original Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zls/opusTCv20210807_transformer-big_2022-07-23.zip)
- **Resources for more information:**
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- More information about released models for this language pair: [OPUS-MT fin-zls README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-zls/README.md)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
  - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID), e.g. `>>slv<<`.
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>bul<< Ajattelen vain sinua.",
">>slv<< Virtahevot rakastavat vettä."
]
model_name = "pytorch-models/opus-mt-tc-big-fi-zls"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Мисля само за теб.
# Povodni konji obožujejo vodo.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fi-zls")
print(pipe(">>bul<< Ajattelen vain sinua."))
# expected output: Мисля само за теб.
```
## Training
- **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zls/opusTCv20210807_transformer-big_2022-07-23.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* test set translations: [opusTCv20210807_transformer-big_2022-07-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zls/opusTCv20210807_transformer-big_2022-07-23.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-07-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zls/opusTCv20210807_transformer-big_2022-07-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| fin-bul | flores101-devtest | 0.54912 | 26.2 | 1012 | 24700 |
| fin-hrv | flores101-devtest | 0.51468 | 21.3 | 1012 | 22423 |
| fin-slv | flores101-devtest | 0.51226 | 22.3 | 1012 | 23425 |
| fin-srp_Cyrl | flores101-devtest | 0.50774 | 21.8 | 1012 | 23456 |
## Citation Information
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 8b9f0b0
* port time: Sat Aug 13 00:08:29 EEST 2022
* port machine: LM0-400-22516.local
|
AlErysvi/Erys | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_keras_callback
model-index:
- name: mojtaba767/bert-base-parsbert-uncased-finetuned-imdb-m2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mojtaba767/bert-base-parsbert-uncased-finetuned-imdb-m2
This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.4126
- Validation Loss: 3.4258
- Epoch: 0
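A minimal fill-mask sketch (the repo id follows the model name above; the Persian sentence is only an illustrative example, roughly "this movie was very [MASK]"):
```python
from transformers import pipeline

# Repo id taken from the model name above; adjust if it differs on the Hub.
fill_mask = pipeline("fill-mask", model="mojtaba767/bert-base-parsbert-uncased-finetuned-imdb-m2")
for prediction in fill_mask("این فیلم بسیار [MASK] بود."):
    print(prediction["token_str"], prediction["score"])
```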
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -968, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4126 | 3.4258 | 0 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Aleksandar1932/gpt2-soul | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2022-08-13T08:15:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.9285214883845085
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2168
- Accuracy: 0.9285
- F1: 0.9285
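A minimal usage sketch that returns scores for every emotion label (the repo id is assumed; `top_k=None` is the newer transformers API, older versions use `return_all_scores=True` instead):
```python
from transformers import pipeline

# Assumed repo id; adjust the namespace as needed.
classifier = pipeline("text-classification",
                      model="distilbert-base-uncased-finetuned-emotion",
                      top_k=None)
print(classifier("I was not expecting that plot twist at all."))
```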
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8572 | 1.0 | 250 | 0.3210 | 0.9045 | 0.9015 |
| 0.2513 | 2.0 | 500 | 0.2168 | 0.9285 | 0.9285 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AnonymousSub/EManuals_BERT_copy_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4253
- Wer: 0.4880
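A minimal transcription sketch (the repo id and the audio file are assumptions; the input should be 16 kHz mono audio in whatever language the Common Voice subset used for fine-tuning was in):
```python
from transformers import pipeline

# Assumed repo id; replace with the actual namespace/model name on the Hub.
asr = pipeline("automatic-speech-recognition", model="wav2vec2-large-xlsr-53-demo-colab")
print(asr("sample_16khz.wav")["text"])  # hypothetical local audio file
```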
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2135 | 4.21 | 400 | 2.5232 | 1.0 |
| 0.8323 | 8.42 | 800 | 0.4673 | 0.6142 |
| 0.3247 | 12.63 | 1200 | 0.4087 | 0.5536 |
| 0.217 | 16.84 | 1600 | 0.3950 | 0.5237 |
| 0.166 | 21.05 | 2000 | 0.4294 | 0.5075 |
| 0.141 | 25.26 | 2400 | 0.4219 | 0.4944 |
| 0.1193 | 29.47 | 2800 | 0.4253 | 0.4880 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AnonymousSub/EManuals_RoBERTa_wikiqa | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | 2022-08-13T18:54:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bertbasecasedfinancialphrasebank
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertbasecasedfinancialphrasebank
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5046
- Accuracy: 0.8660
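A minimal usage sketch (the repo id is assumed; unless `id2label` was configured during training, the predictions will show up as generic `LABEL_0`/`LABEL_1`/`LABEL_2` rather than named sentiments):
```python
from transformers import pipeline

# Assumed repo id; adjust the namespace as needed.
classifier = pipeline("text-classification", model="bertbasecasedfinancialphrasebank")
print(classifier("The company's quarterly revenue rose 12 percent year over year."))
```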
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9921 | 0.04 | 5 | 0.9266 | 0.6082 |
| 0.8989 | 0.08 | 10 | 0.8833 | 0.6082 |
| 0.8563 | 0.12 | 15 | 0.8287 | 0.6371 |
| 0.8316 | 0.16 | 20 | 0.7626 | 0.6866 |
| 0.808 | 0.2 | 25 | 0.7398 | 0.6845 |
| 0.8219 | 0.25 | 30 | 0.7234 | 0.6969 |
| 0.7402 | 0.29 | 35 | 0.7080 | 0.6990 |
| 0.749 | 0.33 | 40 | 0.7261 | 0.6948 |
| 0.8777 | 0.37 | 45 | 0.7253 | 0.7072 |
| 0.7057 | 0.41 | 50 | 0.6736 | 0.7196 |
| 0.6968 | 0.45 | 55 | 0.6326 | 0.7320 |
| 0.6264 | 0.49 | 60 | 0.6121 | 0.7443 |
| 0.703 | 0.53 | 65 | 0.6008 | 0.7505 |
| 0.6788 | 0.57 | 70 | 0.6421 | 0.7278 |
| 0.6458 | 0.61 | 75 | 0.5801 | 0.7546 |
| 0.605 | 0.66 | 80 | 0.5601 | 0.7588 |
| 0.5428 | 0.7 | 85 | 0.5480 | 0.7546 |
| 0.6025 | 0.74 | 90 | 0.5632 | 0.7402 |
| 0.5761 | 0.78 | 95 | 0.5053 | 0.7794 |
| 0.4919 | 0.82 | 100 | 0.4745 | 0.8289 |
| 0.5217 | 0.86 | 105 | 0.4731 | 0.8268 |
| 0.5517 | 0.9 | 110 | 0.4512 | 0.8474 |
| 0.4131 | 0.94 | 115 | 0.4280 | 0.8392 |
| 0.5244 | 0.98 | 120 | 0.4222 | 0.8433 |
| 0.6407 | 1.02 | 125 | 0.4474 | 0.8392 |
| 0.3661 | 1.07 | 130 | 0.4101 | 0.8474 |
| 0.4148 | 1.11 | 135 | 0.3810 | 0.8619 |
| 0.296 | 1.15 | 140 | 0.4003 | 0.8371 |
| 0.3579 | 1.19 | 145 | 0.3738 | 0.8598 |
| 0.3718 | 1.23 | 150 | 0.3635 | 0.8598 |
| 0.307 | 1.27 | 155 | 0.3900 | 0.8515 |
| 0.3055 | 1.31 | 160 | 0.3881 | 0.8454 |
| 0.2789 | 1.35 | 165 | 0.3679 | 0.8495 |
| 0.2541 | 1.39 | 170 | 0.3709 | 0.8557 |
| 0.3263 | 1.43 | 175 | 0.3815 | 0.8557 |
| 0.305 | 1.48 | 180 | 0.3757 | 0.8495 |
| 0.2215 | 1.52 | 185 | 0.3643 | 0.8639 |
| 0.3642 | 1.56 | 190 | 0.3887 | 0.8495 |
| 0.4179 | 1.6 | 195 | 0.3736 | 0.8536 |
| 0.3126 | 1.64 | 200 | 0.3928 | 0.8392 |
| 0.2625 | 1.68 | 205 | 0.3648 | 0.8577 |
| 0.2155 | 1.72 | 210 | 0.3612 | 0.8577 |
| 0.2526 | 1.76 | 215 | 0.3781 | 0.8433 |
| 0.3033 | 1.8 | 220 | 0.4334 | 0.8412 |
| 0.2752 | 1.84 | 225 | 0.4039 | 0.8392 |
| 0.3466 | 1.89 | 230 | 0.3800 | 0.8412 |
| 0.4447 | 1.93 | 235 | 0.3575 | 0.8412 |
| 0.257 | 1.97 | 240 | 0.3732 | 0.8515 |
| 0.2647 | 2.01 | 245 | 0.3542 | 0.8454 |
| 0.1795 | 2.05 | 250 | 0.3599 | 0.8495 |
| 0.1697 | 2.09 | 255 | 0.3694 | 0.8515 |
| 0.1907 | 2.13 | 260 | 0.3985 | 0.8495 |
| 0.2515 | 2.17 | 265 | 0.3753 | 0.8660 |
| 0.1921 | 2.21 | 270 | 0.3674 | 0.8598 |
| 0.1342 | 2.25 | 275 | 0.3712 | 0.8577 |
| 0.1424 | 2.3 | 280 | 0.3595 | 0.8680 |
| 0.2071 | 2.34 | 285 | 0.3650 | 0.8680 |
| 0.2224 | 2.38 | 290 | 0.3881 | 0.8557 |
| 0.2556 | 2.42 | 295 | 0.3598 | 0.8536 |
| 0.1518 | 2.46 | 300 | 0.3506 | 0.8619 |
| 0.1427 | 2.5 | 305 | 0.3621 | 0.8639 |
| 0.1599 | 2.54 | 310 | 0.4495 | 0.8371 |
| 0.163 | 2.58 | 315 | 0.4812 | 0.8124 |
| 0.1462 | 2.62 | 320 | 0.4012 | 0.8680 |
| 0.2213 | 2.66 | 325 | 0.4047 | 0.8557 |
| 0.1967 | 2.7 | 330 | 0.4008 | 0.8701 |
| 0.2305 | 2.75 | 335 | 0.3838 | 0.8722 |
| 0.1097 | 2.79 | 340 | 0.3743 | 0.8577 |
| 0.1521 | 2.83 | 345 | 0.3700 | 0.8701 |
| 0.0833 | 2.87 | 350 | 0.3831 | 0.8680 |
| 0.2099 | 2.91 | 355 | 0.3947 | 0.8722 |
| 0.2086 | 2.95 | 360 | 0.4184 | 0.8515 |
| 0.1294 | 2.99 | 365 | 0.4214 | 0.8536 |
| 0.0992 | 3.03 | 370 | 0.4087 | 0.8639 |
| 0.0973 | 3.07 | 375 | 0.4073 | 0.8557 |
| 0.0519 | 3.11 | 380 | 0.4085 | 0.8639 |
| 0.1418 | 3.16 | 385 | 0.4109 | 0.8639 |
| 0.081 | 3.2 | 390 | 0.4426 | 0.8619 |
| 0.0506 | 3.24 | 395 | 0.4733 | 0.8454 |
| 0.0962 | 3.28 | 400 | 0.5100 | 0.8412 |
| 0.1045 | 3.32 | 405 | 0.4408 | 0.8639 |
| 0.1236 | 3.36 | 410 | 0.4395 | 0.8619 |
| 0.0962 | 3.4 | 415 | 0.4515 | 0.8557 |
| 0.1611 | 3.44 | 420 | 0.4819 | 0.8474 |
| 0.1044 | 3.48 | 425 | 0.4336 | 0.8598 |
| 0.0715 | 3.52 | 430 | 0.4574 | 0.8639 |
| 0.0302 | 3.57 | 435 | 0.4617 | 0.8619 |
| 0.1234 | 3.61 | 440 | 0.4710 | 0.8577 |
| 0.1439 | 3.65 | 445 | 0.5082 | 0.8433 |
| 0.1113 | 3.69 | 450 | 0.4596 | 0.8639 |
| 0.1336 | 3.73 | 455 | 0.4631 | 0.8598 |
| 0.072 | 3.77 | 460 | 0.4372 | 0.8660 |
| 0.0621 | 3.81 | 465 | 0.4381 | 0.8619 |
| 0.1703 | 3.85 | 470 | 0.4337 | 0.8680 |
| 0.0284 | 3.89 | 475 | 0.4321 | 0.8742 |
| 0.0647 | 3.93 | 480 | 0.4575 | 0.8680 |
| 0.0897 | 3.98 | 485 | 0.4676 | 0.8680 |
| 0.0646 | 4.02 | 490 | 0.4550 | 0.8722 |
| 0.0554 | 4.06 | 495 | 0.4721 | 0.8598 |
| 0.0259 | 4.1 | 500 | 0.5045 | 0.8454 |
| 0.0186 | 4.14 | 505 | 0.4635 | 0.8577 |
| 0.0391 | 4.18 | 510 | 0.5362 | 0.8639 |
| 0.0505 | 4.22 | 515 | 0.5122 | 0.8722 |
| 0.1101 | 4.26 | 520 | 0.5046 | 0.8660 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
AnonymousSub/SR_EManuals-RoBERTa | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 6.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="ntinosmg/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
AnonymousSub/SR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2022-08-13T19:52:13Z | ---
language: en
thumbnail: http://www.huggingtweets.com/markythefluffy/1660420439548/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1535398340963094529/8lG2OKb6_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Marky</div>
<div style="text-align: center; font-size: 14px;">@markythefluffy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Marky.
| Data | Marky |
| --- | --- |
| Tweets downloaded | 2049 |
| Retweets | 1275 |
| Short tweets | 201 |
| Tweets kept | 573 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n9qnbbn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @markythefluffy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/242nz2s1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/242nz2s1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/markythefluffy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: train
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8124233755619126
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2630
- F1: 0.8124
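A minimal inference sketch on an Italian sentence (the repo id is assumed; `aggregation_strategy="simple"` groups word pieces into whole entities):
```python
from transformers import pipeline

# Assumed repo id; replace with the actual namespace/model name on the Hub.
ner = pipeline("token-classification",
               model="xlm-roberta-base-finetuned-panx-it",
               aggregation_strategy="simple")
print(ner("Dante Alighieri nacque a Firenze."))
```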
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8193 | 1.0 | 70 | 0.3200 | 0.7356 |
| 0.2773 | 2.0 | 140 | 0.2841 | 0.7882 |
| 0.1807 | 3.0 | 210 | 0.2630 | 0.8124 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln65Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln65Paraphrase")
```
```
Demo:
https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy
```
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
max_length=10 + len(prompt),
temperature=1.0,
top_k=50,
top_p=0.95,
do_sample=True,
num_return_sequences=5,
early_stopping=True)
for i in range(5):
print(tokenizer.decode(outputs[i]))
```
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput
myinput= myinput.to(device)
logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
words = []
print(best_words)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
original: chrome extensions [MASK] accomplish everyday tasks.
infill: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
***
original: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill: at a time when nintendo has become inflexible, ( firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
***
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {D} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
Infill / Infilling / Masking / Phrase Masking
```
his contention [blank] by the evidence [sep] was refuted [answer]
***
few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer]
***
when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer]
***
the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer]
***
``` |
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: other
tags:
- vision
- semantic-segmentation
- generated_from_trainer
datasets:
- ds_tag1
- ds_tag2
model-index:
- name: segformer-b4-finetuned-segments-sidewalk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b4-finetuned-segments-sidewalk
This model is a fine-tuned version of [nvidia/mit-b4](https://huggingface.co/nvidia/mit-b4) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6675
- Mean Iou: 0.4470
- Mean Accuracy: 0.5318
- Overall Accuracy: 0.8813
- Per Category Iou: [nan, 0.8533604816441539, 0.8708564376734238, 0.7156066545682345, 0.7983500150170627, 0.41321512443014663, nan, 0.5985238978790051, 0.594578879531733, 0.2628567068518236, 0.8786538788390237, 0.31440437475402366, 0.03608445297504798, nan, 0.09407417012448133, 0.6379131555367624, 0.0, 0.030675647939096523, 0.7682604802535551, 0.18266911680257264, 0.4825862889842458, 0.4408940826749661, 0.33210263983754845, nan, 0.09290563675138425, 0.4449054225103382, 0.47077278372077824, 0.21916411824668705, 0.8724052658393265, 0.7617232855126097, 0.9444534550949257, 0.025704847176713768, 0.3993842932680365, 0.3205363901805991, 0.0]
- Per Category Accuracy: [nan, 0.9380875578102342, 0.9543011168303082, 0.8679443695951874, 0.8440564280853614, 0.5113012519627518, nan, 0.6873750638506678, 0.8562476885610814, 0.43591041376692397, 0.9688079889567591, 0.35346513902473103, 0.03608445297504798, nan, 0.0947777523759757, 0.7549116557254, 0.0, 0.030763473053892217, 0.9130259245472051, 0.21554735732570396, 0.6246260465528358, 0.5660428706356957, 0.4763039449298385, nan, 0.12561338535811123, 0.5679549548577777, 0.6743017631455765, 0.2707808564231738, 0.93120742667048, 0.8943661383441811, 0.9793189796995165, 0.028724445966322443, 0.4894588629271634, 0.39592709085000083, 0.0]
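Below is a minimal inference sketch. It assumes a recent version of the Transformers library; the repository id is a placeholder for wherever this checkpoint is hosted, and the image path is illustrative only.
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Placeholder repo id -- substitute the actual Hub location of this checkpoint.
checkpoint = "segformer-b4-finetuned-segments-sidewalk"

processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open("sidewalk_scene.jpg")  # any RGB street-level photo
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, height/4, width/4)

# Upsample the logits to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]  # (height, width) label map
```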
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
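For reference, the sketch below shows roughly how these hyperparameters map onto the Transformers `Trainer` API. It is not the original training script: the dataset objects `train_ds`/`eval_ds` and the `num_labels` value are illustrative assumptions.
```python
from transformers import (
    SegformerForSemanticSegmentation,
    Trainer,
    TrainingArguments,
)

model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b4",
    num_labels=35,  # assumption: number of sidewalk-semantic classes
)

# The Adam betas/epsilon listed above match the TrainingArguments defaults.
training_args = TrainingArguments(
    output_dir="segformer-b4-finetuned-segments-sidewalk",
    learning_rate=6e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,  # hypothetical: preprocessed sidewalk-semantic train split
    eval_dataset=eval_ds,    # hypothetical: preprocessed evaluation split
)
trainer.train()
```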
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 1.6508 | 0.5 | 100 | 0.8381 | 0.2400 | 0.2994 | 0.7799 | [nan, 0.6455149842561542, 0.7660084759624965, 0.0, 0.5583802751726342, 0.17962756504720537, nan, 0.20153013380586832, 0.41997993839489406, 0.0, 0.7944884758623925, 0.0, 0.0, nan, 0.0, 0.011634071805991577, 0.0, 0.0, 0.6624612120527147, 0.0, 0.3605999648666025, 0.15748396186821753, 0.0, nan, 0.0, 0.2806557009887555, 0.0, 0.0, 0.7648512914435088, 0.6239658880405557, 0.8786827007493008, 0.0, 0.0, 0.1338073538654934, 0.0] | [nan, 0.730500979507233, 0.9476581938486851, 0.0, 0.5983776487908833, 0.31278692562395216, nan, 0.2506726944378527, 0.6602795358281754, 0.0, 0.9196788771888567, 0.0, 0.0, nan, 0.0, 0.011634279723112482, 0.0, 0.0, 0.8162061739118446, 0.0, 0.6001624279533586, 0.16819600749195576, 0.0, nan, 0.0, 0.3676572427109064, 0.0, 0.0, 0.8884311645011658, 0.8935431584048393, 0.9506532130007034, 0.0, 0.0, 0.1641736577412265, 0.0] |
| 0.6896 | 1.0 | 200 | 0.6606 | 0.2870 | 0.3440 | 0.8186 | [nan, 0.7331723817963566, 0.7979002134860629, 0.27578896522112495, 0.6355166056258273, 0.2441103299841941, nan, 0.3312595953570695, 0.46053618447299133, 0.0, 0.7953959454340308, 0.0, 0.0, nan, 0.0, 0.4712813380000684, 0.0, 0.0, 0.6714241485031864, 0.0, 0.28944013580253175, 0.2600270046106534, 0.0, nan, 0.0, 0.2899246788080452, 0.0, 0.0, 0.8284648004583407, 0.7080343509392041, 0.9063412687288718, 0.0, 0.0005201165928028867, 0.1972430989674791, 0.0] | [nan, 0.8237975092037824, 0.9443453283653981, 0.32200150394786314, 0.6907694083535335, 0.34945070938958067, nan, 0.40875206634838535, 0.8055181461038734, 0.0, 0.9567077031994288, 0.0, 0.0, nan, 0.0, 0.4925059273468123, 0.0, 0.0, 0.9219048866461697, 0.0, 0.3370399720427038, 0.27653641122512684, 0.0, nan, 0.0, 0.3586482142664524, 0.0, 0.0, 0.932991559490231, 0.8531869700877195, 0.9576580018404326, 0.0, 0.0005218922944777272, 0.2320260987876017, 0.0] |
| 0.5295 | 1.5 | 300 | 0.5953 | 0.3099 | 0.3828 | 0.8254 | [nan, 0.688127943821495, 0.8337533964365879, 0.3816687261591361, 0.6122309245181393, 0.275169272436292, nan, 0.38458181726058716, 0.5099270320719498, 0.0, 0.8218492292121656, 0.0, 0.0, nan, 0.0, 0.5296480642734824, 0.0, 0.0, 0.7004487118560727, 0.0, 0.32344315887311303, 0.37881456320379064, 0.0, nan, 0.0, 0.3269956579248066, 0.0, 0.0, 0.8295381179902048, 0.713331317957199, 0.9093917860629345, 0.0, 0.10250261771735047, 0.28430684363909287, 0.0] | [nan, 0.7915987861143731, 0.9348534855288303, 0.8486025817771651, 0.7142925007719628, 0.4003952093461972, nan, 0.4884602235477241, 0.7758410148699663, 0.0, 0.9535034975570381, 0.0, 0.0, nan, 0.0, 0.5914360263543541, 0.0, 0.0, 0.9178468516910011, 0.0, 0.35763977417592424, 0.42490274865128785, 0.0, nan, 0.0, 0.4074643770157593, 0.0, 0.0, 0.9326658306603458, 0.8378878055075136, 0.9718021228732235, 0.0, 0.13943222467463276, 0.37871885689734, 0.0] |
| 0.5934 | 2.0 | 400 | 0.5902 | 0.3208 | 0.3960 | 0.8377 | [nan, 0.7440051977102966, 0.8440706953976161, 0.4204044303634239, 0.6623077665370818, 0.3288867769184289, nan, 0.4128108829122007, 0.5091282974901243, 0.0, 0.8331495358783692, 0.0, 0.0, nan, 0.0, 0.5505261883327806, 0.0, 0.0, 0.6959945605841311, 0.0, 0.35329168342760847, 0.3475993707279731, 0.0, nan, 0.0, 0.3153479137980723, 0.06727728384748494, 0.0, 0.8455988502849932, 0.7326409266642869, 0.9108508924343527, 0.0, 0.07711873028792349, 0.294467970434247, 0.0] | [nan, 0.8538993076218143, 0.9242446392700366, 0.8299598947236496, 0.8094037363416755, 0.4622096456830656, nan, 0.49143335196901383, 0.7653462099379662, 0.0, 0.9432592666990843, 0.0, 0.0, nan, 0.0, 0.6229552142780551, 0.0, 0.0, 0.9340465915723392, 0.0, 0.4201814271018423, 0.409244881297325, 0.0, nan, 0.0, 0.467399441250391, 0.07036979247932595, 0.0, 0.8997076812997896, 0.8993207282328073, 0.9695394852110247, 0.0, 0.08241549150294109, 0.4213428371552432, 0.0] |
| 0.3099 | 2.5 | 500 | 0.5530 | 0.3282 | 0.3932 | 0.8466 | [nan, 0.753770869191511, 0.8508589285312731, 0.30929137630914455, 0.696552091628233, 0.3534600891380197, nan, 0.41878316671264637, 0.5044062846208077, 0.0, 0.829636976973291, 0.0, 0.0, nan, 0.0, 0.5874845884809904, 0.0, 0.0, 0.7289537883527962, 0.0, 0.45381927455818005, 0.37699029029640674, 0.0, nan, 0.0, 0.33039460633101664, 0.1407762830948505, 0.0, 0.8460394900511126, 0.7336367978399765, 0.9196628742640434, 0.0, 0.07678934827737838, 0.2617393791651902, 0.0] | [nan, 0.9214577936159987, 0.9215509251065084, 0.3135026319087605, 0.7789577731133583, 0.5102712446015579, nan, 0.5287390347790063, 0.8387330872450439, 0.0, 0.9548222519180118, 0.0, 0.0, nan, 0.0, 0.6954475593628251, 0.0, 0.0, 0.9171756444370913, 0.0, 0.6437216673967722, 0.42470264299550164, 0.0, nan, 0.0, 0.41822731832547705, 0.15381494772975504, 0.0, 0.9245154335130445, 0.8548593878663142, 0.9695560521164723, 0.0, 0.08164352581735944, 0.3377088371221928, 0.0] |
| 0.4902 | 3.0 | 600 | 0.5306 | 0.3521 | 0.4278 | 0.8486 | [nan, 0.7832578914656015, 0.8481228702494465, 0.6003682514159374, 0.6661039098735899, 0.37112717372069615, nan, 0.4808895792761392, 0.5497646350005633, 0.07778522847015998, 0.8521756663906362, 0.0, 0.0, nan, 0.0, 0.5862965602185906, 0.0, 0.0, 0.7129582804352333, 0.04332344213649852, 0.4221290717411931, 0.4220941361635153, 0.0, nan, 6.551791915088777e-05, 0.34768851272360923, 0.27066178293079235, 0.0, 0.8501056955438098, 0.7403010946360047, 0.9217270729782195, 0.00033824233716475097, 0.09757339379034737, 0.26892375127539414, 0.0] | [nan, 0.8481153536121097, 0.9269861687254204, 0.6551220391026444, 0.8848748211775757, 0.5658720495747022, nan, 0.6066372517868132, 0.783167840121695, 0.10274579273693533, 0.9524974880383263, 0.0, 0.0, nan, 0.0, 0.740102224393266, 0.0, 0.0, 0.83904422476826, 0.043419665712995034, 0.6538906416888569, 0.6353946883954729, 0.0, nan, 6.613319224918987e-05, 0.4549020030849882, 0.30669371196754563, 0.0, 0.9289640042399235, 0.862338618414884, 0.9761175507286175, 0.00033941884285207944, 0.11043458406271406, 0.38256096418990754, 0.0] |
| 0.3274 | 3.5 | 700 | 0.5292 | 0.3548 | 0.4247 | 0.8551 | [nan, 0.7972343936137507, 0.8511882023426258, 0.5682452125511459, 0.7871666724887183, 0.3491397313018397, nan, 0.46845668159136694, 0.538213455682926, 0.027450488844321885, 0.8390158319389796, 0.0, 0.0, nan, 0.0, 0.5995417277941603, 0.0, 0.0, 0.7156886545934115, 0.0, 0.47737212538023327, 0.39745158249812973, 0.0, nan, 1.3016596160104132e-05, 0.369974938397643, 0.31632183908045974, 0.0, 0.8555087190725136, 0.7534152192687842, 0.9278741823852267, 1.1971567527123083e-05, 0.14131094477038236, 0.21719307816260425, 0.0] | [nan, 0.9434186147057789, 0.9078035212947879, 0.7109051886201279, 0.8169525116815372, 0.5774467572961187, nan, 0.529389004412232, 0.8128240375691688, 0.02771099582437049, 0.957308287615835, 0.0, 0.0, nan, 0.0, 0.7279139313500054, 0.0, 0.0, 0.9118048374717936, 0.0, 0.6690860228285105, 0.47117357965005524, 0.0, nan, 1.3226638449837974e-05, 0.49332412870657016, 0.38645654548291464, 0.0, 0.9162333361330154, 0.9110963503880027, 0.9702473438983334, 1.201482629564883e-05, 0.15189240320528852, 0.27303887276152494, 0.0] |
| 0.3482 | 4.0 | 800 | 0.5299 | 0.3594 | 0.4329 | 0.8592 | [nan, 0.8016110491926162, 0.8608087132212014, 0.6332727179204778, 0.6918977917255988, 0.374578193250605, nan, 0.504478044741829, 0.5343956471045472, 0.03898950282616219, 0.8428180965929765, 0.00955670696603339, 0.0, nan, 0.0, 0.6097551895246858, 0.0, 0.0, 0.7332759700598123, 0.0, 0.3848409265906707, 0.402269687694107, 0.0, nan, 0.0, 0.36917384540878473, 0.31635790704175243, 0.0, 0.8579305971350791, 0.7629159580737742, 0.9266178634949399, 0.0006339304581263634, 0.21429928777202623, 0.27194364001007476, 0.0] | [nan, 0.8978664601691193, 0.9284523849533335, 0.8197142499060033, 0.8830900348247946, 0.5166203443247434, nan, 0.5989687362997582, 0.8347347340362433, 0.042768568897886876, 0.9646279867438649, 0.009664198220856039, 0.0, nan, 0.0, 0.6829733239607783, 0.0, 0.0, 0.9205716636105021, 0.0, 0.44822042949888513, 0.5753982102550147, 0.0, nan, 0.0, 0.4341570755174906, 0.38848494304883757, 0.0, 0.9241305658299162, 0.9139338441100537, 0.9703487534407705, 0.0006397895002433002, 0.30653561371272003, 0.3583336546565239, 0.0] |
| 0.1675 | 4.5 | 900 | 0.5478 | 0.3789 | 0.4495 | 0.8648 | [nan, 0.8187697966825732, 0.8569421945742838, 0.4718932164660768, 0.7866386815553748, 0.3287548955120063, nan, 0.4964917770446083, 0.5950793087721077, 0.07595230901526948, 0.8421749307256715, 0.12204177365747379, 0.0, nan, 0.0, 0.6086482629181411, 0.0, 0.0, 0.7500690267952569, 0.0911005094678564, 0.4767013423960269, 0.453753413689856, 0.0, nan, 0.0, 0.3583316856798218, 0.36664125840205114, 0.0, 0.8549937353467479, 0.7319751659947226, 0.9254258491318288, 0.04387746179182496, 0.3819026790595954, 0.30866132685552966, 0.0] | [nan, 0.9385279050957132, 0.9486276921695684, 0.5026867401930066, 0.8340007009130622, 0.38945418006655863, nan, 0.5705245206675059, 0.8193972465092911, 0.2067569277489561, 0.9652840976596805, 0.12783521959852825, 0.0, nan, 0.0, 0.7418238356784578, 0.0, 0.0, 0.8845420119213394, 0.0931636801686493, 0.6070050746430277, 0.5298397553908464, 0.0, nan, 0.0, 0.5043653661535806, 0.4953346855983773, 0.0, 0.9289149655054365, 0.9327421617311079, 0.9730100008885886, 0.054258955551150116, 0.47086644993639437, 0.4143526806616687, 0.0] |
| 0.2059 | 5.0 | 1000 | 0.5274 | 0.3811 | 0.4517 | 0.8633 | [nan, 0.8144690802407097, 0.8475277746013036, 0.5998484155552933, 0.7424634092861925, 0.3939451407757042, nan, 0.5213080789213621, 0.5368567211118558, 0.09527268953349592, 0.8520652557761808, 0.10390955769750473, 0.0, nan, 0.0, 0.6144961951535431, 0.0, 0.0, 0.7525293382321487, 0.04464818292278803, 0.46562685256810243, 0.40224828688784203, 0.0, nan, 0.004838617863038652, 0.37233234835426415, 0.3945462774383345, 0.0, 0.8537676770046194, 0.7493898431532764, 0.9292629799252149, 0.005978842726186204, 0.42302670511652374, 0.2930787723047587, 0.0] | [nan, 0.8903022233193745, 0.9432517251858782, 0.6369865427998496, 0.8508617855281039, 0.5337380590518894, nan, 0.6331154701621304, 0.8552568226698951, 0.1161584208528407, 0.9565452065320394, 0.10744725443621629, 0.0, nan, 0.0, 0.7461070139277757, 0.0, 0.0, 0.9095271906724941, 0.04564824574612257, 0.5884036285420369, 0.5123345126226647, 0.0, nan, 0.005621321341181139, 0.4697250477310235, 0.544078639413325, 0.0, 0.9335369659030622, 0.8812235246935758, 0.9712689697342719, 0.0060554724530070105, 0.48965457253759254, 0.3750585267239907, 0.0] |
| 0.2476 | 5.5 | 1100 | 0.5155 | 0.3905 | 0.4616 | 0.8675 | [nan, 0.8265214061771213, 0.8600576520403186, 0.594433602779792, 0.7544886232553043, 0.37068330207725636, nan, 0.5232258682361954, 0.5599808781073076, 0.053802970838094294, 0.8557553930716002, 0.260455134699391, 0.0, nan, 0.0, 0.6244918831761802, 0.0, 0.0, 0.7538912203861485, 0.12396326808463344, 0.46784958861953346, 0.42754713070206923, 0.0, nan, 0.04062048012978963, 0.37201807840244744, 0.37977998952331066, 0.0, 0.8599327444399775, 0.7500016346499637, 0.9344004094672925, 0.009735139739398683, 0.37616607541715935, 0.3258196784917233, 0.0] | [nan, 0.8924850660486451, 0.9502637392405893, 0.7175828737937084, 0.8435705394821994, 0.4972478091351577, nan, 0.6504603281220462, 0.833736890216246, 0.06851828419587498, 0.9623820214993706, 0.27838712682222533, 0.0, nan, 0.0, 0.7129317431761047, 0.0, 0.0, 0.9219604950526901, 0.1343622948351152, 0.5432565328030635, 0.5415291273792563, 0.0, nan, 0.07020699689173997, 0.48637751194623924, 0.5203619909502263, 0.0, 0.9261419513216803, 0.9051215067449078, 0.9733202538451536, 0.010161539339544999, 0.4358018114011721, 0.4336582369822795, 0.0] |
| 0.2348 | 6.0 | 1200 | 0.5316 | 0.3868 | 0.4558 | 0.8680 | [nan, 0.8211971818768324, 0.8584148743404417, 0.6162334518315056, 0.8107196876075417, 0.36629569046956345, nan, 0.5449900586325834, 0.5641672402900492, 0.04433827042522695, 0.8546484490871338, 0.15751752906718736, 0.0, nan, 0.0, 0.6140825756141491, 0.0, 0.0, 0.7459747398434599, 0.04483344688943629, 0.47622577353244944, 0.4313669657743392, 0.0, nan, 0.012837260006202483, 0.3694298425602043, 0.38668851213373223, 0.0, 0.8585649824802754, 0.7467589374043503, 0.9321821807468248, 0.060375777150965176, 0.3665556591625594, 0.3059348320459723, 0.0] | [nan, 0.9073706798766256, 0.9527124074951694, 0.6554901930066425, 0.8570312373032433, 0.44864601613864796, nan, 0.6733846283790578, 0.8349859394734455, 0.05871188156396305, 0.9598935361030441, 0.16531600763820967, 0.0, nan, 0.0, 0.7149273825551332, 0.0, 0.0, 0.8796542168176366, 0.04508357175124228, 0.684946866371015, 0.5529351497590728, 0.0, nan, 0.018067588122478672, 0.4894085667748929, 0.5081916055546887, 0.0, 0.9292596985061393, 0.8840071742684391, 0.9790930673525031, 0.07400231886147506, 0.44878388222630555, 0.40923951063396147, 0.0] |
| 0.1532 | 6.5 | 1300 | 0.5424 | 0.3892 | 0.4649 | 0.8641 | [nan, 0.8440444249361324, 0.853539056175834, 0.5820969163614399, 0.7915526508865678, 0.33789632579232626, nan, 0.5226035072400494, 0.5703361567336003, 0.03651161476290058, 0.8547911765703661, 0.2024593389994575, 0.0, nan, 0.0, 0.5951762806527621, 0.0, 0.0, 0.7569954432918928, 0.18283476047497252, 0.4906970637669994, 0.4038398553833889, 0.0, nan, 0.02376261597895594, 0.3822135016541395, 0.4062111801242236, 0.0, 0.8590400868675536, 0.7445913284904555, 0.9359189571939001, 0.011945571936318324, 0.3842186160621915, 0.29259119465953176, 0.0] | [nan, 0.9395732644517828, 0.9180173690689264, 0.649198286126081, 0.8380144908149969, 0.5814201226492461, nan, 0.577744170987061, 0.8556057191104537, 0.04087055548525876, 0.9567338333167978, 0.22161287317777467, 0.0, nan, 0.0, 0.6500542099055199, 0.0, 0.0, 0.8955991231335567, 0.22281282939316369, 0.7248106237725614, 0.5092608897497879, 0.0, nan, 0.028556312413200186, 0.4765508537650879, 0.571446403495085, 0.0, 0.9247917249195445, 0.9172034835306536, 0.9737715765117426, 0.013648842671857071, 0.565318082480728, 0.36082207325067067, 0.0] |
| 0.2185 | 7.0 | 1400 | 0.5632 | 0.3810 | 0.4618 | 0.8709 | [nan, 0.8474117548034618, 0.8688824915370316, 0.6552065621868585, 0.7522678771604911, 0.37353909328015567, nan, 0.5365654787347347, 0.592498198542391, 0.07646075049649838, 0.8538463869307471, 0.23429532385808832, 0.0, 0.0, 0.0, 0.6607725008476123, 0.0, 0.0, 0.74960441218283, 0.07219328703703703, 0.4595713976780367, 0.3781036565468046, 0.0, nan, 0.05457773228522763, 0.39299487530383664, 0.4051196438508626, 0.0, 0.8604615266166045, 0.7381663205688781, 0.9366448408437466, 0.006892002296414061, 0.4253769236778377, 0.2617610309951678, 0.0] | [nan, 0.9163777332940749, 0.9546309886190651, 0.7644226250156662, 0.8713795360888967, 0.4961842145774031, nan, 0.6119367566676186, 0.8549009483005254, 0.09255978742249779, 0.9664146335189796, 0.25144380792697124, 0.0, nan, 0.0, 0.7546495418965127, 0.0, 0.0, 0.9323558202699536, 0.07889625056467399, 0.6464347064237794, 0.4229080954744105, 0.0, nan, 0.09027180742014417, 0.4733191668374556, 0.5906693711967546, 0.0, 0.9238229108967523, 0.8670764391937118, 0.9699948240971162, 0.007500255315058783, 0.4736390027508073, 0.304657074710396, 0.0] |
| 0.1438 | 7.5 | 1500 | 0.5517 | 0.3975 | 0.4857 | 0.8739 | [nan, 0.8274787289578202, 0.8682251104965265, 0.6519304033750671, 0.8134267015717217, 0.39418015163423703, nan, 0.5661808224366917, 0.566490424375331, 0.09013833110218653, 0.8648792356449658, 0.3021073477885543, 0.0, 0.0, 0.0, 0.6372256392482658, 0.0, 0.0, 0.7688377835128294, 0.29360007348695605, 0.5090966205073165, 0.41193710138217193, 0.0, nan, 0.1092829817292887, 0.38358305564212597, 0.4100601421541826, 0.0, 0.8643641021688231, 0.7583952955140961, 0.935368692211271, 0.02093306511362255, 0.34996075695473966, 0.3218192076389453, 0.0] | [nan, 0.921843390549979, 0.9403607207107609, 0.76771642749718, 0.8523752910226872, 0.5351028188054714, nan, 0.6925748209164733, 0.8508049040883685, 0.11501961280526382, 0.962947901853646, 0.34636253551301754, 0.0, nan, 0.0, 0.7338234067649196, 0.0, 0.0, 0.9112829916403561, 0.3368995633187773, 0.6509246088193457, 0.5184273296300447, 0.0, nan, 0.18211758481581905, 0.5177731994347784, 0.561710095178655, 0.0, 0.9224892965320445, 0.9205636368193705, 0.9792627526264821, 0.027411826193522806, 0.3926913333260848, 0.4147410226890895, 0.0] |
| 0.1448 | 8.0 | 1600 | 0.5863 | 0.3846 | 0.4756 | 0.8619 | [nan, 0.8285810036010995, 0.8425236422549568, 0.54409630128129, 0.7690224846961036, 0.3341831302668693, nan, 0.5525947152625418, 0.5797724617417785, 0.1321213041332979, 0.8653811387867731, 0.31424783788902216, 0.0, 0.0, 0.0, 0.6172061947474988, 0.0, 0.0, 0.7568663064400176, 0.05340165818785874, 0.4628885293296987, 0.4148772374679798, 0.0, nan, 0.09201743557250074, 0.40030361221113503, 0.41565400654679163, 0.0, 0.8627222941773598, 0.7587543062435463, 0.9358024466569654, 0.058416223547051546, 0.42476868061585576, 0.29020683480277026, 0.0] | [nan, 0.9298402517137067, 0.9077748082418758, 0.6459651898734177, 0.8549772536626041, 0.513064579782187, nan, 0.6949446359405856, 0.8449015763141184, 0.2202328229786157, 0.9635137822079214, 0.34777141260304595, 0.0, nan, 0.0, 0.7390656833426662, 0.0, 0.0, 0.9374391639436931, 0.05421623249510616, 0.5601510087760315, 0.5888052891926937, 0.0, nan, 0.11140797566298526, 0.5131263011423086, 0.5785301919176159, 0.0, 0.9210679706097176, 0.8946451067274549, 0.9804746468613496, 0.08823688431524501, 0.49464516760353583, 0.35886521502029844, 0.0] |
| 0.1743 | 8.5 | 1700 | 0.5965 | 0.3823 | 0.4690 | 0.8674 | [nan, 0.8114508570420983, 0.8575183217802477, 0.5578572053547992, 0.7936370815877491, 0.3693279684750451, nan, 0.5376387406939722, 0.555776058752852, 0.11493196832480203, 0.8669745949912785, 0.32189888152234425, 0.0, 0.0, 0.0, 0.623012118067214, 0.0, 0.0, 0.7627747448921709, 0.03247086042288994, 0.4779047144906555, 0.41786452911050836, 0.0, nan, 0.031317787418655096, 0.39853711367715, 0.42251074538449546, 0.0, 0.8589178127354131, 0.7447605563349396, 0.9359531086030232, 0.00975403551886743, 0.4174620599498881, 0.31231359405346676, 0.0] | [nan, 0.9430963185560249, 0.9299827727651977, 0.6492413679659105, 0.8299960787206211, 0.5104850830863275, nan, 0.6075864648650388, 0.8618021198947728, 0.16620270783246868, 0.9598527077946548, 0.37029015881887195, 0.0, nan, 0.0, 0.7295283142506523, 0.0, 0.0, 0.9192409682296765, 0.03339105556392109, 0.5745460630909547, 0.5731474218387308, 0.0, nan, 0.036664241782950864, 0.4929854272061441, 0.5920424403183023, 0.0, 0.9355143299421742, 0.8804529963803737, 0.9796819455370513, 0.011158769922083851, 0.5416480923749362, 0.4101153458447403, 0.0] |
| 0.1297 | 9.0 | 1800 | 0.5987 | 0.3823 | 0.4752 | 0.8690 | [nan, 0.8497441274348271, 0.8663967321828038, 0.3939597846738442, 0.7517827792707723, 0.36228590938428584, nan, 0.5669849704201076, 0.5934392867525923, 0.15914107044325915, 0.8666539203663902, 0.22562794987251125, 0.0, 0.0, 0.0, 0.6007060850673309, 0.0, 0.0, 0.7570039243252329, 0.1404517002818551, 0.46523950960168053, 0.42069989001765135, 0.0, nan, 0.07485775248933144, 0.4140864325679131, 0.4245368900179951, 0.0, 0.866773453924494, 0.7622557299541883, 0.9395483560035729, 0.06816082025895225, 0.3936561567488198, 0.2704748659774187, 0.0] | [nan, 0.8997314198752051, 0.9474687235161634, 0.40930293583155786, 0.9004136720551963, 0.5203384469840097, nan, 0.6671037337055107, 0.8546148532192674, 0.2189674807035303, 0.9660557526882378, 0.24212891807554376, 0.0, nan, 0.0, 0.7642643537106978, 0.0, 0.0, 0.8878934524548124, 0.17333232946845353, 0.7268798574571658, 0.5223333920309924, 0.0, nan, 0.11136829574763574, 0.5292804211116744, 0.6257762521454205, 0.0, 0.9251615753208382, 0.9111993969455515, 0.967934001464916, 0.09528658364421697, 0.5421808574255488, 0.3208545177122523, 0.0] |
| 0.1954 | 9.5 | 1900 | 0.5744 | 0.3918 | 0.4837 | 0.8734 | [nan, 0.844664360128614, 0.8672808293940846, 0.5972967943838281, 0.748119348487742, 0.4048434319633421, nan, 0.5637276127950774, 0.562888500166704, 0.18492870021307983, 0.8727070817891117, 0.20440343580848083, 0.0, 0.0, 0.004572324373754858, 0.6105063747895116, 0.0, 0.016036676646706587, 0.7653435049863937, 0.17963140105860595, 0.5033941690418074, 0.4149990275914687, 0.0, nan, 0.08071291832104367, 0.40104252651675515, 0.42368350154394546, 0.0, 0.8652029299967213, 0.7555529404955967, 0.939615984015598, 0.029941952824784727, 0.36865941215056564, 0.3291356554089041, 0.0] | [nan, 0.8996661883481046, 0.9546770130760198, 0.6914478631407445, 0.8602945251689046, 0.5424500180531181, nan, 0.6671021248697848, 0.8599948363326797, 0.2855244843730229, 0.9587189056706846, 0.22776070047971683, 0.0, nan, 0.004572324373754858, 0.725685963804463, 0.0, 0.016036676646706587, 0.9138235145432068, 0.21207649450383978, 0.658604005571771, 0.5328957689660141, 0.0, nan, 0.12045499636267443, 0.5293041517900482, 0.6379778436573569, 0.0, 0.9174103986570031, 0.9089732199635531, 0.9764544111393864, 0.038780855575780517, 0.4460765659487023, 0.40790097002880893, 0.0] |
| 0.1476 | 10.0 | 2000 | 0.6046 | 0.4007 | 0.5001 | 0.8732 | [nan, 0.8339844158564095, 0.8618893405438733, 0.6234776668729248, 0.7368422366459226, 0.36780464924129314, nan, 0.5729445145018915, 0.5874305693434899, 0.14640558426253286, 0.8743150824872861, 0.3174642328564381, 0.0, 0.0, 0.0, 0.6233650662251655, 0.0, 0.0, 0.7641416603272588, 0.1726332282362309, 0.5061025578435304, 0.4388798340494888, 0.0, nan, 0.17569248696911957, 0.411042586297857, 0.41699861254084053, 0.0, 0.8664827219693931, 0.763368194006914, 0.9371504311699966, 0.09780307875228564, 0.37286891510469083, 0.3529164063423448, 0.0] | [nan, 0.9182071121671969, 0.943946151265974, 0.949085098383256, 0.8610308589268696, 0.41963675446919635, nan, 0.6578698210572465, 0.8464576544390094, 0.20435277742629382, 0.9609330248346352, 0.35964789716361606, 0.0, nan, 0.0, 0.7177391490831974, 0.0, 0.0, 0.9059592909698382, 0.20240927571148923, 0.6286286650883264, 0.5737461379608433, 0.0, nan, 0.29513921036968455, 0.5311422006968191, 0.5814947729755032, 0.0, 0.9239163369952466, 0.9192959784920482, 0.9771878732260233, 0.130456983918155, 0.5357550585497918, 0.4605034124522835, 0.0] |
| 0.0882 | 10.5 | 2100 | 0.5877 | 0.4111 | 0.4907 | 0.8786 | [nan, 0.8447078385210316, 0.8761783230639205, 0.6790927035551466, 0.7921316616004757, 0.40848896481978597, nan, 0.5884832049273298, 0.5833752853142, 0.2103240954854317, 0.8746874654440124, 0.31704166106844983, 0.0, nan, 0.0038538162578790947, 0.6114377965852943, 0.0, 0.0, 0.7589331936701121, 0.10382944578179906, 0.47475485097911035, 0.4237992404870186, 0.0, nan, 0.05984116546107542, 0.41775094867462537, 0.4237080578006368, 0.0, 0.8663147491494264, 0.7606818575616426, 0.9410394952349884, 0.018366015353344412, 0.4075697766219291, 0.2980607511362481, 0.0] | [nan, 0.9263497444179121, 0.9545126144321117, 0.9465628524877804, 0.8488732259961237, 0.4920138043377864, nan, 0.7003382577113507, 0.8328716270436609, 0.37403517651524737, 0.962403660502817, 0.35717945135298773, 0.0, nan, 0.0038538162578790947, 0.7445462452194012, 0.0, 0.0, 0.9230145271217365, 0.11301008884204186, 0.6291149645365636, 0.5425616725631133, 0.0, nan, 0.07823556643079162, 0.5205928354924655, 0.5938523950694337, 0.0, 0.9265777182874877, 0.900698395722918, 0.9746461083084035, 0.0205453529655595, 0.4771074119578572, 0.36729168617557467, 0.0] |
| 0.143 | 11.0 | 2200 | 0.5930 | 0.3980 | 0.4882 | 0.8780 | [nan, 0.8427623474506273, 0.8682372747495882, 0.7318951503895347, 0.8107472662343904, 0.3998179949074298, nan, 0.5727623603219433, 0.5722402712893357, 0.22476030277544154, 0.8691576705814467, 0.25829443409457226, 0.0, 0.0, 0.0, 0.6175602388961351, 0.0, 0.0, 0.7598021112298146, 0.07576313498658263, 0.4720583356173276, 0.44584995736527955, 0.0, nan, 0.07228730102646337, 0.4222553166140466, 0.43237091416150597, 0.0, 0.8654320091793174, 0.7444121537352406, 0.9384177071214866, 0.010842584763099752, 0.3963729506514404, 0.3332296148785605, 0.0] | [nan, 0.9243319802043691, 0.9619321628157846, 0.8164537379370849, 0.8682087587463074, 0.4761421746030413, nan, 0.6603771915359152, 0.8419150227829376, 0.4226875869922814, 0.9691958578864573, 0.2873527083042243, 0.0, nan, 0.0, 0.7502650923951247, 0.0, 0.0, 0.9171434742845588, 0.07928775786779099, 0.5798589337835377, 0.542399987193238, 0.0, nan, 0.10553534819125719, 0.5352217200427152, 0.6579809642689968, 0.0, 0.9287904416728774, 0.8785359590710213, 0.9795815400494898, 0.011939733631301026, 0.5249475389516488, 0.41294253088833927, 0.0] |
| 0.1148 | 11.5 | 2300 | 0.5959 | 0.4104 | 0.4838 | 0.8762 | [nan, 0.8320962616181409, 0.866696457686687, 0.767753663768965, 0.8036321135613248, 0.40920292765122857, nan, 0.5796238901770943, 0.5771572508771601, 0.16306651634723787, 0.8667379194148915, 0.30482068147284724, 0.0, nan, 0.0, 0.6220760930036738, 0.0, 0.0, 0.758013892655171, 0.10440978829290362, 0.46978988415956846, 0.40607430127382554, 0.0, nan, 0.055563992750761876, 0.41288545925428366, 0.4344954757922724, 0.0, 0.8641718734724644, 0.7521097062956364, 0.9385787477415113, 0.011883712883830031, 0.40791510248757923, 0.3130635346242427, 0.0] | [nan, 0.9385921712513765, 0.9424299707443241, 0.8725286690061411, 0.8592956615481677, 0.5008136498366823, nan, 0.6738214272786142, 0.8585852947128233, 0.22877388333544224, 0.9707787714027096, 0.3410297610730753, 0.0, nan, 0.0, 0.7393754542313512, 0.0, 0.0, 0.9281640493949713, 0.11815238668875169, 0.581594451854878, 0.5192533657771303, 0.0, nan, 0.0725877918127108, 0.5052347719158208, 0.6323607427055703, 0.0, 0.9327434420992087, 0.8904359797731676, 0.9774393889723649, 0.01310216807540505, 0.5186522131495113, 0.38299199629835684, 0.0] |
| 0.1352 | 12.0 | 2400 | 0.5862 | 0.3998 | 0.4873 | 0.8769 | [nan, 0.8473441017458369, 0.8687143142848679, 0.7042935674605667, 0.8127306136626816, 0.4267320776579287, nan, 0.5795264599532142, 0.5785691839386137, 0.15629923139351698, 0.8725951022287428, 0.3148653446264645, 0.005758157389635317, 0.0, 0.0, 0.6319408011432055, 0.0, 7.48502994011976e-05, 0.7593087751172442, 0.10541613101559444, 0.4695284573307169, 0.42746657212930483, 0.0, nan, 0.08974104620733309, 0.40918663545435197, 0.43112610526780004, 0.0006285355122564425, 0.8588805310834182, 0.7247993495882862, 0.9398895561411722, 0.06332603400833277, 0.40195160961001836, 0.3114607517616148, 0.0] | [nan, 0.9155287580703328, 0.9575764344751716, 0.856284857124953, 0.8472830451130483, 0.5376672012628786, nan, 0.6646092339126483, 0.8607624085019084, 0.20713653043148172, 0.9625882044567365, 0.3614293698477016, 0.005758157389635317, nan, 0.0, 0.7534342868716715, 0.0, 7.48502994011976e-05, 0.9167498494896436, 0.11359734979671736, 0.5876830390762279, 0.5655674195975475, 0.0, nan, 0.1256398386350109, 0.5185066931299687, 0.6101419878296146, 0.0006297229219143577, 0.9369267518124398, 0.8400322879213653, 0.9806659193151542, 0.07583157616498759, 0.5137051091081078, 0.3899890382888714, 0.0] |
| 0.1 | 12.5 | 2500 | 0.6124 | 0.4003 | 0.4906 | 0.8746 | [nan, 0.8393813668701683, 0.8618141158473944, 0.6956031403608547, 0.7649827580740517, 0.3942994002736122, nan, 0.5773676484433024, 0.5748858805050278, 0.19648666232921275, 0.8711213254008829, 0.28047548105472614, 0.00038387715930902113, 0.0, 0.0, 0.6185566518899852, 0.0, 0.027317602183912344, 0.7598041960294646, 0.15221883233948136, 0.4757253581470973, 0.4264427392879843, 0.0, nan, 0.12259253388285646, 0.42583584754633336, 0.44177180529519555, 0.0, 0.8666757224466872, 0.7489125157033966, 0.9424918326347549, 0.016033407227921844, 0.4151325599497332, 0.31388195606851294, 0.0] | [nan, 0.9072000849526864, 0.9549807386752227, 0.84549285624765, 0.8546355481269328, 0.46917786939618616, nan, 0.665997659144019, 0.8708873832069165, 0.2674933569530558, 0.9676933761377319, 0.3134693307251642, 0.00038387715930902113, nan, 0.0, 0.7730392098459485, 0.0, 0.027339071856287426, 0.9105400909036596, 0.16528384279475983, 0.6290490089433816, 0.5209070389165479, 0.0, nan, 0.16366642417829508, 0.5371244889814145, 0.6545170853487283, 0.0, 0.931858751433951, 0.902081169123538, 0.9734126268937102, 0.018683054889733933, 0.481293423069814, 0.40387570851763516, 0.0] |
| 0.1051 | 13.0 | 2600 | 0.5907 | 0.4080 | 0.5025 | 0.8772 | [nan, 0.839346538293538, 0.8705390646369748, 0.7092229826797409, 0.7896692559152996, 0.4128814155744173, nan, 0.588169874893985, 0.606614710711823, 0.20876232201533407, 0.8751643542449614, 0.334043830681477, 0.011772232885476647, 0.0, 0.04836498923046798, 0.6166429623226571, 0.0, 0.031123164687178528, 0.7582015428502734, 0.1235572113560316, 0.4834633504215781, 0.4260706421487994, 0.0007942811755361397, nan, 0.1352318974122706, 0.42553064359097825, 0.4403052064631957, 0.009208103130755065, 0.8673138312050992, 0.7599544866182125, 0.9391072396315598, 0.012793603451462542, 0.39450023124740335, 0.3369132617783203, 0.0] | [nan, 0.9206268846291491, 0.9426922683628182, 0.9425895318962276, 0.8549309984010681, 0.528078056644807, nan, 0.7168529564367506, 0.8532750908875227, 0.3014677970390991, 0.9663664561150803, 0.388675422663127, 0.011772232885476647, nan, 0.048401319442176426, 0.7309341975146844, 0.0, 0.031137724550898204, 0.9217302486752791, 0.13483662099081464, 0.6370483395433314, 0.5191477099908752, 0.0007942811755361397, nan, 0.1863501091197672, 0.5223726363705006, 0.612264003744734, 0.009445843828715366, 0.9253587934291817, 0.9012266253737758, 0.9821730056834527, 0.015177729317978386, 0.547182325247627, 0.4260855674475738, 0.0] |
| 0.0868 | 13.5 | 2700 | 0.5981 | 0.4158 | 0.5078 | 0.8760 | [nan, 0.8402386099445845, 0.8688193406598229, 0.683493036323769, 0.8140580249216932, 0.39290409952896677, nan, 0.5939072237286935, 0.5836577532012318, 0.20439788073138668, 0.8742768017030499, 0.3201047234646013, 0.2630714826976509, 0.0, 0.0007798030997173213, 0.6219356838734991, 0.0, 0.07758153443603402, 0.7617670161697484, 0.16171143884981024, 0.47109287878114675, 0.4107796658502182, 0.038847440185232826, nan, 0.11356285621611603, 0.41765504783680923, 0.44460421956096796, 0.0049504950495049506, 0.869711105990263, 0.7686378952749437, 0.9419531763809376, 0.03723473940208135, 0.4151841732893109, 0.30995537296084896, 0.0] | [nan, 0.9164526185355453, 0.9454859708083166, 0.7984239879684171, 0.8600215774544354, 0.5570548506909166, nan, 0.6883516271362322, 0.860985702223866, 0.3246235606731621, 0.967159750147084, 0.34167015975036097, 0.2665387076135637, nan, 0.0007838270355008328, 0.7648838954880679, 0.0, 0.07767589820359282, 0.9007383050006205, 0.19697334738744166, 0.6366801695157186, 0.5165511390013927, 0.03997881916865237, nan, 0.16207922756431453, 0.5367124381114695, 0.6175066312997347, 0.005037783375314861, 0.9314196620606581, 0.9142935787141089, 0.9742806323336796, 0.04695394116339563, 0.5072466919639459, 0.38669639365212266, 0.0] |
| 0.125 | 14.0 | 2800 | 0.5948 | 0.4057 | 0.4961 | 0.8773 | [nan, 0.8445778236245067, 0.8701865682806943, 0.6971433362679746, 0.8162667275611865, 0.4046509945596555, nan, 0.5918829139806143, 0.5924214860206817, 0.19658684496967813, 0.8716491091461146, 0.2758869603628646, 0.006397952655150352, 0.0, 0.0, 0.6393531382804838, 0.0, 0.0625022992311371, 0.759156141783712, 0.15955400594740152, 0.47517050211755907, 0.42159514639302753, 0.0, nan, 0.08517434367981148, 0.4140500715960596, 0.4465206321149759, 0.03508771929824561, 0.8659773213342542, 0.74311996863367, 0.9432031798216328, 0.04120414022243527, 0.39472315188929546, 0.32904448783392143, 0.0] | [nan, 0.9237439310890286, 0.9530585161870283, 0.8140979446045871, 0.8535233382797294, 0.5323257174365132, nan, 0.6811094531165159, 0.8634838007382649, 0.3199417942553461, 0.9670846260596477, 0.2981579805318802, 0.006397952655150352, nan, 0.0, 0.7456542718596976, 0.0, 0.06358532934131736, 0.9128926482009991, 0.19673241981629272, 0.6329817342383359, 0.5391918933196728, 0.0, nan, 0.12238608557635076, 0.5283657113270842, 0.6621937899828366, 0.036523929471032744, 0.9295248266234878, 0.8648070940592267, 0.9750763458226046, 0.04686082625960435, 0.49498222304371936, 0.41401666841099255, 0.0] |
| 0.1189 | 14.5 | 2900 | 0.6029 | 0.4095 | 0.4986 | 0.8801 | [nan, 0.8514875831701844, 0.8716700148163586, 0.6962603196302648, 0.817032646324371, 0.4175149712810631, nan, 0.5798137255042309, 0.5935370592841545, 0.19969243294585914, 0.8764088194545395, 0.3221499629424721, 0.0014075495841330773, 0.0, 0.04305845511482255, 0.6216180494705874, 0.0, 0.00396676895441958, 0.7647583732615805, 0.19077376449616176, 0.474612932863853, 0.4373168376281614, 0.0226503514709711, nan, 0.08409752884387768, 0.4126249821002889, 0.43868479539809124, 0.060298507462686564, 0.8677737696127611, 0.7584694831849964, 0.941287345865792, 0.030366344888549588, 0.40450211257045576, 0.31937043297977213, 0.0] | [nan, 0.9355140154693877, 0.9556994604154749, 0.869700933700965, 0.8533749880715598, 0.5157151693774333, nan, 0.6516146677553122, 0.8515375866135414, 0.2793243072251044, 0.9648994949946536, 0.35933351963113036, 0.0014075495841330773, nan, 0.0431104869525458, 0.7564724244337746, 0.0, 0.003967065868263473, 0.9110074772625958, 0.21835566932690859, 0.6158135917742547, 0.5864632525973714, 0.023034154090548053, nan, 0.10807486277362609, 0.5283915993398557, 0.6282727414573256, 0.06360201511335013, 0.9286722968626905, 0.897458463418008, 0.9807648187204023, 0.03632081989174642, 0.49860285083665856, 0.389474003117753, 0.0] |
| 0.1579 | 15.0 | 3000 | 0.6139 | 0.4104 | 0.5025 | 0.8781 | [nan, 0.849444582971672, 0.8696833699733478, 0.6773222386208271, 0.8047162061925254, 0.4054382649905792, nan, 0.5827963265570214, 0.5747304626394523, 0.16491521875654178, 0.8732081803169777, 0.3306558724938334, 0.10518253765636967, 0.0, 0.0, 0.6177997065224885, 0.0, 0.009308040285198355, 0.7623881099549857, 0.16173743617307176, 0.4654705881500398, 0.4455329770786528, 0.04107142857142857, nan, 0.1581649569472214, 0.4217185969419198, 0.4390528453528065, 0.07096774193548387, 0.869469463943758, 0.7565380479336926, 0.9414248900266803, 0.027585922094547858, 0.40787947922657036, 0.29824110529724396, 0.0] | [nan, 0.9278780653128941, 0.9521817831140148, 0.8789341552826169, 0.858512239100531, 0.5049527120261756, nan, 0.681092560341395, 0.8562825782051372, 0.24920916107807162, 0.9680824699166817, 0.3652368310744725, 0.1054382597568778, nan, 0.0, 0.7474056688072629, 0.0, 0.0093562874251497, 0.9157413152077503, 0.181245294383376, 0.6323152874236465, 0.5660380680999568, 0.042626423087106166, nan, 0.2082137424773494, 0.5609716634126873, 0.6341706974567015, 0.07619647355163728, 0.9252003810402697, 0.8931198320062608, 0.9758770795859076, 0.032103615861973675, 0.47040979417872636, 0.3595716670063512, 0.0] |
| 0.1238 | 15.5 | 3100 | 0.6100 | 0.4102 | 0.5073 | 0.8781 | [nan, 0.8457976228705318, 0.8722143139864557, 0.6467625512114819, 0.8036633568527239, 0.41250410180941544, nan, 0.5999855146454744, 0.5921621095650481, 0.23485276321097218, 0.8691890048996691, 0.32643325227712977, 0.0, 0.0, 0.01147476854870257, 0.6283268551760971, 0.0, 0.0050652312063100445, 0.7637068579243205, 0.17657812266825848, 0.46699798790003505, 0.4367257117024686, 0.04856343757945589, nan, 0.15772383135375426, 0.41318684668694494, 0.4438589727113053, 0.07359813084112149, 0.8675842693091819, 0.7587597375723992, 0.9415232642305689, 0.025147107805040526, 0.39033231029738596, 0.3145995284961833, 0.0] | [nan, 0.9251627512810134, 0.9506669158712923, 0.91784488971049, 0.8460945765914206, 0.5298833684599431, nan, 0.6797097660350646, 0.8493046493939669, 0.36834113627736303, 0.9701059208804543, 0.36888128172884355, 0.0, nan, 0.011496129854012214, 0.7569072951044286, 0.0, 0.005071107784431138, 0.9027535352699765, 0.21380816142147266, 0.6047783350642575, 0.5737573438775674, 0.050569234842467566, nan, 0.24027511407975663, 0.5263184009837445, 0.6466531440162272, 0.07934508816120907, 0.9338802370444713, 0.8961272132961623, 0.9736440615425396, 0.02857426063762683, 0.48121731377686927, 0.3954671396544031, 0.0] |
| 0.0913 | 16.0 | 3200 | 0.6129 | 0.4089 | 0.5001 | 0.8791 | [nan, 0.8433245410968181, 0.8749301149814263, 0.6736973846465316, 0.8106924973311235, 0.42072894436005037, nan, 0.5891124858838177, 0.5899950432018972, 0.17118658567884334, 0.870627521516266, 0.31652784767225706, 0.014843250159948817, 0.0, 0.011985799433280136, 0.6317023849232822, 0.0, 0.005123123235420601, 0.7607780371005064, 0.15899869334983838, 0.46899700402072525, 0.43072967204752066, 0.05788570026980623, nan, 0.0758988546010011, 0.4210143360012586, 0.4484633727797464, 0.0855457227138643, 0.8688803083222512, 0.7645111130933738, 0.9428612870848608, 0.04721638052528865, 0.4006294686544006, 0.32925395958492626, 0.0] | [nan, 0.9427984325126068, 0.9457704150850859, 0.9318445293896478, 0.8430242273391931, 0.5337985160056986, nan, 0.6634444368471646, 0.8471763811065599, 0.25319498924459066, 0.9685638356725913, 0.3588561315262447, 0.014843250159948817, nan, 0.01201868121101277, 0.7220401987299394, 0.0, 0.005127245508982036, 0.9132189454624, 0.19147718717060683, 0.6253456516068062, 0.5663598379944611, 0.06248345247550966, nan, 0.10087957145691423, 0.5080500933047127, 0.6279606802933375, 0.09130982367758186, 0.9267272266243383, 0.9030596472465682, 0.9757415321776997, 0.058064651780296885, 0.49409065704065325, 0.41510182272874996, 0.0] |
| 0.0898 | 16.5 | 3300 | 0.6204 | 0.4163 | 0.5148 | 0.8795 | [nan, 0.8455456130477345, 0.8700194472920203, 0.6946681417690085, 0.7814703425745662, 0.4194883785274432, nan, 0.5948668398268399, 0.5947327593268031, 0.2608856088560886, 0.8760276679112057, 0.33157301494531183, 0.04423986702467715, 0.0, 0.060933867217773184, 0.6305274279734492, 0.0, 0.0093937125748503, 0.7650287950362539, 0.15797616907629491, 0.49225995343513235, 0.4316986067360807, 0.08065302144249513, nan, 0.10073918830892974, 0.42336167879328546, 0.45230667838312827, 0.09771796372147455, 0.8706074274990901, 0.764482163206971, 0.9431735200640009, 0.021280588296084372, 0.3626886537189586, 0.3441856620486082, 0.0] | [nan, 0.9303200413178975, 0.9533137101604152, 0.8545752913898985, 0.8437101386949433, 0.5160219324393541, nan, 0.6908654329578042, 0.8613206428068021, 0.5367581930912312, 0.9685021849269234, 0.3699175632248149, 0.04427383237364044, nan, 0.06162840066625298, 0.742997390775976, 0.0, 0.0093937125748503, 0.9099798706759868, 0.18526577322692367, 0.6223205540269827, 0.5315814750188099, 0.08763568970082075, nan, 0.14114145889822102, 0.5579686539312027, 0.6425027305351849, 0.10516372795969774, 0.9288878546603543, 0.9017738861456224, 0.9771698002382623, 0.023504003940863025, 0.5246648472921401, 0.4357913639343179, 0.0] |
| 0.0835 | 17.0 | 3400 | 0.6076 | 0.4267 | 0.5170 | 0.8812 | [nan, 0.8460581108674733, 0.8728925994222388, 0.7259225550893365, 0.8050173172700429, 0.4131630100229764, nan, 0.5899979600438697, 0.5868946143041311, 0.17549615228837587, 0.8813630545199417, 0.31715007715186616, 0.26487305797650623, 0.0, 0.02885711493990815, 0.6199329850405956, 0.0, 0.05075924336827967, 0.7650305346258475, 0.2522327454642623, 0.4966318686177794, 0.4529148940449969, 0.09051201164765833, nan, 0.11675144897995356, 0.41781253880389246, 0.44865114134194145, 0.0834319526627219, 0.8711822823395454, 0.7707952870250171, 0.9415349599030014, 0.04012992959617092, 0.380938021240706, 0.34680758963243546, 0.0] | [nan, 0.934615116751332, 0.9551586879698815, 0.8778022778543677, 0.847407642619348, 0.5098486057115028, nan, 0.6677183089527686, 0.8513491825356397, 0.2741364038972542, 0.9642315438694049, 0.3302594196823623, 0.26833013435700576, nan, 0.028936281393905746, 0.7450526014797517, 0.0, 0.05091691616766467, 0.9037363334298438, 0.3446845354615269, 0.6136291819045416, 0.5767253109641891, 0.09875562615832671, nan, 0.17424773493816548, 0.5516800241621452, 0.6072086128881261, 0.08879093198992444, 0.9307757794899546, 0.9106089680212183, 0.977723536502164, 0.04764779738196935, 0.4395202939993259, 0.45566290810340365, 0.0] |
| 0.1225 | 17.5 | 3500 | 0.6127 | 0.4209 | 0.5094 | 0.8803 | [nan, 0.8435910344871903, 0.8695606625536554, 0.7182204261354754, 0.8113890595231291, 0.41067953112018174, nan, 0.5893884210873966, 0.5981439431335769, 0.20692640692640693, 0.8743069522555236, 0.3310757516800404, 0.02431222008957134, 0.0, 0.05927826680092247, 0.6423157082207642, 0.0, 0.03770941894996358, 0.7670040697310434, 0.2005599819603127, 0.49211383058462227, 0.4313709240924947, 0.17281956622058145, nan, 0.10746176945719071, 0.42519060537428344, 0.4439287823812337, 0.10675129832660127, 0.8718269880223831, 0.7673723020642185, 0.9428251760694473, 0.019676852535376745, 0.3937994425451692, 0.3096574358286229, 0.0] | [nan, 0.941440844451769, 0.9518331075587977, 0.8547672014036847, 0.8460041496386881, 0.5016578080435737, nan, 0.6747062869278075, 0.8455295899071238, 0.28729596355814246, 0.9687087761673732, 0.3665525592659867, 0.02431222008957134, nan, 0.05960351415787583, 0.7435573612285991, 0.0, 0.037780688622754494, 0.9093148676657797, 0.24107062189429304, 0.6379136375494052, 0.5333055853490644, 0.19830553349218957, nan, 0.14341644071159315, 0.5528298833960759, 0.6567015134966453, 0.11649874055415617, 0.9343690296337485, 0.9059036394001762, 0.9789374388467827, 0.02273805876451541, 0.4746719145836278, 0.3813491167284525, 0.0] |
| 0.0675 | 18.0 | 3600 | 0.6238 | 0.4252 | 0.5210 | 0.8804 | [nan, 0.8516800972707018, 0.8692050916694961, 0.7237727574140093, 0.8120214843516569, 0.4163426716790552, nan, 0.584305343639795, 0.5663565649422807, 0.2131134939851647, 0.877850785414902, 0.329142403388036, 0.011132437619961612, 0.0, 0.01538113548970821, 0.6324650255716304, 0.0, 0.03667308087507239, 0.7676525198000297, 0.21727817417024975, 0.47684216325461737, 0.448149829738933, 0.19549897704023642, nan, 0.11738475848908238, 0.4253323951224734, 0.45890013826323534, 0.17288491854965843, 0.8713127069429203, 0.7603059372569354, 0.9432218968661672, 0.0798467739092629, 0.4126448875396716, 0.3191077401015053, 0.0] | [nan, 0.937414418098232, 0.9542076949429747, 0.8635167314199774, 0.8493716160213224, 0.507255674137019, nan, 0.6709400024936953, 0.8606577395697409, 0.3508161457674301, 0.9685920072053799, 0.3764496297331284, 0.011132437619961612, nan, 0.015545902870766518, 0.7573183372451836, 0.0, 0.03673278443113773, 0.8972149839379024, 0.28040204788435474, 0.6219947137084271, 0.6004434341332223, 0.22769393698702675, nan, 0.16190728126446663, 0.5229119699699052, 0.6421594632547979, 0.20717884130982367, 0.9342332096157383, 0.8874954394935679, 0.9775438106794289, 0.10205393455524117, 0.5202287627890794, 0.4062580904589046, 0.0] |
| 0.0782 | 18.5 | 3700 | 0.6283 | 0.4350 | 0.5161 | 0.8812 | [nan, 0.859142309112726, 0.8747387791460813, 0.6980637889746848, 0.7809171034188555, 0.41493542744772377, nan, 0.5982203562745756, 0.5623211540467369, 0.18670331421287445, 0.8781650062939121, 0.31771374086237947, 0.09946373850868233, nan, 0.04330017554125219, 0.6316368038986236, 0.0, 0.014273154123875264, 0.7651111984484834, 0.2139290766652005, 0.48905287453472684, 0.4246307700378346, 0.13867772402672196, nan, 0.12433210845168338, 0.4270575047265305, 0.45640347979297435, 0.16459122902003248, 0.871131405629457, 0.7615735652261609, 0.9429281489231311, 0.020402252272600452, 0.40892736979049843, 0.315161139530489, 0.0] | [nan, 0.9271920999518004, 0.9586528456396424, 0.8950859286878055, 0.8543684344094141, 0.5001077589222989, nan, 0.6957442272963114, 0.865249216727491, 0.29653296216626596, 0.965861001657221, 0.34918028969307435, 0.09968010236724248, nan, 0.04350240047029622, 0.7427769768744117, 0.0, 0.014277694610778444, 0.9031779514966014, 0.27097575666315316, 0.6516176347536755, 0.5312789152672611, 0.15938575589091872, nan, 0.1655842867535216, 0.5506552903232765, 0.646684350132626, 0.19143576826196473, 0.934270819268475, 0.8973354573921052, 0.9786708622773069, 0.02306245907449793, 0.49554760636273687, 0.39273359736918934, 0.0] |
| 0.0703 | 19.0 | 3800 | 0.6150 | 0.4447 | 0.5236 | 0.8814 | [nan, 0.8568329709764136, 0.8715484530983088, 0.738881035722534, 0.7995760090849725, 0.3974276563277307, nan, 0.6035317423598785, 0.6161278881232266, 0.2509122574224581, 0.8751870220777282, 0.32444767686988785, 0.2041778117437269, nan, 0.00423908435777872, 0.6463022028053799, 0.0, 0.008380410789778891, 0.7680467519686958, 0.2155613589980584, 0.48751604164810447, 0.4471170153318318, 0.14704535043518094, nan, 0.13243579159371116, 0.42829376068561764, 0.45524406183801325, 0.14333333333333334, 0.870892996828412, 0.7602052478005652, 0.9425786141644636, 0.07600972188941106, 0.3934651777947416, 0.3210925618654719, 0.0] | [nan, 0.9332007096859164, 0.9544099398977899, 0.8833637517232736, 0.8699393722702624, 0.47761665253205476, nan, 0.692577234170062, 0.8485091655094935, 0.382829305327091, 0.9693897923513064, 0.35840203064598763, 0.20511836212412027, nan, 0.004245729775629511, 0.7737600228753887, 0.0, 0.008383233532934131, 0.9052968156144728, 0.26163228429453395, 0.6140446037004041, 0.5866425472649559, 0.1699761715647339, nan, 0.19308246809073473, 0.5528600860776425, 0.630394757372445, 0.1624685138539043, 0.9339102716081409, 0.8969543707986479, 0.9794565352174758, 0.09337322255663488, 0.48195666119404607, 0.408794707531632, 0.0] |
| 0.0771 | 19.5 | 3900 | 0.6235 | 0.4430 | 0.5292 | 0.8806 | [nan, 0.8495826698424181, 0.8734917159120744, 0.6934510691748211, 0.8067212097508442, 0.41650363916148225, nan, 0.5981666394296641, 0.6003728932996653, 0.265154784792247, 0.8782424168714691, 0.3278874303522523, 0.12224065331121603, nan, 0.0451882303434435, 0.6294262478549578, 0.0, 0.0319125070106562, 0.7644201364294588, 0.23531196708278462, 0.4813316301850284, 0.433149821306163, 0.21246040126715945, nan, 0.10987650874303209, 0.44139567251875855, 0.46044998180378055, 0.13596004439511652, 0.8717858201507739, 0.7667951466954929, 0.944024667835092, 0.04886116392575024, 0.3771918361071607, 0.3110419288449918, 0.0] | [nan, 0.9368120263215418, 0.9483170420087484, 0.9027251221957638, 0.8440601785119725, 0.5293924356035479, nan, 0.6978172121290125, 0.8560871961984244, 0.42755915475136025, 0.9670344072403289, 0.36383959759675844, 0.12258477287268074, nan, 0.045592605898298444, 0.7538334147474771, 0.0, 0.03194236526946108, 0.9155708133993281, 0.2919289263665111, 0.6268941314288246, 0.5492724158355612, 0.2663489541964522, nan, 0.15616692017723696, 0.5651180601249096, 0.671212357622094, 0.15428211586901763, 0.929445354636433, 0.9028354513578473, 0.977522725527041, 0.059899916496957244, 0.5334935252737216, 0.3771255529054043, 0.0] |
| 0.077 | 20.0 | 4000 | 0.6355 | 0.4442 | 0.5293 | 0.8804 | [nan, 0.8477339108669912, 0.8711808446355435, 0.7247190488882828, 0.7930040600433753, 0.40843686152461833, nan, 0.6007686420819331, 0.611498187268345, 0.27849565286751216, 0.8796031359742671, 0.31663624963040754, 0.050287907869481764, nan, 0.03303850156087409, 0.6381516799745444, 0.0, 0.042615174886552504, 0.7639224337078265, 0.23033175355450236, 0.49038434130253405, 0.4334942937986925, 0.23271466449112788, nan, 0.10037582492672313, 0.4379357154361181, 0.46391191766216244, 0.18806606286627597, 0.8711922949962027, 0.7652555650730284, 0.9439731653728981, 0.038568199960605024, 0.395168629898328, 0.3190930177180806, 0.0] | [nan, 0.9426842428626298, 0.9476463743383388, 0.9158161267076075, 0.852664907299872, 0.49294584904234506, nan, 0.6927357044890539, 0.8426965508097887, 0.4417942553460711, 0.9673148977189633, 0.3616040240324158, 0.050287907869481764, nan, 0.03318201116953526, 0.7646217816591805, 0.0, 0.04270209580838323, 0.9084304182579402, 0.3146815238668875, 0.6266155428785186, 0.544954936206317, 0.3020916070955785, nan, 0.14766219165399114, 0.5748778409397348, 0.665314401622718, 0.22229219143576825, 0.9319113783685224, 0.9052667931075778, 0.9775182072801006, 0.04469815752638756, 0.49569982494862624, 0.39859866366275387, 0.0] |
| 0.072 | 20.5 | 4100 | 0.6345 | 0.4402 | 0.5201 | 0.8797 | [nan, 0.852031426384866, 0.8685017783022543, 0.7489700201508231, 0.7897190451336594, 0.41515227314239544, nan, 0.5913996314963393, 0.5817752300243265, 0.22288443482875453, 0.8784728893385922, 0.3175653221451419, 0.14689769397375463, nan, 0.005900955237505298, 0.6302939436181841, 0.0, 0.01890345071866669, 0.7698190326976668, 0.1980060924951537, 0.48377673416480615, 0.43626046482227276, 0.2353916286779942, nan, 0.0950323315303591, 0.424350566321017, 0.46005233318796335, 0.15153172866520787, 0.8691996952372263, 0.7577445850403978, 0.9438605859081091, 0.036551938870825314, 0.4042492513902752, 0.3106063342287636, 0.0] | [nan, 0.9359809794244869, 0.9490529557640499, 0.8333907757864394, 0.8707448805634974, 0.5165671645968557, nan, 0.6831888732921203, 0.8661005240424537, 0.3454384410983171, 0.9669976617627786, 0.3560500209585022, 0.1475367882277671, nan, 0.005911362226068781, 0.7487936806738709, 0.0, 0.01897455089820359, 0.9105835206095785, 0.23686191838578527, 0.6286247274409722, 0.5745177453695551, 0.3007678051363516, nan, 0.12362938959063555, 0.5548685644018251, 0.6583866437821813, 0.17443324937027707, 0.9318008086474028, 0.8919853915258595, 0.9761421500730701, 0.043019085551570635, 0.49318821828145215, 0.37896535768779505, 0.0] |
| 0.0941 | 21.0 | 4200 | 0.6417 | 0.4367 | 0.5183 | 0.8803 | [nan, 0.8536942066425374, 0.8716728086526887, 0.7217610037714826, 0.8016617251069897, 0.39813302152012653, nan, 0.5877409714828552, 0.581088876278555, 0.24200669914738124, 0.8767757682419954, 0.3182256009777216, 0.023544465770953295, nan, 0.0070070719522481026, 0.620056635094229, 0.0, 0.009102123205741627, 0.7674455370900829, 0.20942919639602103, 0.47637004648109327, 0.4412241523113529, 0.22459668971296878, nan, 0.11327145989974938, 0.4330839791931444, 0.4611622931493691, 0.16533333333333333, 0.8692644363745233, 0.7557335501411377, 0.9430637512481551, 0.041265705025608196, 0.4160379765289015, 0.30650352854711693, 0.0] | [nan, 0.9409169235818866, 0.9524916572463776, 0.8473316675021932, 0.8639649426788963, 0.4799330495215224, nan, 0.675864648650388, 0.8646002693480521, 0.40225230924965205, 0.9690060062524471, 0.35774998835638766, 0.023544465770953295, nan, 0.007054443319507496, 0.7565558242884205, 0.0, 0.009113023952095808, 0.9113326255899776, 0.24220749887065202, 0.6057391210186693, 0.5868746698256679, 0.28382314005824727, nan, 0.15303220686462535, 0.554292556117661, 0.6591980028085505, 0.1952141057934509, 0.9353192381746215, 0.8905385621570336, 0.9757877187019779, 0.04956416217612534, 0.4964609178780729, 0.37674409637492356, 0.0] |
| 0.0782 | 21.5 | 4300 | 0.6454 | 0.4415 | 0.5235 | 0.8799 | [nan, 0.8504200442986705, 0.8706101357486907, 0.7056215362738743, 0.7876513630871091, 0.400397772289305, nan, 0.5833340390934653, 0.6037233510159143, 0.27071739541926937, 0.8760910833510821, 0.3170342062398196, 0.1569128545872732, nan, 0.004047129475505075, 0.6337119513774723, 0.0, 0.010801517445011306, 0.7677695670806137, 0.20474538968947248, 0.4878818003141671, 0.43772872578042077, 0.20220971440483634, nan, 0.09979657891836101, 0.43280810853405477, 0.46237513368375543, 0.19936875328774328, 0.8714873132631829, 0.7615198209000698, 0.943527995475871, 0.036048864573184444, 0.39856165675198435, 0.30890948743440916, 0.0] | [nan, 0.9316621834357431, 0.9566976717892703, 0.8803950213059281, 0.8512847503070141, 0.48234013194170383, nan, 0.6648779094788579, 0.8417894200643365, 0.45242313045678856, 0.9656731914386304, 0.35353500069861676, 0.15713371721049263, nan, 0.004049773016754303, 0.7577770364457365, 0.0, 0.010815868263473053, 0.9122906067750342, 0.24935250715253726, 0.6328941215847061, 0.5631373365136793, 0.25681758009001854, nan, 0.13172409232193638, 0.5687574832536917, 0.6341082852239038, 0.23866498740554157, 0.9312875631390318, 0.9002922437505976, 0.9758052896623012, 0.04189870299950139, 0.5055505420068933, 0.3756809756473744, 0.0] |
| 0.108 | 22.0 | 4400 | 0.6509 | 0.4451 | 0.5279 | 0.8798 | [nan, 0.8508891593574098, 0.8701725295944925, 0.7190526679651407, 0.7949103532074352, 0.4065576095204887, nan, 0.5867366529031396, 0.5966577566191247, 0.2500506175339137, 0.8797915197980181, 0.3177700720192051, 0.2361482613679786, nan, 0.02818368210368731, 0.6274843452218895, 0.0, 0.03015609555957554, 0.7662637429974188, 0.17954612053527821, 0.4826579010566763, 0.43649748274849415, 0.29477050413844996, nan, 0.08216561167274192, 0.4374834552479116, 0.46172688303735454, 0.16170903190914007, 0.8699827093783511, 0.7621501532337551, 0.9427059625661717, 0.02728573842871327, 0.3953837377682215, 0.3046632747267205, 0.0] | [nan, 0.9383975799517177, 0.9503402078908603, 0.8472964187241508, 0.8571062458354638, 0.5071453961749781, nan, 0.67852807619446, 0.8535542080399696, 0.39067442743262054, 0.9640331182906331, 0.3467817055563318, 0.23723608445297506, nan, 0.02828309219765505, 0.7688632599811754, 0.0, 0.030258233532934133, 0.9108955710891435, 0.20758921849119108, 0.6331067545418301, 0.563924952374854, 0.41487953402171035, nan, 0.10524436214536076, 0.5633210005716937, 0.6823529411764706, 0.18828715365239296, 0.9352041499793213, 0.8930033244119151, 0.9773304490183606, 0.031037300028234842, 0.48983940939188675, 0.36991781470852314, 0.0] |
| 0.1053 | 22.5 | 4500 | 0.6446 | 0.4413 | 0.5240 | 0.8804 | [nan, 0.8516018601481288, 0.8725022690889938, 0.7179855129757543, 0.7893428233012898, 0.41184574456128054, nan, 0.601201065740327, 0.6136618833249204, 0.2691148565688135, 0.8756474674530337, 0.31875544620431595, 0.01689059500959693, nan, 0.07335202451587663, 0.6433678450836011, 0.0, 0.01572555822933572, 0.7679359258515325, 0.21602682575898957, 0.484629135346703, 0.4342512170220322, 0.2389092389092389, nan, 0.0960046019235233, 0.43446331722961584, 0.46184034553068337, 0.18514588859416445, 0.868028691002913, 0.7484157395102707, 0.9438116619137188, 0.018002137905716198, 0.3981238599843655, 0.31220194555548186, 0.0] | [nan, 0.9402142710859532, 0.9504479564574014, 0.9033674332623136, 0.8570008171762872, 0.5025405915265101, nan, 0.6883009488108693, 0.8313504385628258, 0.45761103378463874, 0.9679538607452555, 0.36203483768804434, 0.01689059500959693, nan, 0.07348378457820308, 0.7536785293031346, 0.0, 0.015774700598802396, 0.916295790765328, 0.27017015509712394, 0.6223658369715555, 0.5637792754574416, 0.31082870002647606, nan, 0.12582501157330864, 0.5505991996289384, 0.6573568419410204, 0.21977329974811083, 0.9349404837212665, 0.8744294145006002, 0.9761270892499359, 0.020031719141420514, 0.4983636502016896, 0.3823433824866008, 0.0] |
| 0.0729 | 23.0 | 4600 | 0.6504 | 0.4442 | 0.5287 | 0.8818 | [nan, 0.8529224302833057, 0.8749069439674217, 0.7124165495842666, 0.7981314227450551, 0.4219836185064375, nan, 0.5980489056111389, 0.5937798370563517, 0.26749899315344344, 0.8771770765363269, 0.3184638741428324, 0.11027440970006382, nan, 0.03436026609830136, 0.6326238553766561, 0.0, 0.01960015680125441, 0.7678647757023203, 0.17668130489335007, 0.4842225795898055, 0.43562883851712997, 0.2722820763956905, nan, 0.09899394141515203, 0.4424720175765528, 0.464710633745934, 0.19435736677115986, 0.8723529506897612, 0.769263303052274, 0.944777547510271, 0.02896463969997321, 0.3889390399320306, 0.3170199469883882, 0.0] | [nan, 0.938067698740673, 0.9513083929328163, 0.89565186740193, 0.8497358241166599, 0.5272809205131004, nan, 0.6901985705494577, 0.8548939703717142, 0.4202201695558649, 0.9681849489707388, 0.3639211028829584, 0.11055662188099807, nan, 0.03474966524053692, 0.7663195644144735, 0.0, 0.01964820359281437, 0.9151684567058683, 0.21203884957084776, 0.6206844615513346, 0.5654729697280164, 0.3680169446650781, nan, 0.14133985847496858, 0.5609004713775659, 0.6642845997815572, 0.23425692695214106, 0.9318988861163767, 0.9042042995479876, 0.9770302366105517, 0.03377668042364278, 0.47782501386276405, 0.39051922155325797, 0.0] |
| 0.1036 | 23.5 | 4700 | 0.6603 | 0.4386 | 0.5244 | 0.8806 | [nan, 0.8533271314995979, 0.8716848781862123, 0.6949570078042672, 0.799142768552631, 0.4136716800017223, nan, 0.594126133711721, 0.5746659186432572, 0.26455413522254734, 0.8779055956605233, 0.32176571440375124, 0.06409108353588333, nan, 0.002665366487892085, 0.6246631186697931, 0.0, 0.021172516803584764, 0.7658402717281243, 0.16484519731844172, 0.47744012041130535, 0.4401148078370365, 0.27666273120818574, nan, 0.08227696474141971, 0.4420921774646655, 0.4622457926138186, 0.19705727798213346, 0.8712029666095307, 0.7633482920046387, 0.9448146302077413, 0.021216823630136987, 0.4033732592225325, 0.30471747610466043, 0.0] | [nan, 0.9426998267158271, 0.9496961639659782, 0.8910832341145507, 0.8452465634632607, 0.5163007061708077, nan, 0.683398826354338, 0.8720178076743261, 0.4347083386055928, 0.9687144921305477, 0.3627450980392157, 0.06410748560460652, nan, 0.0026780757046278455, 0.7649375096803402, 0.0, 0.02122005988023952, 0.9159079106405077, 0.18624454148471617, 0.6270211205559958, 0.5579602029871772, 0.37225311093460417, nan, 0.10447721711527015, 0.5669625810348733, 0.674551412076767, 0.23614609571788414, 0.9341646351252362, 0.8888155494470299, 0.9763585238987652, 0.023819393131123807, 0.5042023202461592, 0.36790311830385425, 0.0] |
| 0.0602 | 24.0 | 4800 | 0.6441 | 0.4400 | 0.5251 | 0.8813 | [nan, 0.8538669180943913, 0.8714206730327352, 0.7265065424818405, 0.7944895274917341, 0.41159858818221384, nan, 0.598723215969898, 0.5793241203112298, 0.21300943585625376, 0.8781405524300877, 0.31494684923878563, 0.003198976327575176, nan, 0.023460790667530783, 0.6340086649470337, 0.0, 0.022494353077339505, 0.7704172404170122, 0.20057405210721377, 0.48738799125097015, 0.44338466024710455, 0.2651285164946367, nan, 0.10598619462866644, 0.4315167012945501, 0.4636160148661205, 0.1994858611825193, 0.8724407698169359, 0.7651078907324416, 0.9446618029728042, 0.036326854402481006, 0.4026748646802249, 0.3247269269038156, 0.0] | [nan, 0.9387045680686849, 0.9508335829060749, 0.899392154405314, 0.857332521574329, 0.5045463069477915, nan, 0.6868352994646599, 0.8702454137562888, 0.33563203846640516, 0.9688737225332659, 0.368776489218015, 0.003198976327575176, nan, 0.023645448904275123, 0.7540717000464656, 0.0, 0.022548652694610778, 0.9077851769128602, 0.2457009486523114, 0.6256065207440185, 0.5598363936158292, 0.34683611331744774, nan, 0.1450036373255737, 0.5708306816098029, 0.685130285535965, 0.24433249370277077, 0.9334669295532664, 0.9022854055439048, 0.9765041118557294, 0.04250244802085774, 0.49987496330444803, 0.4067359439465465, 0.0] |
| 0.0858 | 24.5 | 4900 | 0.6542 | 0.4426 | 0.5257 | 0.8811 | [nan, 0.8535024017455235, 0.870729831123166, 0.7368004904900545, 0.8040427199254746, 0.41337382978307485, nan, 0.5953554904107654, 0.5879739874719074, 0.23943714402710048, 0.8779198400911906, 0.31200587186746354, 0.05719769673704415, nan, 0.02895077720207254, 0.6317807706696595, 0.0, 0.014652562329458378, 0.7672359194184177, 0.16818876251909692, 0.48551923531860225, 0.4482206909152273, 0.30433940333204185, nan, 0.08026811325541007, 0.43383295322348486, 0.4685364181283316, 0.22444889779559118, 0.8708437173768183, 0.7573102155207374, 0.944666987959307, 0.02360927453678175, 0.39572948044677847, 0.3253661169601403, 0.0] | [nan, 0.9412632161073603, 0.952336451554961, 0.8613156410577767, 0.8516960470920234, 0.5090128443037514, nan, 0.6779754411226455, 0.8580270604079298, 0.40693407566746803, 0.9697739867332494, 0.3464673280238461, 0.05719769673704415, nan, 0.02919755707240602, 0.7645681674669081, 0.0, 0.014670658682634731, 0.9124748957916845, 0.1864929980424635, 0.6384334070001526, 0.5655177933949125, 0.41593857558909186, nan, 0.10881555452681702, 0.5695988436687629, 0.6610391636760805, 0.28211586901763225, 0.9317824689580824, 0.8928000163389136, 0.9751867918589222, 0.02649269198190567, 0.4842181944701162, 0.4042585421475039, 0.0] |
| 0.0676 | 25.0 | 5000 | 0.6579 | 0.4445 | 0.5283 | 0.8811 | [nan, 0.8544816950603682, 0.8718064586821658, 0.7161099928628136, 0.8057268786519, 0.4145000071575813, nan, 0.6012540624657657, 0.5906514962744632, 0.2477245145631068, 0.8796745402342583, 0.3070911543332544, 0.052968270214943707, nan, 0.06340767634854771, 0.638930476984757, 0.0, 0.03085336717933403, 0.7651668300886822, 0.19561457345310496, 0.47756607528405587, 0.44312459012336514, 0.30279265493496554, nan, 0.09038079519286837, 0.4427811502384508, 0.4671437011750804, 0.2126144455747711, 0.8705506460028716, 0.7605346316818448, 0.944309967287374, 0.023320900429803195, 0.3963383669651209, 0.31310675521271614, 0.0] | [nan, 0.9403422517564588, 0.9532261383337582, 0.8743478976062163, 0.8544151063850179, 0.51868147862035, nan, 0.6887369432925627, 0.8501838684241744, 0.4132607870428951, 0.9675794651573257, 0.34727073727353175, 0.05297504798464491, nan, 0.06388190339331787, 0.7575923653390204, 0.0, 0.03093188622754491, 0.9066224556856148, 0.23756964312603523, 0.6286227586172951, 0.5732946996013896, 0.4191157002912364, nan, 0.12095760862376827, 0.5622380187040892, 0.6661881728818848, 0.26322418136020154, 0.9333389504169223, 0.8815340568872698, 0.9774815592771408, 0.026630862484305632, 0.4836963021756385, 0.38378520554585466, 0.0] |
| 0.0742 | 25.5 | 5100 | 0.6517 | 0.4445 | 0.5279 | 0.8811 | [nan, 0.8531362909472605, 0.8715212151990632, 0.7191531659747462, 0.8033166209158342, 0.4064664667628253, nan, 0.5975178424151787, 0.598425971470733, 0.24801456630380042, 0.8743928692744429, 0.30824439480433524, 0.04653541293786755, nan, 0.04367890802729932, 0.638407774133227, 0.0, 0.020997032419418055, 0.76545134139714, 0.23981673152824928, 0.4779287361816923, 0.43973255930540556, 0.29756191207525434, nan, 0.08604929835507096, 0.4464954521134296, 0.4657531266992931, 0.21084642676127724, 0.8710381581741707, 0.7602297296922776, 0.9447522567410569, 0.028645874919489202, 0.39981816528775344, 0.31633542763331385, 0.0] | [nan, 0.93975737457584, 0.9539457554914609, 0.8826587761624264, 0.8516847958121903, 0.4918626619532634, nan, 0.692934395701191, 0.8489348191669749, 0.40503606225483996, 0.971100906755901, 0.34704950863956036, 0.04657709532949456, nan, 0.043894313988046635, 0.7443615741126851, 0.0, 0.021051646706586827, 0.914970150694186, 0.32708929378105706, 0.6062155763485212, 0.5546464533273568, 0.4103786073603389, nan, 0.11492626149064215, 0.5688933953207417, 0.6682165704478078, 0.2619647355163728, 0.9326255630816207, 0.8912835609176899, 0.9756983578180481, 0.03232889385501709, 0.4781403237906777, 0.38820707168077734, 0.0] |
| 0.0784 | 26.0 | 5200 | 0.6579 | 0.4418 | 0.5235 | 0.8814 | [nan, 0.8579177507140824, 0.8708216033378015, 0.71264525058744, 0.8003910481799914, 0.4154387921255276, nan, 0.5986674920065644, 0.5952951745755843, 0.2592836287370975, 0.8783345682729028, 0.3102843176168293, 0.01381957773512476, nan, 0.055405537300322084, 0.6409802019914796, 0.0, 0.027613375403745262, 0.7653070900946993, 0.17864283492754635, 0.47835401765137675, 0.44206535963775107, 0.28829007331087775, nan, 0.07637448132780084, 0.443390030810788, 0.4671697949063703, 0.2055439330543933, 0.8716533661380094, 0.7606125385635246, 0.9442876895905309, 0.029555260087374163, 0.3872656356918024, 0.3207798396117699, 0.0] | [nan, 0.9349416881174498, 0.954891734180645, 0.8671101485148515, 0.8549705862375179, 0.518871806067527, nan, 0.6869640063227244, 0.8507700144443127, 0.3988991522206757, 0.9684372679165845, 0.3458851474081319, 0.01381957773512476, nan, 0.05561906006074659, 0.7519926608127911, 0.0, 0.027675898203592814, 0.9198140565183622, 0.2119033278120765, 0.6068633193382784, 0.5759120815790737, 0.3852263701350278, nan, 0.10127637061040937, 0.5665828901808925, 0.6703073802465284, 0.24748110831234257, 0.9297855691629553, 0.8862222110820538, 0.9791693755230498, 0.03271637200305177, 0.4922749067661161, 0.39214695302989405, 0.0] |
| 0.0933 | 26.5 | 5300 | 0.6537 | 0.4465 | 0.5292 | 0.8815 | [nan, 0.8578274144867754, 0.8706944599446437, 0.726574676698918, 0.7965264250115344, 0.4138352304149668, nan, 0.5996906228667696, 0.5952328801795093, 0.26587034766443857, 0.8763896225796494, 0.3109583960106328, 0.051567498400511835, nan, 0.0901204195260909, 0.6420171035039479, 0.0, 0.02349425716686899, 0.7667543264009693, 0.20011613966731606, 0.4812815370031593, 0.44325119297093946, 0.29834040910845233, nan, 0.08099147613022198, 0.4410559246608335, 0.46692995399704884, 0.21754746023601848, 0.8709348052283986, 0.7589810319047727, 0.9447214407777204, 0.028561689061149552, 0.3990195564410423, 0.32229599634879746, 0.0] | [nan, 0.9364460126367121, 0.9542721649994094, 0.8571915340268204, 0.8582722117974253, 0.5089490286302861, nan, 0.6932505319213118, 0.8533309143180121, 0.4252815386562065, 0.9713197464888675, 0.35141586325741697, 0.051567498400511835, nan, 0.09092393611809661, 0.7580510645395733, 0.0, 0.023540419161676648, 0.9143846539180948, 0.24129649149224514, 0.6127491177208897, 0.5620839803416203, 0.40931956579295736, nan, 0.10921235368031215, 0.5629779844024723, 0.6714932126696832, 0.26700251889168763, 0.9333994182331596, 0.8861479432928295, 0.9774820613045785, 0.0319624416529998, 0.4964826633903428, 0.39579351220936315, 0.0] |
| 0.0567 | 27.0 | 5400 | 0.6560 | 0.4478 | 0.5328 | 0.8813 | [nan, 0.8553693294174469, 0.8697873061479447, 0.7264461815995189, 0.7905301697611444, 0.411466496989507, nan, 0.5940628963809516, 0.6009298683502178, 0.25792349726775954, 0.8776739545125398, 0.30596110207338173, 0.044145873320537425, nan, 0.09513453860884806, 0.6415323393109007, 0.0, 0.023673941860682213, 0.76967565546345, 0.2071366352924097, 0.4822657218219634, 0.447023673281542, 0.31363636363636366, nan, 0.09490007962556564, 0.4403959288446748, 0.4699452359653525, 0.21672616012238655, 0.8728840158815016, 0.7641845653446224, 0.945197575330266, 0.03583718336785088, 0.40098258860056096, 0.32540419914161195, 0.0] | [nan, 0.9383584134534163, 0.9535672923823912, 0.8871529953628274, 0.8548576567251192, 0.4991908284561925, nan, 0.6777703145676053, 0.8568059228659749, 0.41806908768821965, 0.9689333318635143, 0.3438125844161893, 0.044145873320537425, nan, 0.09572487671053921, 0.7586825205818927, 0.0, 0.02372754491017964, 0.9104952824769179, 0.2578150880891432, 0.6178936539890829, 0.58465589831431, 0.4384432088959492, nan, 0.12926393757026652, 0.5707271295587173, 0.6721485411140583, 0.267632241813602, 0.9302448587737605, 0.9012359088474289, 0.9745226095587028, 0.040541027628093064, 0.4942863666510824, 0.4056260018398048, 0.0] |
| 0.082 | 27.5 | 5500 | 0.6528 | 0.4479 | 0.5339 | 0.8809 | [nan, 0.8542161931700274, 0.8715722793480463, 0.704540847354551, 0.7884145964413252, 0.4138195616938685, nan, 0.6003419584716937, 0.6060144386357978, 0.2726078219199258, 0.8771736665482311, 0.31391856680932995, 0.051567498400511835, nan, 0.09418363376993905, 0.642823049444062, 0.0, 0.03630867975222032, 0.7689104360029944, 0.18934845107328915, 0.48370432775871786, 0.44287615854860973, 0.3116234390009606, nan, 0.08987411163537404, 0.4422092984166471, 0.46730410650015175, 0.22567498726439122, 0.871913766879578, 0.7613844283291559, 0.9444714260232966, 0.032384119507576754, 0.39892025842994955, 0.32534307431907133, 0.0] | [nan, 0.9366839077408309, 0.9496370664142465, 0.8983856059656599, 0.8500996154979296, 0.5210930393334061, nan, 0.693125042734699, 0.8493395390380227, 0.44628622042262434, 0.9696588509035917, 0.34929672581621724, 0.051567498400511835, nan, 0.0948757307554133, 0.7621734002120739, 0.0, 0.03641467065868263, 0.9103413253183696, 0.22958891733172715, 0.6177607583908804, 0.5752349240398931, 0.42944135557320623, nan, 0.12143376760796244, 0.5675687920006041, 0.6725854267436417, 0.27896725440806047, 0.9322590350878126, 0.8981714341945611, 0.9782371105710411, 0.03708976877466794, 0.4900786100268557, 0.40340060922876925, 0.0] |
| 0.0841 | 28.0 | 5600 | 0.6685 | 0.4475 | 0.5320 | 0.8810 | [nan, 0.8535934210551042, 0.8700114102162729, 0.725858844052281, 0.793245423107309, 0.413649594070452, nan, 0.5995419153081175, 0.5956567456260771, 0.26373951288021585, 0.8789171561451945, 0.31501675676236524, 0.05323096609085093, nan, 0.08127608632348658, 0.6398267703905686, 0.0, 0.03446088400470176, 0.7659542564370029, 0.1948335433011937, 0.4817450150277308, 0.43519402471433216, 0.32843866171003716, nan, 0.08842019083634663, 0.4459816496948121, 0.46913824570662094, 0.21838496698831894, 0.8716737795896268, 0.7611766747382033, 0.9452684611391005, 0.03253476764781615, 0.4003894312208581, 0.31499493055372985, 0.0] | [nan, 0.9396862129099123, 0.9526982002049551, 0.8686082215816519, 0.8506234250812696, 0.5095390437165352, nan, 0.6904415047440543, 0.8538263472636052, 0.4514741237504745, 0.9683307060316885, 0.3535117134739882, 0.05323096609085093, nan, 0.08204056304908716, 0.7604160461320338, 0.0, 0.034562125748502995, 0.9179003920162873, 0.23520554133413643, 0.6171041556945763, 0.546589399202779, 0.46783161239078636, nan, 0.11999206401693009, 0.56699278371644, 0.6657824933687002, 0.2707808564231738, 0.9311342008094979, 0.8918090055264518, 0.9762134379692388, 0.0375733655330678, 0.4985702325682537, 0.37949140965401756, 0.0] |
| 0.0941 | 28.5 | 5700 | 0.6622 | 0.4491 | 0.5311 | 0.8812 | [nan, 0.8527785426418684, 0.8702726958294947, 0.7236832595240286, 0.7976914083037109, 0.41272857993018014, nan, 0.5981049182117808, 0.5960156330049622, 0.254572193521626, 0.8769063309535273, 0.3145665301296398, 0.13231162196679438, nan, 0.09486371462915436, 0.6453841944013953, 0.0, 0.032893632106274606, 0.7664693911498265, 0.1773754226308971, 0.48459536975213174, 0.43947748040615014, 0.3049080917187796, nan, 0.08639704102479691, 0.44400650025163907, 0.4691795039877636, 0.2171691364333163, 0.8719429897092373, 0.7628717611798447, 0.944654086220474, 0.027986890561791163, 0.3984553210597216, 0.32250897758583996, 0.0] | [nan, 0.9396553210239281, 0.9536194653725059, 0.8627353835067051, 0.8472247051435434, 0.5074152133206822, nan, 0.6874096538187727, 0.8523819159996929, 0.42711628495508036, 0.96965762605434, 0.3469447161287318, 0.1325655790147153, nan, 0.09536562265260133, 0.7516769327916314, 0.0, 0.032990269461077845, 0.9179909280169858, 0.21131606685740098, 0.6165765109491207, 0.5540285270622889, 0.4259994704792163, nan, 0.11802129488790424, 0.5652690735327429, 0.6700577313153379, 0.267632241813602, 0.9323039540370175, 0.8882362606910803, 0.978005173894774, 0.03152390049320862, 0.48240244419557915, 0.398609680457858, 0.0] |
| 0.084 | 29.0 | 5800 | 0.6638 | 0.4504 | 0.5356 | 0.8814 | [nan, 0.8541570899530211, 0.870961955863842, 0.7210895127135939, 0.7973759638946835, 0.4169777773388737, nan, 0.5986244892027949, 0.5965511370546029, 0.26509875556570384, 0.8796979087417194, 0.31685194805194805, 0.1068370607028754, nan, 0.10446940030421696, 0.6431023450909498, 0.0, 0.03144818324255764, 0.767419578792302, 0.17821932448413025, 0.4823647516914524, 0.4404489260257254, 0.33862529316254736, nan, 0.0921704754578826, 0.44465093292171043, 0.47094082775277196, 0.22085889570552147, 0.872046926709636, 0.7623472218800441, 0.9446327639785433, 0.028401023191997405, 0.39498943137322845, 0.32154917946571837, 0.0] | [nan, 0.9394396294628604, 0.9521894240095922, 0.8648933920290763, 0.8534508300319161, 0.5200031348050124, nan, 0.685954461904781, 0.8557452776866771, 0.4407187144122485, 0.9677901392286145, 0.35509524474873083, 0.10697376839411388, nan, 0.10542473627486201, 0.7544172137299989, 0.0, 0.03154940119760479, 0.9169495342221486, 0.21218190031621745, 0.6120669203167838, 0.5754110170169849, 0.4969552554937781, nan, 0.12520335956616627, 0.5667619489358948, 0.6693399906381651, 0.27204030226700254, 0.9308347854468983, 0.8905130326044878, 0.9780518624464901, 0.03208258991595629, 0.495754188729301, 0.3957287885381264, 0.0] |
| 0.0657 | 29.5 | 5900 | 0.6606 | 0.4463 | 0.5300 | 0.8812 | [nan, 0.8521621933006027, 0.8714772059308231, 0.7113873149332846, 0.7975335834300151, 0.416269475189028, nan, 0.5991724655176409, 0.5949552521161262, 0.2560740303209293, 0.8789088003737188, 0.3131168467262368, 0.035572616762635956, nan, 0.09531219607080335, 0.639645156066556, 0.0, 0.03244402985074627, 0.767625063425157, 0.1811454948254177, 0.482023423781372, 0.4396083605305187, 0.32138188608776846, nan, 0.0924747395629357, 0.4442256297244303, 0.4698375614379229, 0.22010309278350515, 0.8719945971737496, 0.7630333969775723, 0.9448005798209502, 0.02608180836728844, 0.39741849610760355, 0.3192197607254233, 0.0] | [nan, 0.9394630741977591, 0.9517531169332215, 0.8764804486777792, 0.8464837875308316, 0.5216024451479097, nan, 0.6863413868968374, 0.8558639024764669, 0.4114260407440213, 0.9678783283747353, 0.35129942713427414, 0.035572616762635956, nan, 0.09601881184885203, 0.7529755876711186, 0.0, 0.03254116766467066, 0.9153212649303976, 0.21361993675651259, 0.6234073446967273, 0.5574047096867146, 0.45565263436589887, nan, 0.12492560015871966, 0.5601152016568328, 0.6652207832735216, 0.2688916876574307, 0.9331143556708977, 0.8937385755252357, 0.9777195202826615, 0.029244087203609253, 0.493449164428691, 0.39103150252560026, 0.0] |
| 0.0998 | 30.0 | 6000 | 0.6675 | 0.4470 | 0.5318 | 0.8813 | [nan, 0.8533604816441539, 0.8708564376734238, 0.7156066545682345, 0.7983500150170627, 0.41321512443014663, nan, 0.5985238978790051, 0.594578879531733, 0.2628567068518236, 0.8786538788390237, 0.31440437475402366, 0.03608445297504798, nan, 0.09407417012448133, 0.6379131555367624, 0.0, 0.030675647939096523, 0.7682604802535551, 0.18266911680257264, 0.4825862889842458, 0.4408940826749661, 0.33210263983754845, nan, 0.09290563675138425, 0.4449054225103382, 0.47077278372077824, 0.21916411824668705, 0.8724052658393265, 0.7617232855126097, 0.9444534550949257, 0.025704847176713768, 0.3993842932680365, 0.3205363901805991, 0.0] | [nan, 0.9380875578102342, 0.9543011168303082, 0.8679443695951874, 0.8440564280853614, 0.5113012519627518, nan, 0.6873750638506678, 0.8562476885610814, 0.43591041376692397, 0.9688079889567591, 0.35346513902473103, 0.03608445297504798, nan, 0.0947777523759757, 0.7549116557254, 0.0, 0.030763473053892217, 0.9130259245472051, 0.21554735732570396, 0.6246260465528358, 0.5660428706356957, 0.4763039449298385, nan, 0.12561338535811123, 0.5679549548577777, 0.6743017631455765, 0.2707808564231738, 0.93120742667048, 0.8943661383441811, 0.9793189796995165, 0.028724445966322443, 0.4894588629271634, 0.39592709085000083, 0.0] |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
gaurishhs/API | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/ianflynnbkc/1668480615006/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1107777212835614720/g_KwstYD_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ian Flynn</div>
<div style="text-align: center; font-size: 14px;">@ianflynnbkc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ian Flynn.
| Data | Ian Flynn |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 964 |
| Short tweets | 315 |
| Tweets kept | 1964 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/gnis1yl2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ianflynnbkc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2692e7ob) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2692e7ob/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ianflynnbkc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Arina/Erine | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.8333333333333334
- name: Recall
type: recall
value: 0.9322033898305084
- name: F1
type: f1
value: 0.8800000000000001
- name: Accuracy
type: accuracy
value: 0.9725190839694656
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1193
- Precision: 0.8333
- Recall: 0.9322
- F1: 0.8800
- Accuracy: 0.9725
## Model description
More information needed
## Intended uses & limitations
More information needed
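As a minimal usage sketch (the model path below is a placeholder for wherever this checkpoint is stored, not a published repo id), the fine-tuned checkpoint can be loaded with the standard token-classification pipeline:
```python
from transformers import pipeline

# "path/to/bert-finetuned-ner" is a placeholder, not a published repo id.
ner = pipeline(
    "token-classification",
    model="path/to/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Hugging Face is based in New York City."))
```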
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 18 | 0.1216 | 0.8594 | 0.9322 | 0.8943 | 0.9740 |
| No log | 2.0 | 36 | 0.1200 | 0.8615 | 0.9492 | 0.9032 | 0.9740 |
| No log | 3.0 | 54 | 0.1193 | 0.8333 | 0.9322 | 0.8800 | 0.9725 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ArjunKadya/HuggingFace | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/palestinepound/1660463113168/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1396083058844045319/d_xNzMbk_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Palestine Pound</div>
<div style="text-align: center; font-size: 14px;">@palestinepound</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Palestine Pound.
| Data | Palestine Pound |
| --- | --- |
| Tweets downloaded | 145 |
| Retweets | 4 |
| Short tweets | 11 |
| Tweets kept | 130 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/152jutl1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @palestinepound's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1sd0ks1o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1sd0ks1o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/palestinepound')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Aron/distilbert-base-uncased-finetuned-emotion | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: benchmark-finetuned-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# benchmark-finetuned-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4592
- Accuracy: 0.8228
- F1: 0.8214
## Model description
More information needed
## Intended uses & limitations
More information needed
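As a rough usage sketch (the model path is a placeholder, and the predicted label names depend on how the dataset was encoded), the checkpoint can be queried through a text-classification pipeline:
```python
from transformers import pipeline

# Placeholder path; point it at the directory containing this fine-tuned checkpoint.
classifier = pipeline("text-classification", model="path/to/benchmark-finetuned-distilbert")

print(classifier("An example sentence to classify."))
```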
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8561 | 1.0 | 48 | 0.6834 | 0.7288 | 0.7016 |
| 0.5498 | 2.0 | 96 | 0.4948 | 0.8042 | 0.8036 |
| 0.4184 | 3.0 | 144 | 0.4592 | 0.8228 | 0.8214 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ArpanZS/debug_squad | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eoir_privacy
metrics:
- accuracy
- f1
model-index:
- name: bert_uncased_L-4_H-512_A-8-finetuned-eoir_privacy
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: eoir_privacy
type: eoir_privacy
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.9175035868005739
- name: F1
type: f1
value: 0.8092868988391376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-4_H-512_A-8-finetuned-eoir_privacy
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the eoir_privacy dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2159
- Accuracy: 0.9175
- F1: 0.8093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
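As a sketch of how the list above maps onto `transformers.TrainingArguments` (the output directory is a placeholder; the Adam betas and epsilon shown in the card are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_uncased_L-4_H-512_A-8-finetuned-eoir_privacy",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,  # 8 x 16 = effective train batch size of 128
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)
```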
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.2343 | 0.9125 | 0.7953 |
| No log | 2.0 | 126 | 0.2269 | 0.9110 | 0.8006 |
| No log | 3.0 | 189 | 0.2159 | 0.9175 | 0.8093 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ArtemisZealot/DialoGTP-small-Qkarin | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: pegasus-newsroom-cnn-adam8bit-bs4x64acc_2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 44.2848
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-newsroom-cnn-adam8bit-bs4x64acc_2
This model is a fine-tuned version of [oMateos2020/pegasus-newsroom-cnn-adam8bit-bs16x64acc](https://huggingface.co/oMateos2020/pegasus-newsroom-cnn-adam8bit-bs16x64acc) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8608
- Rouge1: 44.2848
- Rouge2: 21.5452
- Rougel: 31.3765
- Rougelsum: 41.2302
- Gen Len: 71.7744
## Model description
More information needed
## Intended uses & limitations
More information needed
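As a hedged usage sketch (the model path is a placeholder, and the generation settings below are illustrative rather than the ones behind the reported ROUGE scores), the checkpoint can be used through a summarization pipeline:
```python
from transformers import pipeline

# Placeholder path for this PEGASUS checkpoint fine-tuned on cnn_dailymail.
summarizer = pipeline("summarization", model="path/to/pegasus-newsroom-cnn-adam8bit-bs4x64acc_2")

article = (
    "The city council met on Tuesday to debate the new transit plan. "
    "Officials said the proposal would add three bus lines and extend service hours."
)
print(summarizer(article, max_length=72, min_length=16, do_sample=False))
```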
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.4e-05
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.4
- num_epochs: 1
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.9307 | 1.0 | 1121 | 2.8608 | 44.2848 | 21.5452 | 31.3765 | 41.2302 | 71.7744 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Atampy26/GPT-Glacier | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2022-08-14T11:56:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- zeroth_korean
model-index:
- name: wav2vec2-large-xlsr-korean-demo3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-korean-demo3
This model is a fine-tuned version of [NX2411/wav2vec2-large-xlsr-korean-demo-no-LM](https://huggingface.co/NX2411/wav2vec2-large-xlsr-korean-demo-no-LM) on the zeroth_korean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8265
- Wer: 0.5090
## Model description
More information needed
## Intended uses & limitations
More information needed
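As a minimal usage sketch (the model path and audio file are placeholders; XLSR-style wav2vec2 checkpoints typically expect 16 kHz mono audio), the checkpoint can be used with the automatic-speech-recognition pipeline:
```python
from transformers import pipeline

# Placeholder path; point it at this fine-tuned Korean checkpoint.
asr = pipeline("automatic-speech-recognition", model="path/to/wav2vec2-large-xlsr-korean-demo3")

# Any 16 kHz mono audio file; "sample_ko.wav" is a hypothetical example.
print(asr("sample_ko.wav"))
```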
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.6157 | 2.6 | 400 | 0.6686 | 0.6386 |
| 0.4643 | 5.19 | 800 | 0.7036 | 0.6086 |
| 0.3038 | 7.79 | 1200 | 0.6960 | 0.5817 |
| 0.2229 | 10.39 | 1600 | 0.7358 | 0.5571 |
| 0.178 | 12.99 | 2000 | 0.8221 | 0.5636 |
| 0.153 | 15.58 | 2400 | 0.8575 | 0.5691 |
| 0.129 | 18.18 | 2800 | 0.7809 | 0.5297 |
| 0.1141 | 20.78 | 3200 | 0.8077 | 0.5441 |
| 0.0994 | 23.38 | 3600 | 0.8087 | 0.5209 |
| 0.0917 | 25.97 | 4000 | 0.8176 | 0.5149 |
| 0.0823 | 28.57 | 4400 | 0.8265 | 0.5090 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Ateeb/SquadQA | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: es_financial
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# es_financial
This model is a fine-tuned version of [huranokuma/es2](https://huggingface.co/huranokuma/es2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4057
- Accuracy: 0.9057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Augustvember/WokkaBot7 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
datasets:
- eoir_privacy
metrics:
- accuracy
- f1
model-index:
- name: bert_uncased_L-4_H-512_A-8-finetuned-eoir_privacy-longer-finetuned-eoir_privacy-longer20
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: eoir_privacy
type: eoir_privacy
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.9469153515064562
- name: F1
type: f1
value: 0.8782894736842105
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-4_H-512_A-8-finetuned-eoir_privacy-longer-finetuned-eoir_privacy-longer20
This model was trained from scratch on the eoir_privacy dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1994
- Accuracy: 0.9469
- F1: 0.8783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.2134 | 0.9412 | 0.8581 |
| No log | 2.0 | 126 | 0.2083 | 0.9405 | 0.8610 |
| No log | 3.0 | 189 | 0.2108 | 0.9448 | 0.8702 |
| No log | 4.0 | 252 | 0.2281 | 0.9397 | 0.8537 |
| No log | 5.0 | 315 | 0.2034 | 0.9469 | 0.8783 |
| No log | 6.0 | 378 | 0.2133 | 0.9419 | 0.8629 |
| No log | 7.0 | 441 | 0.1971 | 0.9440 | 0.8746 |
| 0.0525 | 8.0 | 504 | 0.2013 | 0.9455 | 0.8778 |
| 0.0525 | 9.0 | 567 | 0.1987 | 0.9469 | 0.8787 |
| 0.0525 | 10.0 | 630 | 0.1994 | 0.9469 | 0.8783 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Augustvember/WokkaBot8 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
---
# Fake News Recognition
## Overview
This model was trained on over 40,000 news articles from different media outlets, starting from 'roberta-base'. It returns a prediction for any news text of up to 500 words (any excess is truncated automatically).
LABEL_0: Fake news
LABEL_1: Real news
## Quick Tutorial
### Download The Model
```python
from transformers import pipeline
MODEL = "jy46604790/Fake-News-Bert-Detect"
clf = pipeline("text-classification", model=MODEL, tokenizer=MODEL)
```
### Feed Data
```python
text = "Indonesian police have recaptured a U.S. citizen who escaped a week ago from an overcrowded prison on the holiday island of Bali, the jail s second breakout of foreign inmates this year. Cristian Beasley from California was rearrested on Sunday, Badung Police chief Yudith Satria Hananta said, without providing further details. Beasley was a suspect in crimes related to narcotics but had not been sentenced when he escaped from Kerobokan prison in Bali last week. The 32-year-old is believed to have cut through bars in the ceiling of his cell before scaling a perimeter wall of the prison in an area being refurbished. The Kerobokan prison, about 10 km (six miles) from the main tourist beaches in the Kuta area, often holds foreigners facing drug-related charges. Representatives of Beasley could not immediately be reached for comment. In June, an Australian, a Bulgarian, an Indian and a Malaysian tunneled to freedom about 12 meters (13 yards) under Kerobokan prison s walls. The Indian and the Bulgarian were caught soon after in neighboring East Timor, but Australian Shaun Edward Davidson and Malaysian Tee Kok King remain at large. Davidson has taunted authorities by saying he was enjoying life in various parts of the world, in purported posts on Facebook. Kerobokan has housed a number of well-known foreign drug convicts, including Australian Schappelle Corby, whose 12-1/2-year sentence for marijuana smuggling got huge media attention."
```
### Result
```python
result = clf(text)
result
```
Output: `[{'label': 'LABEL_1', 'score': 0.9994995594024658}]` |
Augustvember/WokkaBot9 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
datasets:
- eoir_privacy
metrics:
- accuracy
- f1
model-index:
- name: bert_uncased_L-4_H-512_A-8-finetuned-eoir_privacy-longer-finetuned-eoir_privacy-longer30
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: eoir_privacy
type: eoir_privacy
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.9490674318507891
- name: F1
type: f1
value: 0.88379705400982
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-4_H-512_A-8-finetuned-eoir_privacy-longer-finetuned-eoir_privacy-longer30
This model was trained from scratch on the eoir_privacy dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2076
- Accuracy: 0.9491
- F1: 0.8838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.2446 | 0.9426 | 0.8671 |
| No log | 2.0 | 126 | 0.2306 | 0.9412 | 0.8629 |
| No log | 3.0 | 189 | 0.2637 | 0.9448 | 0.8679 |
| No log | 4.0 | 252 | 0.2375 | 0.9455 | 0.8758 |
| No log | 5.0 | 315 | 0.2423 | 0.9440 | 0.8687 |
| No log | 6.0 | 378 | 0.2571 | 0.9455 | 0.8676 |
| No log | 7.0 | 441 | 0.2040 | 0.9469 | 0.8799 |
| 0.0355 | 8.0 | 504 | 0.2096 | 0.9462 | 0.8784 |
| 0.0355 | 9.0 | 567 | 0.2094 | 0.9448 | 0.8736 |
| 0.0355 | 10.0 | 630 | 0.2076 | 0.9491 | 0.8838 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Augustvember/WokkaBot99 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-14T16:52:15Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: TGL-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TGL-3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an abstract-summarization dataset with 13,000 training examples. The data was collected from openreview.net.
It achieves the following results on the evaluation set:
- Loss: 2.4435
- Rouge1: 36.4998
- Rouge2: 17.8322
- Rougel: 31.8632
- Rougelsum: 31.8341
## Model description
The underlying T5 architecture is described in the paper: https://arxiv.org/abs/1910.10683
## Intended uses & limitations
More information needed
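As a hedged usage sketch (the model path is a placeholder, and the `summarize:` prefix follows the usual t5-small convention rather than anything stated in this card):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder path; point it at the TGL-3 checkpoint.
tokenizer = AutoTokenizer.from_pretrained("path/to/TGL-3")
model = AutoModelForSeq2SeqLM.from_pretrained("path/to/TGL-3")

abstract = "We study transfer learning for NLP and propose a unified text-to-text framework."
inputs = tokenizer("summarize: " + abstract, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```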
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.9096 | 1.0 | 1240 | 2.5721 | 36.234 | 17.8214 | 31.5514 | 31.5549 |
| 2.7259 | 2.0 | 2480 | 2.5258 | 36.2572 | 17.9912 | 31.6249 | 31.6441 |
| 2.6434 | 3.0 | 3720 | 2.4957 | 36.4623 | 17.9657 | 31.7693 | 31.7542 |
| 2.5896 | 4.0 | 4960 | 2.4663 | 36.3692 | 17.8372 | 31.5909 | 31.6089 |
| 2.5491 | 5.0 | 6200 | 2.4511 | 36.4775 | 17.8094 | 31.8102 | 31.8003 |
| 2.5183 | 6.0 | 7440 | 2.4440 | 36.5892 | 17.906 | 31.9058 | 31.8985 |
| 2.4997 | 7.0 | 8680 | 2.4438 | 36.3747 | 17.8309 | 31.7314 | 31.7178 |
| 2.4863 | 8.0 | 9920 | 2.4435 | 36.4998 | 17.8322 | 31.8632 | 31.8341 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Augustvember/wokka4 | [
"conversational"
] | conversational | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We also wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: trissondon/test_pyramids_RND
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Axon/resnet34-v1 | [
"dataset:ImageNet",
"arxiv:1512.03385",
"Axon",
"Elixir",
"license:apache-2.0"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bearbearchu/mt5-small-finetuned-wikipedia-summarization-jp
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bearbearchu/mt5-small-finetuned-wikipedia-summarization-jp
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2757
- Validation Loss: 0.2210
- Epoch: 7
## Model description
More information needed
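As a minimal TensorFlow inference sketch (assuming the checkpoint is published under the repo id shown in the model index above; the input text is a placeholder):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Repo id taken from the model index above -- adjust if the checkpoint lives elsewhere.
name = "bearbearchu/mt5-small-finetuned-wikipedia-summarization-jp"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForSeq2SeqLM.from_pretrained(name)

text = "ここに要約したい日本語のWikipedia記事を入れてください。"  # placeholder article text
inputs = tokenizer(text, return_tensors="tf", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```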
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 7656, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.1713 | 0.3484 | 0 |
| 0.6239 | 0.3156 | 1 |
| 0.4820 | 0.2693 | 2 |
| 0.3973 | 0.2595 | 3 |
| 0.3377 | 0.2480 | 4 |
| 0.3093 | 0.2321 | 5 |
| 0.2843 | 0.2236 | 6 |
| 0.2757 | 0.2210 | 7 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Ayham/albert_roberta_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cvbn
model-index:
- name: wav2vec2-base-cvbn-37k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cvbn-37k
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the cvbn dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2288
- eval_wer: 0.3332
- eval_runtime: 329.8903
- eval_samples_per_second: 9.094
- eval_steps_per_second: 0.57
- epoch: 3.59
- step: 8400
## Model description
More information needed
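As a minimal inference sketch (the repo id and audio path below are placeholders; wav2vec2 checkpoints expect 16 kHz mono audio):
```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual path of this checkpoint.
asr = pipeline("automatic-speech-recognition", model="your-username/wav2vec2-base-cvbn-37k")

# "sample.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```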
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Ayham/distilbert_bert_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | Access to model bintualkassoum/w2w is restricted and you are not in the authorized list. Visit https://huggingface.co/bintualkassoum/w2w to ask for access. |
AyushPJ/ai-club-inductions-21-nlp-ALBERT | [
"pytorch",
"albert",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8712871287128714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3144
- Accuracy: 0.87
- F1: 0.8713
## Model description
More information needed
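As a minimal usage sketch (the repo id below is a placeholder for wherever this checkpoint is published):
```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual path of this checkpoint.
classifier = pipeline("text-classification", model="your-username/finetuning-sentiment-model-3000-samples")

print(classifier("An absolute delight from start to finish."))
print(classifier("This movie was a complete waste of time."))
```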
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Azuris/DialoGPT-medium-envy | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cvbn
model-index:
- name: wav2vec2-base-cvbn-37knew
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cvbn-37knew
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the cvbn dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2208
- eval_wer: 0.2889
- eval_runtime: 336.8019
- eval_samples_per_second: 8.907
- eval_steps_per_second: 0.558
- epoch: 4.11
- step: 9600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Badr/model1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
BalajiSathesh/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mnli
split: train
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8243504839531329
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5486
- Accuracy: 0.8244
## Model description
More information needed
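As a minimal usage sketch for premise/hypothesis pairs (the repo id is a placeholder, and the printed label names depend on the checkpoint's `id2label` mapping):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id -- replace with the actual path of this checkpoint.
name = "your-username/distilbert-base-uncased-finetuned-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # e.g. entailment / neutral / contradiction
```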
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5142 | 1.0 | 24544 | 0.4922 | 0.8075 |
| 0.4089 | 2.0 | 49088 | 0.4865 | 0.8194 |
| 0.2936 | 3.0 | 73632 | 0.5486 | 0.8244 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BatuhanYilmaz/code-search-net-tokenizer1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9277247283503457
- name: Recall
type: recall
value: 0.9483338943116796
- name: F1
type: f1
value: 0.9379161118508655
- name: Accuracy
type: accuracy
value: 0.9864602342968152
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0604
- Precision: 0.9277
- Recall: 0.9483
- F1: 0.9379
- Accuracy: 0.9865
## Model description
More information needed
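As a minimal usage sketch (the repo id below is a placeholder for wherever this checkpoint is published):
```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual path of this checkpoint.
ner = pipeline("token-classification", model="your-username/bert-finetuned-ner", aggregation_strategy="simple")

print(ner("Hugging Face is based in New York City."))
```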
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0885 | 1.0 | 1756 | 0.0702 | 0.9124 | 0.9290 | 0.9206 | 0.9810 |
| 0.036 | 2.0 | 3512 | 0.0600 | 0.9303 | 0.9502 | 0.9401 | 0.9865 |
| 0.0192 | 3.0 | 5268 | 0.0604 | 0.9277 | 0.9483 | 0.9379 | 0.9865 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BatuhanYilmaz/dummy | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9266725742135579
- name: Recall
type: recall
value: 0.9358988701197002
- name: F1
type: f1
value: 0.9312628708187232
- name: Accuracy
type: accuracy
value: 0.9835734824534926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0625
- Precision: 0.9267
- Recall: 0.9359
- F1: 0.9313
- Accuracy: 0.9836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2395 | 1.0 | 878 | 0.0709 | 0.9148 | 0.9186 | 0.9167 | 0.9809 |
| 0.0538 | 2.0 | 1756 | 0.0628 | 0.9228 | 0.9332 | 0.9280 | 0.9828 |
| 0.03 | 3.0 | 2634 | 0.0625 | 0.9267 | 0.9359 | 0.9313 | 0.9836 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BatuhanYilmaz/mt5-small-finetuned-amazonbooks-en-es | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-15T15:58:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9261759822910902
- name: Recall
type: recall
value: 0.9361226087929299
- name: F1
type: f1
value: 0.9311227328363192
- name: Accuracy
type: accuracy
value: 0.9837164598789457
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9262
- Recall: 0.9361
- F1: 0.9311
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2401 | 1.0 | 878 | 0.0684 | 0.9147 | 0.9172 | 0.9159 | 0.9808 |
| 0.0538 | 2.0 | 1756 | 0.0614 | 0.9231 | 0.9346 | 0.9288 | 0.9829 |
| 0.0301 | 3.0 | 2634 | 0.0611 | 0.9262 | 0.9361 | 0.9311 | 0.9837 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BeIR/query-gen-msmarco-t5-base-v1 | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 1,816 | 2022-08-15T16:07:32Z | ```
!pip install transformers
!pip install torch
```
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsToParagraphNeo1.3B")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsToParagraphNeo1.3B")
```
```
prompt = """
- advent
- podcasts
- entertainment
- is an industry transformed
- no longer
- consumers touch clicker or turn on radio
- people plug in their earbuds to listen to a podcast
- this changing mediums for reasons
- can be done anywhere
- more optionality in content
text: as podcasts have"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
                         # prompt length in tokens plus 10 newly generated tokens
                         max_length=input_ids.shape[1] + 10,
                         temperature=1.0,
                         top_k=50,
                         top_p=0.95,
                         do_sample=True,
                         num_return_sequences=5)
for i in range(5):
    print(tokenizer.decode(outputs[i]))
```
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
prompt = """
- advent
- podcasts
- entertainment
- is an industry transformed
- no longer
- consumers touch clicker or turn on radio
- people plug in their earbuds to listen to a podcast
- this changing mediums for reasons
- can be done anywhere
- more optionality in content
text: as podcasts have"""
device = "cuda" if torch.cuda.is_available() else "cpu"  # pick GPU if available
model = model.to(device)
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]).to(device), None
# forward pass to get the logits for the token that follows the prompt
logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False)
logits = logits[0, -1]
probabilities = torch.nn.functional.softmax(logits, dim=-1)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
print(best_words)
```
Example:
```
- advent
- podcasts
- entertainment
- is an industry transformed
- no longer
- consumers touch clicker or turn on radio
- people plug in their earbuds to listen to a podcast
- this changing mediums for reasons
- can be done anywhere
- more optionality in content
text: as podcasts have proliferated, the entertainment industry has been fundamentally reshaped. in place of flipping through channels or spinning the dial, consumers are plugging in their earbuds to enjoy audio content. this evolution in media consumption is not without explanation, but rather a function of greater portability and content optionality.
***
- newborn
- caring for
- full-time job
- parents
- often have to work normal job
- paid leave needs to be universal
- so parents not overworked
- child is cared for
- can spend special time together
text: tending to a newborn is a full-time job. regrettably, many parents must perform this duty alongside their conventional employment. to spare them from such strain, paid leave must be universal. in this way, children will be provided for, while the parent-child bond will be strengthened.
``` |
BenDavis71/GPT-2-Finetuning-AIRaid | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
language: "pt"
widget:
- text: "Dispneia importante aos esforços + dor tipo peso no peito no esforço."
- text: "Obeso, has, icc c # cintilografia miocardica para avaliar angina. Discreto edema mmii pricn a esquerda."
- text: "Plastia Mitral ( Insuficiencia ), CRM Saf-2Mg e e Saf-3MG ).(09/03/16). Nega palpitação."
- text: "Uso: AAS 100 -1xd; Metoprolol 25 -1xd; FSM -1xd ; Levotiroxina 175 -1xd; Sinva 40 -1xd; Fluoxetina 20-1xd."
- text: "Refere melhora da dispneia depois da cx porem mantem aos mdoeardos-leves esforço."
datasets:
- TempClinBr
---
# Portuguese NER - TempClinBr - BioBERTpt(clin)
Trained with BioBERTpt(clin) on the TempClinBr corpus.
Metrics:
```
precision recall f1-score support
0 1.00 0.85 0.92 33
1 0.73 0.69 0.71 78
2 0.75 0.55 0.63 11
3 0.70 0.70 0.70 10
4 0.90 1.00 0.95 71
5 0.75 0.90 0.82 503
6 0.83 0.90 0.87 112
7 0.93 0.90 0.92 2236
8 0.78 0.50 0.61 28
9 0.82 0.84 0.83 291
10 0.79 0.96 0.87 124
11 0.82 0.73 0.77 420
accuracy 0.87 3917
macro avg 0.82 0.79 0.80 3917
weighted avg 0.88 0.87 0.87 3917
```
Parameters:
```
device = cuda (Colab)
nclasses = len(tag2id)
nepochs = 50 => stopped early at epoch 16
batch_size = 16
batch_status = 32
learning_rate = 3e-5
early_stop = 5
max_length = 256
write_path = 'model'
```
Evaluation on the test set - TempClinBr
Note: the evaluation includes the "O" tag (label 7); if needed, recompute the averages without this tag.
```
tag2id ={'<pad>': 12,
'B-DepartamentoClinico': 2,
'B-Evidencia': 4,
'B-Ocorrencia': 10,
'B-Problema': 5,
'B-Teste': 6,
'B-Tratamento': 9,
'I-DepartamentoClinico': 3,
'I-Ocorrencia': 8,
'I-Problema': 11,
'I-Teste': 0,
'I-Tratamento': 1,
'O': 7}
precision recall f1-score support
0 0.70 0.30 0.42 99
1 0.84 0.75 0.79 146
2 1.00 0.90 0.95 30
3 0.93 0.93 0.93 14
4 1.00 0.95 0.98 128
5 0.83 0.97 0.89 713
6 0.80 0.80 0.80 194
7 0.93 0.93 0.93 2431
8 0.56 0.20 0.29 51
9 0.86 0.85 0.85 261
10 0.77 0.88 0.82 146
11 0.85 0.82 0.83 645
12 0.00 0.00 0.00 0
accuracy 0.88 4858
macro avg 0.77 0.71 0.73 4858
weighted avg 0.88 0.88 0.88 4858
```
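A minimal inference sketch (the repo id below is a placeholder; substitute the actual path under which this checkpoint is published). The example sentence is taken from the widget examples above:
```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual path of this checkpoint.
ner = pipeline("token-classification", model="your-username/biobertpt-clin-tempclinbr", aggregation_strategy="simple")

print(ner("Dispneia importante aos esforços + dor tipo peso no peito no esforço."))
```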
How to cite: **coming soon** |
BenGeorge/MyModel | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-15T17:06:24Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: summarizer-1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# summarizer-1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.6364
- Validation Loss: 2.9054
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.6364 | 2.9054 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Berzemu/Coco | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: "pt"
widget:
- text: "Dispneia importante aos esforços + dor tipo peso no peito no esforço."
- text: "Obeso, has, icc c # cintilografia miocardica para avaliar angina. Discreto edema mmii pricn a esquerda."
- text: "Plastia Mitral ( Insuficiencia ), CRM Saf-2Mg e e Saf-3MG ).(09/03/16). Nega palpitação."
- text: "Uso: AAS 100 -1xd; Metoprolol 25 -1xd; FSM -1xd ; Levotiroxina 175 -1xd; Sinva 40 -1xd; Fluoxetina 20-1xd."
- text: "Refere melhora da dispneia depois da cx porem mantem aos mdoeardos-leves esforço."
datasets:
- TempClinBr
---
# Portuguese NER - TempClinBr - BioBERTpt(bio)
Trained with BioBERTpt(bio) on the TempClinBr corpus.
Metrics:
```
precision recall f1-score support
0 0.44 0.29 0.35 28
1 0.75 0.60 0.66 420
2 0.57 0.40 0.47 10
3 0.57 0.36 0.44 11
4 0.70 0.85 0.77 124
5 0.72 0.67 0.69 291
6 0.84 0.90 0.87 2236
7 0.78 0.77 0.77 112
8 0.85 0.75 0.80 503
9 0.64 0.56 0.60 78
10 0.81 0.82 0.81 71
11 0.82 1.00 0.90 33
accuracy 0.81 3917
macro avg 0.71 0.66 0.68 3917
weighted avg 0.81 0.81 0.80 3917
```
Parameters:
```
device = cuda (Colab)
nclasses = len(tag2id)
nepochs = 50 => stopped early at epoch 16
batch_size = 16
batch_status = 32
learning_rate = 3e-5
early_stop = 5
max_length = 256
write_path = 'model'
```
Evaluation on the test set - TempClinBr
Note: the evaluation includes the "O" tag (label 7); if needed, recompute the averages without this tag.
```
tag2id ={'I-Ocorrencia': 0,
'I-Problema': 1,
'I-DepartamentoClinico': 2,
'B-DepartamentoClinico': 3,
'B-Ocorrencia': 4,
'B-Tratamento': 5,
'O': 6,
'B-Teste': 7,
'B-Problema': 8,
'I-Tratamento': 9,
'B-Evidencia': 10,
'I-Teste': 11,
'<pad>': 12}
precision recall f1-score support
0 0.59 0.20 0.29 51
1 0.77 0.69 0.73 645
2 0.67 0.71 0.69 14
3 0.87 0.43 0.58 30
4 0.71 0.80 0.75 146
5 0.79 0.77 0.78 261
6 0.84 0.93 0.88 2431
7 0.80 0.66 0.73 194
8 0.87 0.83 0.85 713
9 0.83 0.62 0.71 146
10 0.98 0.91 0.94 128
11 0.54 0.21 0.30 99
accuracy 0.83 4858
macro avg 0.77 0.65 0.69 4858
weighted avg 0.82 0.83 0.82 4858
```
How to cite: **coming soon** |
BigSalmon/MrLincoln12 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/nomia2011/1660614778038/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1403256770848505857/cE9TrrfP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">نومیا</div>
<div style="text-align: center; font-size: 14px;">@nomia2011</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from نومیا.
| Data | نومیا |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 51 |
| Short tweets | 565 |
| Tweets kept | 2630 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/18at4zay/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nomia2011's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gvcfr4e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gvcfr4e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nomia2011')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/MrLincoln125MNeo | [
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language:
- zh
license: apache-2.0
tags:
- bert
inference: true
widget:
- text: "生活的真谛是[MASK]。"
---
# Erlangshen-DeBERTa-v2-710M-Chinese
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
善于处理NLU任务,采用全词掩码的,中文版的7.1亿参数DeBERTa-v2-XLarge。
Good at solving NLU tasks, adopting Whole Word Masking, Chinese DeBERTa-v2-XLarge with 710M parameters.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | DeBERTa-v2 | 710M | 中文 Chinese |
## 模型信息 Model Information
参考论文:[DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://readpaper.com/paper/3033187248)
为了得到一个中文版的DeBERTa-v2-xlarge(710M),我们用悟道语料库(180G版本)进行预训练。我们在MLM中使用了全词掩码(wwm)的方式。具体地,我们在预训练阶段中使用了[封神框架](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen)大概花费了24张A100(40G)约21天。
To get a Chinese DeBERTa-v2-xlarge (710M), we use WuDao Corpora (180 GB version) for pre-training. We employ the Whole Word Masking (wwm) in MLM. Specifically, we use the [fengshen framework](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen) in the pre-training phase which cost about 21 days with 24 A100(40G) GPUs.
### 下游任务 Performance
我们展示了下列下游任务的结果:
We present the results on the following tasks:
| Model | AFQMC | TNEWS1.1 | IFLYTEK | OCNLI | CMNLI |
| -------------------------------------------------------------------------------------------------------------------------------- | ------ | -------- | ------- | ------ | ------ |
| RoBERTa-base | 0.7406 | 0.575 | 0.6036 | 0.743 | 0.7973 |
| RoBERTa-large | 0.7488 | 0.5879 | 0.6152 | 0.777 | 0.814 |
| [IDEA-CCNL/Erlangshen-DeBERTa-v2-97M-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-DeBERTa-v2-97M-Chinese) | 0.7405 | 0.571 | 0.5977 | 0.7568 | 0.807 |
| [IDEA-CCNL/Erlangshen-DeBERTa-v2-320M-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-DeBERTa-v2-320M-Chinese) | 0.7498 | 0.5817 | 0.6042 | 0.8022 | 0.8301 |
| **[IDEA-CCNL/Erlangshen-DeBERTa-v2-710M-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-DeBERTa-v2-710M-Chinese)** | 0.7549 | 0.5873 | 0.6177 | 0.8012 | 0.8389 |
## 使用 Usage
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline
import torch
tokenizer=AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-710M-Chinese', use_fast=False)
model=AutoModelForMaskedLM.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-710M-Chinese')
text = '生活的真谛是[MASK]。'
fillmask_pipe = FillMaskPipeline(model, tokenizer, device=-1)
print(fillmask_pipe(text, top_k=10))
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
BigSalmon/MrLincoln13 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/hordemommy/1660617228404/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1479995491651833867/duT0uSEr_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Average Hyperstition Enjoyer</div>
<div style="text-align: center; font-size: 14px;">@hordemommy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Average Hyperstition Enjoyer.
| Data | Average Hyperstition Enjoyer |
| --- | --- |
| Tweets downloaded | 1341 |
| Retweets | 52 |
| Short tweets | 96 |
| Tweets kept | 1193 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1a3bmvot/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hordemommy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3mi9wkhr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3mi9wkhr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hordemommy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/MrLincoln5 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Electric-Car-Brand-Classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.807692289352417
---
# Electric-Car-Brand-Classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
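A minimal inference sketch (the repo id and image path below are placeholders):
```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual path of this checkpoint.
classifier = pipeline("image-classification", model="your-username/Electric-Car-Brand-Classifier")

# "car.jpg" is a placeholder path to a local photo of an electric car.
print(classifier("car.jpg"))
```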
## Example Images
#### BMW Electric Car

#### Chevrolet Electric Car

#### Hyundai Electric Car

#### Tesla Electric Car

#### Toyota Electric Car
 |
BigSalmon/MrLincoln6 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ks-linear_lrX100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ks-linear_lrX100
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6970
- Accuracy: 0.8001
## Model description
More information needed
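As a minimal usage sketch for keyword spotting (the repo id and audio path below are placeholders; the superb `ks` task expects short 16 kHz clips of spoken commands):
```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual path of this checkpoint.
classifier = pipeline("audio-classification", model="your-username/wav2vec2-base-ks-linear_lrX100")

# "command.wav" is a placeholder path to a 16 kHz recording of a spoken keyword.
print(classifier("command.wav", top_k=3))
```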
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 256
- eval_batch_size: 256
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1789 | 1.0 | 50 | 1.3621 | 0.6225 |
| 0.636 | 2.0 | 100 | 0.9176 | 0.6912 |
| 0.5575 | 3.0 | 150 | 0.8543 | 0.7376 |
| 0.5289 | 4.0 | 200 | 0.6970 | 0.8001 |
| 0.4926 | 5.0 | 250 | 0.8232 | 0.7548 |
| 0.4831 | 6.0 | 300 | 0.7442 | 0.7755 |
| 0.4539 | 7.0 | 350 | 0.7484 | 0.7785 |
| 0.4816 | 8.0 | 400 | 0.7038 | 0.7982 |
| 0.4666 | 9.0 | 450 | 0.7277 | 0.7764 |
| 0.4417 | 10.0 | 500 | 0.7289 | 0.7870 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu115
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BigSalmon/MrLincolnBerta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
datasets:
- albertvillanova/legal_contracts
---
# bert-tiny-finetuned-legal-contracts-longer
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8), trained on a portion of the legal_contracts dataset for 1 epoch.
# Note
The model was not trained on the whole dataset (around 9.5 GB), but only on the first 20% of `train` plus the last 5% of `train`:
```python
from datasets import load_dataset

datasets_train = load_dataset('albertvillanova/legal_contracts', split='train[:20%]')
datasets_validation = load_dataset('albertvillanova/legal_contracts', split='train[-5%:]')
```
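A minimal fill-mask sketch (the repo id below is a placeholder for wherever this checkpoint is published):
```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual path of this checkpoint.
fill_mask = pipeline("fill-mask", model="your-username/bert-tiny-finetuned-legal-contracts-longer")

print(fill_mask("The parties agree that this [MASK] shall be governed by the laws of Delaware."))
```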
|
BigSalmon/NEO125InformalToFormalLincoln | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-08-16T04:48:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9785185185185186
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0665
- Accuracy: 0.9785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.286 | 1.0 | 190 | 0.1254 | 0.9581 |
| 0.1916 | 2.0 | 380 | 0.0802 | 0.9744 |
| 0.1155 | 3.0 | 570 | 0.0665 | 0.9785 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BigSalmon/Neo | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7710
- Accuracy: 0.9177
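Judging from the model name, this looks like a CLINC-style intent classifier. A minimal usage sketch with a placeholder namespace, since the card does not state the final Hub repo id:
```python
from transformers import pipeline

# "<namespace>" is a placeholder; substitute the actual account hosting this checkpoint.
classifier = pipeline("text-classification", model="<namespace>/distilbert-base-uncased-finetuned-clinc")
print(classifier("How do I reset my online banking password?"))
```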
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2892 | 1.0 | 318 | 3.2830 | 0.7432 |
| 2.627 | 2.0 | 636 | 1.8728 | 0.8403 |
| 1.5429 | 3.0 | 954 | 1.1554 | 0.8910 |
| 1.0089 | 4.0 | 1272 | 0.8530 | 0.9129 |
| 0.7938 | 5.0 | 1590 | 0.7710 | 0.9177 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Tokenizers 0.12.1
|
BigSalmon/ParaphraseParentheses | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ks-linear_lrX10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ks-linear_lrX10
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0471
- Accuracy: 0.6686
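Since this is a keyword-spotting model trained on the `superb` "ks" task, it can be tried with the `audio-classification` pipeline. A minimal sketch with a placeholder namespace, since the card does not state the final Hub repo id:
```python
from transformers import pipeline

# "<namespace>" is a placeholder; substitute the actual account hosting this checkpoint.
classifier = pipeline("audio-classification", model="<namespace>/wav2vec2-base-ks-linear_lrX10")
print(classifier("speech_command.wav"))  # 16 kHz mono audio, as in the superb "ks" task
```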
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 256
- eval_batch_size: 256
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6226 | 1.0 | 50 | 1.7588 | 0.6209 |
| 1.382 | 2.0 | 100 | 1.5696 | 0.6209 |
| 1.2373 | 3.0 | 150 | 1.3818 | 0.6212 |
| 1.1019 | 4.0 | 200 | 1.2577 | 0.6228 |
| 0.9831 | 5.0 | 250 | 1.1826 | 0.6331 |
| 0.9241 | 6.0 | 300 | 1.1200 | 0.6481 |
| 0.8695 | 7.0 | 350 | 1.0821 | 0.6581 |
| 0.8529 | 8.0 | 400 | 1.0632 | 0.6652 |
| 0.8385 | 9.0 | 450 | 1.0494 | 0.6677 |
| 0.8162 | 10.0 | 500 | 1.0471 | 0.6686 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu115
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BigSalmon/ParaphraseParentheses2.0 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: data/img_align_celeba
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-celeb-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `data/img_align_celeba` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
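The snippet above is left as a TODO in the generated card. A minimal sketch of unconditional sampling, assuming a recent `diffusers` release where the pipeline call returns an object with an `.images` field:
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("rdruce/ddpm-celeb-128")
image = pipeline().images[0]  # one unconditional sample
image.save("celeba_sample.png")
```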
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 1000
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/rdruce/ddpm-celeb-128/tensorboard?#scalars)
|
BigSalmon/PhraseBerta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: en_QA_2_epochs
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# en_QA_2_epochs
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16638, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
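An optimizer configuration like the one listed above can be rebuilt with the `create_optimizer` helper from `transformers` (TensorFlow backend). This is only a sketch; zero warmup steps is an assumption, since no warmup is listed:
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear (polynomial, power=1.0) decay from 2e-5 to 0 over
# 16638 steps and weight decay 0.01; num_warmup_steps=0 is an assumption.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=16638,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```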
### Training results
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BigSalmon/Points2 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2022-08-16T06:31:57Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.48 +/- 2.72
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are assumed to be defined in the surrounding notebook
# (e.g. the helper functions from the Hugging Face Deep RL course).
model = load_from_hub(repo_id="jasheershihab/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
BigSalmon/SimplifyText | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 17 | 2022-08-16T06:47:05Z | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ks-linear_lrX1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ks-linear_lrX1000
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5661
- Accuracy: 0.8325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.03
- train_batch_size: 256
- eval_batch_size: 256
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7558 | 1.0 | 50 | 1.0584 | 0.6462 |
| 0.5971 | 2.0 | 100 | 0.7816 | 0.7510 |
| 0.5382 | 3.0 | 150 | 0.7870 | 0.7520 |
| 0.5045 | 4.0 | 200 | 0.6647 | 0.7880 |
| 0.4717 | 5.0 | 250 | 1.1572 | 0.6053 |
| 0.4651 | 6.0 | 300 | 0.6387 | 0.7945 |
| 0.4205 | 7.0 | 350 | 0.5661 | 0.8325 |
| 0.4423 | 8.0 | 400 | 0.7100 | 0.7846 |
| 0.426 | 9.0 | 450 | 0.7054 | 0.7829 |
| 0.4067 | 10.0 | 500 | 0.6288 | 0.8114 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu115
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BigSalmon/T52 | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 8 | 2022-08-16T07:03:24Z | ---
license: apache-2.0
---
Models for https://github.com/k2-fsa/icefall/pull/529 |
BigSalmon/T5Salmon2 | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 13 | 2022-08-16T07:40:45Z | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ks-padpt200
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ks-padpt200
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6540
- Accuracy: 0.6037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 256
- eval_batch_size: 256
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2728 | 1.0 | 50 | 1.6540 | 0.6037 |
| 0.8498 | 2.0 | 100 | 1.2559 | 0.6015 |
| 0.7563 | 3.0 | 150 | 1.4192 | 0.5035 |
| 0.701 | 4.0 | 200 | 1.3318 | 0.5641 |
| 0.6592 | 5.0 | 250 | 1.3236 | 0.5666 |
| 0.6404 | 6.0 | 300 | 1.3653 | 0.5469 |
| 0.6315 | 7.0 | 350 | 1.4052 | 0.5082 |
| 0.6306 | 8.0 | 400 | 1.2818 | 0.5590 |
| 0.6297 | 9.0 | 450 | 1.3096 | 0.5659 |
| 0.6056 | 10.0 | 500 | 1.3595 | 0.5368 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu115
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BigSalmon/TS3 | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible",
"has_space"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-08-16T07:50:24Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6642
- Rouge1: 12.9097
- Rouge2: 3.2756
- Rougel: 12.2885
- Rougelsum: 12.3186
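A minimal usage sketch with the `summarization` pipeline; the Hub namespace below is a placeholder, since the card does not state the final repo id:
```python
from transformers import pipeline

# "<namespace>" is a placeholder for the account hosting this checkpoint.
summarizer = pipeline("summarization", model="<namespace>/mt5-small-finetuned-amazon-en-es")
print(summarizer("I loved this book: the plot twists kept me reading all night.", max_length=30))
```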
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| No log | 1.0 | 300 | 5.0305 | 4.4679 | 0.4134 | 4.3487 | 4.2807 |
| 9.3723 | 2.0 | 600 | 3.8535 | 10.6408 | 2.6198 | 10.5538 | 10.5819 |
| 9.3723 | 3.0 | 900 | 3.7590 | 12.1502 | 3.3079 | 12.013 | 12.1208 |
| 4.3429 | 4.0 | 1200 | 3.7019 | 13.0029 | 3.7708 | 12.9759 | 12.876 |
| 4.3429 | 5.0 | 1500 | 3.6782 | 13.1362 | 3.0904 | 12.561 | 12.5702 |
| 4.0043 | 6.0 | 1800 | 3.6698 | 12.8674 | 3.8026 | 12.3664 | 12.4216 |
| 4.0043 | 7.0 | 2100 | 3.6644 | 12.9581 | 3.3843 | 12.407 | 12.3956 |
| 3.872 | 8.0 | 2400 | 3.6642 | 12.9097 | 3.2756 | 12.2885 | 12.3186 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|