modelId (stringlengths 4-81) | tags (list) | pipeline_tag (stringclasses, 17 values) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (stringlengths 51-438k) |
---|---|---|---|---|---|---|
ArBert/albert-base-v2-finetuned-ner-gmm-twitter
|
[
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: test3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test3
This model is a fine-tuned version of [jcblaise/bert-tagalog-base-cased](https://huggingface.co/jcblaise/bert-tagalog-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3960
- Accuracy: 0.8683
- Precision: 0.8316
- Recall: 0.8653
- F1: 0.8481
## Model description
More information needed
## Intended uses & limitations
More information needed
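As a usage reference, here is a minimal sketch of loading a token-classification checkpoint like this one with the `transformers` pipeline; the repo id is hypothetical and the entity labels depend on the checkpoint's label set:
```python
from transformers import pipeline

# Hypothetical repo id; substitute the repo that hosts this checkpoint.
ner = pipeline("token-classification", model="your-username/test3", aggregation_strategy="simple")
print(ner("Pumunta si Maria sa Maynila kahapon."))
```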
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 151 | 0.3770 | 0.8431 | 0.8287 | 0.7951 | 0.8115 |
| No log | 2.0 | 302 | 0.3561 | 0.8528 | 0.7959 | 0.8790 | 0.8354 |
| No log | 3.0 | 453 | 0.3425 | 0.8647 | 0.8636 | 0.8094 | 0.8356 |
| 0.3579 | 4.0 | 604 | 0.3541 | 0.8615 | 0.8090 | 0.8824 | 0.8441 |
| 0.3579 | 5.0 | 755 | 0.3717 | 0.8611 | 0.8075 | 0.8836 | 0.8438 |
| 0.3579 | 6.0 | 906 | 0.3657 | 0.8691 | 0.8352 | 0.8619 | 0.8483 |
| 0.1703 | 7.0 | 1057 | 0.3826 | 0.8700 | 0.8370 | 0.8619 | 0.8493 |
| 0.1703 | 8.0 | 1208 | 0.3960 | 0.8683 | 0.8316 | 0.8653 | 0.8481 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
ArBert/roberta-base-finetuned-ner-kmeans-twitter
|
[
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | 2023-02-23T12:36:45Z |
---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- Tritkoman/autotrain-data-oldenglish5
co2_eq_emissions:
emissions: 10.382242558236783
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 3684798314
- CO2 Emissions (in grams): 10.3822
## Validation Metrics
- Loss: 2.959
- SacreBLEU: 11.287
- Gen len: 13.759
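A minimal inference sketch with `transformers` (the repo id below follows AutoTrain's usual `{user}/autotrain-{project}-{model_id}` naming and is an assumption, not a confirmed repo):
```python
from transformers import pipeline

# Assumed repo id; substitute the actual AutoTrain model repo.
translator = pipeline("translation", model="Tritkoman/autotrain-oldenglish5-3684798314")
print(translator("Hwaet! We Gardena in geardagum")[0]["translation_text"])
```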
|
Aran/DialoGPT-medium-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
widget:
- text: "generate analogy: mammal is to whale"
example_title: "Analogy Example 1 (semantic relation)"
- text: "generate analogy: wedding is to marriage"
example_title: "Analogy Example 1 (semantic relation, metaphor)"
- text: "generate analogy: London is to U.K."
example_title: "Analogy Example 2 (entity)"
- text: "generate analogy: actual is to actually"
example_title: "Analogy Example 3 (morphological)"
---
# relbert/t5-large-analogy
This is [t5-large](https://huggingface.co/t5-large) fine-tuned on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity)
for analogy generation: given a query word pair (e.g. `mammal is to whale`), the model generates a word pair (e.g. `bird is to crow`)
so that the query and the generated pair form an analogy statement.
### Usage
```python
from transformers import pipeline
pipe = pipeline('text2text-generation', model="relbert/t5-large-analogy")
output = pipe("generate analogy: mammal is to whale")
print(output)
>>> [{'generated_text': 'bird is to crow'}]
```
|
AriakimTaiyo/DialoGPT-medium-Kumiko
|
[
"conversational"
] |
conversational
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
language:
- en
library_name: keras
tags:
- code
pipeline_tag: image-classification
---
<h1>README for Pathway Vision Transformer</h1><br>
<p>PaViT (Pathway Vision Transformer) is an image recognition model developed by Ajibola Emmanuel Oluwaseun. The model is inspired by Google's PaLM (Pathways Language Model) and aims to demonstrate the potential of few-shot learning techniques in image recognition tasks.</p>
<h1>Model Performance</h1>
PaViT was trained on a CPU with 4GB of RAM using a Kaggle dataset of 15,000 images across 15 classes, reaching 88% accuracy with 4 self-attention heads. Accuracy improved to 96% with 12 self-attention heads and 12 stacked linear layers. These results show strong performance and fast training on a CPU despite the relatively small dataset.
<br>The uploaded weights were trained on an image dataset of 3 classes (cat, dog, and wild animal).<br>
<h1>Usage</h1>
The model can be used for image recognition tasks by using the trained weights provided in the repository. The code can be modified to use custom datasets, and the model's performance can be further improved by adding more self-attention heads and linear layers.
<h1>Contribution</h1>
The author believes that PaViT has the potential to outperform existing Vision Transformer models and is eager to see it continue to evolve through the contributions of developers and other contributors.
<br></br>
Contributions to the project are welcome and can be made through pull requests. Developers can also report issues or suggest new features for the project.
<h1>License</h1>
<p>This project is licensed under the MIT License.</p>
<h1>How to use:</h1>
```python
# Install and import libraries
!pip install huggingface_hub["tensorflow"]
import matplotlib.pyplot as plt
import numpy as np
import cv2
from huggingface_hub import from_pretrained_keras
```
<h1>On inference</h1><br>
```python
# Load the model
model = from_pretrained_keras('Ajibola/PaViT')
# Load and preprocess the image
image = cv2.imread('image_path')
image = cv2.resize(image, (224, 224))  # 224 is the default image size
image = image / image.max()  # Normalize the image to [0, 1]
image = np.expand_dims(image, axis=0)  # Add a batch dimension (assumed: model expects batched input)
prediction = model.predict(image)
prediction = np.argmax(prediction, axis=-1)  # Get the highest-probability class
```
|
Arnold/wav2vec2-hausa-demo-colab
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-sufficiency-ukp-balanced
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sufficiency-ukp-balanced
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1493
- Accuracy: 0.9559
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 69 | 0.2807 | 0.9007 |
| No log | 2.0 | 138 | 0.1804 | 0.9338 |
| No log | 3.0 | 207 | 0.1493 | 0.9559 |
| No log | 4.0 | 276 | 0.1558 | 0.9559 |
| No log | 5.0 | 345 | 0.1601 | 0.9559 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Arnold/wav2vec2-large-xlsr-hausa2-demo-colab
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: openrail
---
Model coming ASAP.
Check the original model here: https://huggingface.co/wimvanhenden/blade-runner-2049-v1
|
ArpanZS/search_model
|
[
"joblib"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.41 +/- 0.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
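Pending the card's own snippet, a minimal loading sketch under assumed names (both the repo id and the zip filename are hypothetical):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical repo id and filename; substitute the actual ones for this checkpoint.
checkpoint = load_from_hub(repo_id="your-username/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```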
|
Arpita/opus-mt-en-ro-finetuned-synthon-to-reactant
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -177.37 +/- 85.15
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Yureeh/ppo-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
ArshdeepSekhon050/DialoGPT-medium-RickAndMorty
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
library_name: keras
pipeline_tag: image-segmentation
---
Semantic segmentation model for segmenting sidewalks from other objects in an image.<br>
Uses a U-Net with a ResNet34 backbone for transfer learning.<br>
Trained on 512x512 images; expects images with even dimensions.<br>
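A minimal inference sketch, assuming the checkpoint is hosted as a Keras model on the Hub (the repo id is hypothetical):
```python
import cv2
import numpy as np
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("your-username/sidewalk-unet")  # hypothetical repo id
img = cv2.resize(cv2.imread("street.jpg"), (512, 512)) / 255.0  # even dimensions, as the model expects
mask = model.predict(np.expand_dims(img, axis=0))[0]  # per-pixel class scores
```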
|
Ashl3y/model_name
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-02-23T14:34:40Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 559.00 +/- 81.45
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga michalcisek5 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga michalcisek5 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga michalcisek5
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Aspect11/DialoGPT-Medium-LiSBot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | 2023-02-23T14:42:51Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: Leonhard17/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Augustvember/wokka4
|
[
"conversational"
] |
conversational
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: IM_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IM_Model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Augustvember/wokkabottest2
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.47 +/- 20.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
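In the meantime, a minimal loading sketch (repo id and filename are hypothetical):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical repo id and filename; substitute the actual ones for this checkpoint.
checkpoint = load_from_hub(repo_id="your-username/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```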
|
Aurora/asdawd
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mst_hp_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mst_hp_1
This model is a fine-tuned version of [Sjdan/mst_1](https://huggingface.co/Sjdan/mst_1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0452
- Wer: 1.1807
## Model description
More information needed
## Intended uses & limitations
More information needed
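Given the reported WER, this looks like a speech-recognition fine-tune; a minimal inference sketch with a hypothetical repo id:
```python
from transformers import pipeline

# Hypothetical repo id; substitute the repo that hosts this checkpoint.
asr = pipeline("automatic-speech-recognition", model="your-username/mst_hp_1")
print(asr("sample.wav")["text"])
```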
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.3325 | 1.36 | 500 | 0.9884 | 1.8972 |
| 1.3794 | 2.72 | 1000 | 0.9791 | 1.6573 |
| 0.9313 | 4.09 | 1500 | 0.4419 | 1.3988 |
| 0.388 | 5.45 | 2000 | 0.1630 | 1.3645 |
| 0.1358 | 6.81 | 2500 | 0.0452 | 1.1807 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.1+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
|
Ayham/albert_distilgpt2_summarization_cnn_dailymail
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine-tuned-IndoNLI-data_augmented-with_XLMR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-data_augmented-with_XLMR
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1625
- Accuracy: 0.12
## Model description
More information needed
## Intended uses & limitations
More information needed
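A minimal sketch of scoring a premise-hypothesis pair with a fine-tuned XLM-R NLI classifier like this one (the repo id is hypothetical and the label mapping depends on the checkpoint):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "your-username/fine-tuned-IndoNLI-data_augmented-with_XLMR"  # hypothetical repo id
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
inputs = tok("Ibu memasak di dapur.", "Ibu sedang tidur.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # probabilities over the NLI labels
```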
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5396 | 1.0 | 1 | 1.1625 | 0.12 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
Ayham/bert_distilgpt2_summarization_cnn_dailymail
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
tags:
- Frostbite-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Frostbite-v5
type: Frostbite-v5
metrics:
- type: mean_reward
value: 311.00 +/- 3.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Frostbite-v5**
This is a trained model of a PPO agent playing Frostbite-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Frostbite-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Frostbite-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
Ayham/bert_gpt2_summarization_cnndm
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
tags:
- Frostbite-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Frostbite-v5
type: Frostbite-v5
metrics:
- type: mean_reward
value: 5139.00 +/- 608.91
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Frostbite-v5**
This is a trained model of a PPO agent playing Frostbite-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Frostbite-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Frostbite-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
Ayham/distilbert_gpt2_summarization_cnndm
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
tags:
- NameThisGame-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: NameThisGame-v5
type: NameThisGame-v5
metrics:
- type: mean_reward
value: 12910.00 +/- 1650.29
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **NameThisGame-v5**
This is a trained model of a PPO agent playing NameThisGame-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id NameThisGame-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/NameThisGame-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/NameThisGame-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/NameThisGame-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id NameThisGame-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'NameThisGame-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
Ayham/roberta_roberta_summarization_cnn_dailymail
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: enlacinglines/SnowballTarget1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Ayham/robertagpt2_xsum
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | 2023-02-23T16:21:22Z |
---
tags:
- MsPacman-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MsPacman-v5
type: MsPacman-v5
metrics:
- type: mean_reward
value: 1464.00 +/- 322.56
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **MsPacman-v5**
This is a trained model of a PPO agent playing MsPacman-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id MsPacman-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id MsPacman-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'MsPacman-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
Ayham/robertagpt2_xsum4
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
tags:
- Boxing-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Boxing-v5
type: Boxing-v5
metrics:
- type: mean_reward
value: 99.30 +/- 1.27
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Boxing-v5**
This is a trained model of a PPO agent playing Boxing-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Boxing-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Boxing-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Boxing-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Boxing-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Boxing-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Boxing-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
Ayham/xlnet_gpt2_summarization_cnn_dailymail
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
tags:
- Kangaroo-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Kangaroo-v5
type: Kangaroo-v5
metrics:
- type: mean_reward
value: 4860.00 +/- 174.36
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Kangaroo-v5**
This is a trained model of a PPO agent playing Kangaroo-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Kangaroo-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Kangaroo-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Kangaroo-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
Ayran/DialoGPT-small-gandalf
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | 2023-02-23T16:27:35Z |
---
tags:
- Hero-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hero-v5
type: Hero-v5
metrics:
- type: mean_reward
value: 17666.50 +/- 2869.59
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Hero-v5**
This is a trained model of a PPO agent playing Hero-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Hero-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Hero-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Hero-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Hero-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Hero-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Hero-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
Ayran/DialoGPT-small-harry-potter-1-through-3
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | 2023-02-23T16:27:41Z |
---
tags:
- Hero-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hero-v5
type: Hero-v5
metrics:
- type: mean_reward
value: 19562.00 +/- 38.42
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Hero-v5**
This is a trained model of a PPO agent playing Hero-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Hero-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Hero-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Hero-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Hero-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Hero-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Hero-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
Ayu/Shiriro
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- Hero-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hero-v5
type: Hero-v5
metrics:
- type: mean_reward
value: 20034.00 +/- 180.80
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Hero-v5**
This is a trained model of a PPO agent playing Hero-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Hero-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Hero-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Hero-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Hero-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Hero-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Hero-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
AyushPJ/ai-club-inductions-21-nlp-ALBERT
|
[
"pytorch",
"albert",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | 2023-02-23T16:30:31Z |
---
tags:
- DoubleDunk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: DoubleDunk-v5
type: DoubleDunk-v5
metrics:
- type: mean_reward
value: -0.20 +/- 1.66
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **DoubleDunk-v5**
This is a trained model of a PPO agent playing DoubleDunk-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id DoubleDunk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id DoubleDunk-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'DoubleDunk-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad
|
[
"pytorch",
"electra",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"ElectraForQuestionAnswering"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | 2023-02-23T16:31:07Z |
---
tags:
- DoubleDunk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: DoubleDunk-v5
type: DoubleDunk-v5
metrics:
- type: mean_reward
value: -0.40 +/- 1.74
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **DoubleDunk-v5**
This is a trained model of a PPO agent playing DoubleDunk-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id DoubleDunk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id DoubleDunk-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'DoubleDunk-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
AyushPJ/ai-club-inductions-21-nlp-roBERTa-base-squad-v2
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 337.10 +/- 133.37
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AyushPJ/test-squad-trained-finetuned-squad
|
[
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="1itai1/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
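A short greedy rollout then looks like this (a sketch, assuming the pickle stores the Q-table under the `"qtable"` key as in the course notebook, and the classic `gym` step API):
```python
import numpy as np

state = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action w.r.t. the Q-table
    state, reward, done, _ = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward}")
```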
|
Azaghast/GPT2-SCP-Miscellaneous
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.41 +/- 2.49
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r SRobbins/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
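(For the ViZDoom environments bundled with Sample-Factory, this enjoy module is typically `sf_examples.vizdoom.enjoy_vizdoom`; the exact module path depends on how the examples were installed.)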
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
BE/demo-sentiment2021
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BSC-LT/roberta-base-bne-capitel-ner
|
[
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mst_hp2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mst_hp2
This model is a fine-tuned version of [Sjdan/mst_1](https://huggingface.co/Sjdan/mst_1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2128
- Wer: 1.5888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
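For reference, a minimal sketch of how these settings map onto the 🤗 Transformers `TrainingArguments` API (`output_dir` is a placeholder; the Adam betas and epsilon listed above are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mst_hp2",          # placeholder output path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=7,
    fp16=True,                     # "Native AMP" mixed precision
)
```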
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.1539 | 1.36 | 500 | 2.8040 | 1.5607 |
| 2.2802 | 2.72 | 1000 | 1.2387 | 2.3707 |
| 1.1976 | 4.09 | 1500 | 0.4206 | 1.8754 |
| 0.6861 | 5.45 | 2000 | 0.2622 | 1.6916 |
| 0.5078 | 6.81 | 2500 | 0.2128 | 1.5888 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.1+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
|
BSC-LT/roberta-large-bne-sqac
|
[
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:BSC-TeMU/SQAC",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"qa",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-model
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 496.84 +/- 23.14
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BSC-LT/roberta-large-bne
|
[
"pytorch",
"roberta",
"fill-mask",
"es",
"dataset:bne",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 24 | null |
---
language:
- "no"
license: apache-2.0
tags:
- whisper-event
- norwegian
datasets:
- NbAiLab/NCC_S
- NbAiLab/NPSC
- NbAiLab/NST
metrics:
- wer
model-index:
- name: Whisper Small Norwegian Bokmål
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: FLEURS
type: google/fleurs
config: nb_no
split: validation
args: nb_no
metrics:
- name: Wer
type: wer
value: 15.56
---
# Whisper Small Norwegian Bokmål
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) trained on NCC_S_3-NRKonly.
It is currently in the middle of a large training run.
## Model description
The model is trained on a large corpus of roughly 4,000 hours of speech. The transcripts come from subtitles produced by the Norwegian broadcaster NRK.
## Intended uses & limitations
The model will be free for everyone to use when it is finished.
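Once the checkpoint is public, transcription should work along these lines (a sketch; the repo id is taken from the TensorBoard link below, and `audio.mp3` is a placeholder input file):
```python
from transformers import pipeline

# Any audio file that ffmpeg can decode works as input here.
asr = pipeline("automatic-speech-recognition", model="NbAiLab/whisper-small-3NRKonly-nob")
print(asr("audio.mp3")["text"])
```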
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 128
- gradient_accumulation_steps: 2
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant with warmup
- lr_scheduler_warmup_steps: 1000
- training_steps: 50,000 (currently at step 1,000)
- mixed_precision_training: fp16
- deepspeed: true
### Live Training results
See [TensorBoard Metrics](https://huggingface.co/NbAiLab/whisper-small-3NRKonly-nob/tensorboard)
|
BW/TEST
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: enlacinglines/PyramidsRND
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Babelscape/rebel-large
|
[
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"en",
"dataset:Babelscape/rebel-dataset",
"transformers",
"seq2seq",
"relation-extraction",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"has_space"
] |
text2text-generation
|
{
"architectures": [
"BartForConditionalGeneration"
],
"model_type": "bart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9,458 | 2023-02-23T17:13:46Z |
---
license: openrail++
tags:
- stable-diffusion
- text-to-image
- openvino
---
# Stable Diffusion v2-1 Model for OpenVINO
A fork of [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1) exported to OpenVINO using [Optimum Intel](https://github.com/huggingface/optimum-intel) 🤗
```python
from optimum.intel.openvino import OVStableDiffusionPipeline
model_id = "echarlaix/stable-diffusion-2-1-openvino"
pipe = OVStableDiffusionPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Rembrandt"
image = pipe(prompt).images[0]
image.save("sailing_ship.png")
```
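Optionally, the pipeline can be reshaped to a fixed input shape before inference, which usually speeds things up on Intel hardware (a sketch, assuming Optimum Intel's static-reshape API):
```python
# Fix the input shapes and recompile; dynamic shapes are slower in OpenVINO.
pipe.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1)
pipe.compile()
```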
|
Babelscape/wikineural-multilingual-ner
|
[
"pytorch",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"de",
"en",
"es",
"fr",
"it",
"nl",
"pl",
"pt",
"ru",
"multilingual",
"dataset:Babelscape/wikineural",
"transformers",
"named-entity-recognition",
"sequence-tagger-model",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 41,608 | 2023-02-23T17:13:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.19 +/- 20.25
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
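A minimal sketch of loading and evaluating the checkpoint with `huggingface_sb3` (the repo id and zip filename below are placeholders for this model's actual repository):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholders: substitute the repo id and filename of this checkpoint.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10,
                                          deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```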
|
Babysittingyoda/DialoGPT-small-familyguy
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13 | 2023-02-23T17:14:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93176
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2291
- Accuracy: 0.9318
## Model description
More information needed
## Intended uses & limitations
More information needed
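A minimal inference sketch with the 🤗 `pipeline` API (the repo id is a placeholder for wherever this checkpoint is hosted):
```python
from transformers import pipeline

# Placeholder repo id; substitute the actual location of this checkpoint.
classifier = pipeline("text-classification", model="user/my_awesome_model")
print(classifier("This movie was surprisingly good."))
```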
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2289 | 1.0 | 1563 | 0.1912 | 0.9268 |
| 0.1492 | 2.0 | 3126 | 0.2291 | 0.9318 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Bagus/SER-LSSED
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sd1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sd1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Wer: 1.8162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.0933 | 1.36 | 500 | 3.1846 | 1.0 |
| 2.7062 | 2.72 | 1000 | 1.7891 | 2.3240 |
| 1.0986 | 4.09 | 1500 | 0.3844 | 2.1682 |
| 0.3024 | 5.45 | 2000 | 0.0961 | 1.8006 |
| 0.1238 | 6.81 | 2500 | 0.0617 | 1.8162 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.1+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
|
Bagus/ser-japanese
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mst_hp3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mst_hp3
This model is a fine-tuned version of [Sjdan/mst_1](https://huggingface.co/Sjdan/mst_1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0270
- Wer: 1.4050
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3299 | 1.36 | 500 | 0.5122 | 1.8785 |
| 0.6247 | 2.72 | 1000 | 0.1506 | 1.5639 |
| 0.3223 | 4.09 | 1500 | 0.0540 | 1.7539 |
| 0.1549 | 5.45 | 2000 | 0.0296 | 1.5265 |
| 0.0893 | 6.81 | 2500 | 0.0270 | 1.4050 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.1+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
|
Bala/model_name
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sd5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sd5
This model is a fine-tuned version of [Theju/sd5](https://huggingface.co/Theju/sd5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0472
- Wer: 1.1713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.536 | 0.68 | 500 | 2.4893 | 2.5109 |
| 2.0499 | 1.36 | 1000 | 1.3903 | 2.3178 |
| 1.2147 | 2.04 | 1500 | 0.5195 | 1.8536 |
| 0.6346 | 2.72 | 2000 | 0.1633 | 1.2617 |
| 0.3675 | 3.41 | 2500 | 0.1510 | 1.3115 |
| 0.2561 | 4.09 | 3000 | 0.1246 | 1.6760 |
| 0.1612 | 4.77 | 3500 | 0.0781 | 1.4330 |
| 0.111 | 5.45 | 4000 | 0.0811 | 1.3676 |
| 0.0669 | 6.13 | 4500 | 0.0582 | 1.1900 |
| 0.0575 | 6.81 | 5000 | 0.0472 | 1.1713 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.1+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
|
BaptisteDoyen/camembert-base-xnli
|
[
"pytorch",
"tf",
"camembert",
"text-classification",
"fr",
"dataset:xnli",
"transformers",
"zero-shot-classification",
"xnli",
"nli",
"license:mit",
"has_space"
] |
zero-shot-classification
|
{
"architectures": [
"CamembertForSequenceClassification"
],
"model_type": "camembert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 405,474 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 241.12 +/- 17.38
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Barleysack/AERoberta2
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -160.58 +/- 103.91
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo_utils',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'toinsson/ppo-cartpole-v0',
 'huggingface_token': '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',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
Barytes/hellohf
|
[
"tf",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
tags:
- generated_from_trainer
model-index:
- name: vlad-gpt2-generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vlad-gpt2-generator
This model is a fine-tuned version of [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 29 | 4.4130 |
| No log | 2.0 | 58 | 4.3853 |
| No log | 3.0 | 87 | 4.3768 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Battlehooks/distilbert-base-uncased-finetuned-squad
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 189.30 +/- 84.71
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'CartPole-v1',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'SuburbanLion/ppo-CartPole-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
Baybars/debateGPT
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### roboetics-mix - Clean from civit.ai https://civitai.com/models/3738/roboetics-mix
|
Baybars/wav2vec2-xls-r-300m-cv8-turkish
|
[
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
All models banned from Civitai for various reasons (not legal ones). Do what you want with that.
|
BearThreat/distilbert-base-uncased-finetuned-cola
|
[
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] |
text-classification
|
{
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30 | null |
---
license: apache-2.0
---
```python
from optimum.intel.openvino import OVStableDiffusionPipeline
model_id = "hf-internal-testing/tiny-stable-diffusion-openvino"
pipe = OVStableDiffusionPipeline.from_pretrained(model_id)
```
|
Bee-Garbs/DialoGPT-cartman-small
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: scibert_scivocab_uncased-v10-ES-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_scivocab_uncased-v10-ES-ner
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4185
- Precision: 0.6897
- Recall: 0.7616
- F1: 0.7239
- Accuracy: 0.9263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
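As a rough sketch, these map onto the `transformers` Trainer API as follows (the `output_dir` is illustrative; all other arguments are left at their defaults):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above
training_args = TrainingArguments(
    output_dir="scibert_scivocab_uncased-v10-ES-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```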
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3431 | 1.75 | 500 | 0.2748 | 0.6883 | 0.7114 | 0.6996 | 0.9210 |
| 0.1592 | 3.5 | 1000 | 0.3008 | 0.7108 | 0.7598 | 0.7345 | 0.9255 |
| 0.0891 | 5.24 | 1500 | 0.3634 | 0.6839 | 0.7132 | 0.6983 | 0.9214 |
| 0.0484 | 6.99 | 2000 | 0.3894 | 0.6831 | 0.7505 | 0.7152 | 0.9239 |
| 0.029 | 8.74 | 2500 | 0.4185 | 0.6897 | 0.7616 | 0.7239 | 0.9263 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Bella4322/Sarah
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-02-23T18:49:29Z |
---
language: en
thumbnail: http://www.huggingtweets.com/1jo_0-inkspirate_art/1677178518645/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1555297913361793025/56-M8aWg_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1587822296978579457/OIGp8r5g_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Inkspirate | Commission Open & 一条レイ</div>
<div style="text-align: center; font-size: 14px;">@1jo_0-inkspirate_art</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Inkspirate | Commission Open & 一条レイ.
| Data | Inkspirate \| Commission Open | 一条レイ |
| --- | --- | --- |
| Tweets downloaded | 2005 | 3231 |
| Retweets | 805 | 800 |
| Short tweets | 373 | 2027 |
| Tweets kept | 827 | 404 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/i3h4iuki/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @1jo_0-inkspirate_art's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/xnss78wm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/xnss78wm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/1jo_0-inkspirate_art')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BenGeorge/MyModel
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.73 +/- 18.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename; substitute this model's actual Hub entry
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = PPO.load(checkpoint)
```
|
Benicio/t5-small-finetuned-en-to-ru
|
[
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
}
| 50 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 22.40 +/- 58.52
name: mean_reward
verified: false
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import DQN
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename; substitute this model's actual Hub entry
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = DQN.load(checkpoint)
```
|
Bharathdamu/wav2vec2-large-xls-r-300m-hindi2-colab
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: openrail
language:
- en
- id
library_name: diffusers
tags:
- art
---
# Embedding for Diffusion Model
Some of them are not mine, but I love to collect them, so all rights are reserved to their respective owners.
## Screenshots


## Tech Type
**Client:** Embeddings
**Server:** AI generated art
|
Bia18/Beatriz
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
# Yolov4 Models
https://openvisionapi.com
# License
AGPLv3
|
Biasface/DDDC2
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
license: apache-2.0
language:
- en
metrics:
- sacrebleu
library_name: transformers
pipeline_tag: text-generation
---
# Model Card for DistilGutenMystery
<!-- Provide a quick summary of what the model is/does. [Optional] -->
Fine-tuned version of DistilGPT2 on a corpus of 20 various mystery/detective style novels collected from Project Gutenberg.
# Table of Contents
- [Model Card for DistilGutenMystery](#model-card-for-distilgutenmystery)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use [Optional]](#downstream-use-optional)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
- [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
- [Testing Data](#testing-data)
- [Factors](#factors)
- [Metrics](#metrics)
- [Results](#results)
- [Model Card Authors [optional]](#model-card-authors-optional)
- [Model Card Contact](#model-card-contact)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
Fine-tuned version of DistilGPT2 on a corpus of 20 various mystery/detective style novels collected from Project Gutenberg.
- **Developed by:** More information needed
- **Shared by [Optional]:** More information needed
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** apache-2.0
- **Parent Model:** More information needed
- **Resources for more information:** More information needed
- [GitHub Repo](https://github.umn.edu/quigl088/Distil-Guten-Mystery)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
Aiding story writing and brainstorming for novels. Possible use for generating nonsensical and absurd texts.
## Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
This model does not distinguish fact from fiction, therefore the model is not intended to support use-cases that require the generated text to be true.
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
There is also the possibility of outdated language being used that might reflect certain biases; if the model is ever deployed, further bias-related fine-tuning and testing are highly recommended.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The corpus was created from 20 books of mystery and detective stories collected from Project Gutenberg (gutenberg.org, on 2/20/23) for the purpose of aiding story writing for mystery/detective novels.
In total there are 1,048,519 tokens in the corpus, collected from the following 20 mystery/detective style books:
- The Extraordinary Adventures of Arsène Lupin, Gentleman-Burglar, by Maurice Leblanc: 55,726 tokens
- The Crimson Cryptogram A Detective Story by Fergus Hume: 60,179 tokens
- The House of a Thousand Candles by Meredith Nicholson: 83,133 tokens
- Tracked by Wireless by William Le Queux: 76,236 tokens
- Behind the Green Door, by Mildred A. Wirt: 43,705 tokens
- The house on the cliff by Franklin W. Dixon: 41,721 tokens
- Tales of Secret Egypt by Sax Rohmer: 76,892 tokens
- The Haunted Bookshop by Christopher Morley: 63,269 tokens
- Whispering Walls, by Mildred A. Wirt: 42,388 tokens
- The Clock Struck One by Fergus Hume: 61,614 tokens
- McAllister and His Double by Arthur Cheney Train: 65,583 tokens
- The Three Eyes by Maurice Leblanc: 62,887 tokens
- Ghost Beyond the Gate by Mildred A. Wirt: 41,172 tokens
- The Motor Rangers Through the Sierras by John Henry Goldfrap: 49,285 tokens
- Peggy Finds the Theatre by Virginia Hughes: 41,575 tokens
- The Puzzle in the Pond by Margaret Sutton: 36,485 tokens
- Jack the runaway; or, On the road with a circus by Frank V. Webster: 42,814 tokens
- The Camp Fire Girls Solve a Mystery; Or, The Christmas Adventure at Carver House: 50,286 tokens
- Danger at the Drawbridge by Mildred A. Wirt: 42,075 tokens
- Voice from the Cave by Mildred A. Wirt: 39,064 tokens
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
Each story was downloaded from Project Gutenberg, where the “Gutenberg” specific texts were removed from the document, along with chapter headings. Then stories were combined into a single text document that was then loaded as a dataset, sampled by paragraph.
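A minimal sketch of this preprocessing, assuming plain-text downloads (the file names and marker regexes below are illustrative, not from the card):
```python
import re
from datasets import Dataset

def strip_gutenberg(text: str) -> str:
    # Keep only the body between the standard Project Gutenberg start/end markers
    start = re.search(r"\*\*\* START OF .*? \*\*\*", text)
    end = re.search(r"\*\*\* END OF .*? \*\*\*", text)
    return text[start.end():end.start()] if start and end else text

books = []
for path in ["book1.txt", "book2.txt"]:  # placeholder file names
    with open(path, encoding="utf-8") as f:
        books.append(strip_gutenberg(f.read()))

# Combine into a single text and sample by paragraph (blank-line separated)
paragraphs = [p.strip() for p in "\n\n".join(books).split("\n\n") if p.strip()]
dataset = Dataset.from_dict({"text": paragraphs})
```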
Stated hyperparameters for training: `num_train_epochs=30`, `per_device_train_batch_size=32`; all other trainer values were left at their defaults.
Additionally, the tokenizer was set with `padding_side='left'`, the model's `pad_token_id` was set to `tokenizer.eos_token_id`, and `num_labels=0`.
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
More information needed
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
More information needed
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
The fine-tuned model was evaluated using the sacrebleu metric.
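For reference, a minimal way to compute this metric with the `evaluate` library (the prediction/reference pair below is made up for illustration):
```python
import evaluate

sacrebleu = evaluate.load("sacrebleu")
result = sacrebleu.compute(
    predictions=["It was a strange ending to a quiet case."],
    references=[["It was a strange ending to a quiet evening."]],
)
print(result["score"])
```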
## Results
- score: 0.2458566059729917
- counts: [56008, 5821, 552, 181]
- totals: [1014368, 985984, 957908, 930569]
- precisions: [5.52146755418152, 0.5903746916785668, 0.057625575733786544, 0.019450465252979627]
- bp: 1.0
- sys_len: 1014368
- ref_len: 212162
# Model Card Authors [optional]
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
Hugging Face, Jack Quigley
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('jquigl/DistilGutenMystery')
model = AutoModelForCausalLM.from_pretrained('jquigl/DistilGutenMystery')
generator = pipeline('text-generation', model = model, tokenizer = tokenizer)
gen = generator("It was a strange ending to a", min_length = 100, max_length = 150, num_return_sequences=3)
```
</details>
|
BigDaddyNe1L/Hhaa
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-02-23T20:08:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: finetuned-byt5-small-french-financial-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-byt5-small-french-financial-summarization
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3908
- Rouge1: 38.3821
- Rouge2: 25.1524
- Rougel: 32.4821
- Rougelsum: 33.6907
- Gen Len: 255.0
- Bertscore: 0.7099
- Bartscore: 0.5213
- Bleurt: -0.5166
- Meteor: 0.3293
- Frugal Score (mover-score): 0.3950
- Frugal Score (bert-score): 0.3950
- Cider: 2.0671
- Infolm Kl Divergence: -1.8542
- Infolm Beta Divergence: 1.349
- Infolm L1 Distance: 1.1568
- Infolm Fisher Rao Distance: 1.6107
- Baryscore: 0.8075
- Depthscore: 0.1326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
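A sketch of the equivalent `Seq2SeqTrainingArguments` (the `output_dir` is illustrative; other values are left at library defaults):
```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameter list above
args = Seq2SeqTrainingArguments(
    output_dir="finetuned-byt5-small-french-financial-summarization",
    learning_rate=5e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```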
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bertscore | Bartscore | Bleurt | Meteor | Frugal Score (mover-score) | Frugal Score (bert-score) | Cider | Infolm Kl Divergence | Infolm Beta Divergence | Infolm L1 Distance | Infolm Fisher Rao Distance | Baryscore | Depthscore |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:---------:|:---------:|:-------:|:------:|:--------------------------:|:-------------------------:|:------:|:--------------------:|:----------------------:|:------------------:|:--------------------------:|:---------:|:----------:|
| 1.6567 | 1.0 | 388 | 0.4711 | 5.0871 | 1.1231 | 4.9658 | 4.9611 | 19.0 | 0.5846 | 0.2422 | -1.517 | 0.0165 | -0.1685 | -0.1685 | 0.0000 | -3.0673 | 2.2832 | 1.4655 | 2.0738 | 1.0352 | 0.1853 |
| 0.5737 | 2.0 | 776 | 0.4319 | 5.1032 | 0.9523 | 4.8016 | 4.8303 | 19.0 | 0.5837 | 0.2416 | -1.5066 | 0.0156 | -0.1632 | -0.1632 | 0.0000 | -3.0586 | 2.2843 | 1.4734 | 2.0705 | 1.0352 | 0.1787 |
| 0.4973 | 3.0 | 1164 | 0.4149 | 5.3057 | 0.921 | 4.907 | 4.9704 | 19.0 | 0.5901 | 0.2427 | -1.5002 | 0.015 | -0.1608 | -0.1608 | 0.0000 | -2.9793 | 2.1962 | 1.4493 | 2.0508 | 0.9943 | 0.168 |
| 0.4684 | 4.0 | 1552 | 0.4099 | 5.3502 | 0.9357 | 4.9875 | 5.0373 | 19.0 | 0.5876 | 0.2422 | -1.4993 | 0.0147 | -0.1619 | -0.1619 | 0.0000 | -3.0476 | 2.2649 | 1.466 | 2.0704 | 0.9943 | 0.168 |
| 0.4451 | 5.0 | 1940 | 0.4009 | 5.1829 | 0.9931 | 4.953 | 4.9566 | 19.0 | 0.5875 | 0.2409 | -1.4945 | 0.0149 | -0.1624 | -0.1624 | 0.0000 | -2.9977 | 2.2391 | 1.4634 | 2.0625 | 1.0352 | 0.168 |
| 0.4296 | 6.0 | 2328 | 0.4006 | 5.2969 | 1.0497 | 5.0524 | 5.095 | 19.0 | 0.5885 | 0.2409 | -1.4904 | 0.0149 | -0.1608 | -0.1608 | 0.0000 | -3.0277 | 2.2529 | 1.4648 | 2.068 | 1.0352 | 0.168 |
| 0.417 | 7.0 | 2716 | 0.3939 | 5.3043 | 1.1314 | 5.1078 | 5.1487 | 19.0 | 0.5886 | 0.2413 | -1.4883 | 0.0157 | -0.1609 | -0.1609 | 0.0000 | -3.0082 | 2.2557 | 1.4666 | 2.0657 | 0.9845 | 0.168 |
| 0.4093 | 8.0 | 3104 | 0.3919 | 5.3213 | 1.0211 | 5.0701 | 5.1163 | 19.0 | 0.5889 | 0.2414 | -1.4896 | 0.0148 | -0.1611 | -0.1611 | 0.0000 | -3.0291 | 2.2436 | 1.4615 | 2.0644 | 1.0352 | 0.168 |
| 0.4023 | 9.0 | 3492 | 0.3918 | 5.3035 | 1.0808 | 5.0803 | 5.1161 | 19.0 | 0.5905 | 0.2410 | -1.4863 | 0.0152 | -0.1613 | -0.1613 | 0.0000 | -3.0528 | 2.273 | 1.4684 | 2.0708 | 1.0352 | 0.168 |
| 0.4008 | 10.0 | 3880 | 0.3908 | 5.3011 | 1.0808 | 5.1077 | 5.1454 | 19.0 | 0.5906 | 0.2414 | -1.4860 | 0.0152 | -0.1611 | -0.1611 | 0.0000 | -3.0383 | 2.2605 | 1.4686 | 2.0694 | 1.0352 | 0.168 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.0
- Tokenizers 0.13.2
|
BigSalmon/BestMask2
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: This is the Mem logo.
---
### Mem, Jasper, Writer testing Dreambooth model trained by ktkeller with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
This is the Jasper logo. (use that in your prompt)
This is the Mem logo. (use that in your prompt)

|
BigSalmon/DaBlank
|
[
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
}
| 4 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 29.30 +/- 12.39
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BigSalmon/InformalToFormalLincoln19
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
language:
- pt
library_name: nemo
datasets:
- mozilla-foundation/common_voice_12_0
tags:
- automatic-speech-recognition
model-index:
- name: stt_pt_citrinet_512_gamma_0_25
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 12.0
type: mozilla-foundation/common_voice_12_0
config: clean
split: test
args:
language: pt
metrics:
- name: Test WER
type: wer
value: 6.033
license: bsd-3-clause
---
# NVIDIA Streaming Citrinet 512 (pt-PT)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets) |
## Attribution
The initial checkpoint used was [stt_en_citrinet_512_gamma_0_25](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_en_citrinet_512_gamma_0_25) by [NVIDIA](https://github.com/NVIDIA), licensed under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/).
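The card ends with the attribution; a minimal transcription sketch with the NeMo toolkit might look like this (the checkpoint id and audio path are placeholders, not from the card):
```python
import nemo.collections.asr as nemo_asr

# Placeholder Hub id for this checkpoint
asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="<this-repo-id>")
transcriptions = asr_model.transcribe(["sample_pt.wav"])  # placeholder audio file
print(transcriptions)
```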
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 42 | null |
### Training Code
```python
from torch.utils.data import Dataset
from datasets import load_dataset
from tqdm import tqdm
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq
)
import evaluate
import numpy as np

MAX_LENGTH_INPUT = 512 + 128
MAX_LENGTH_OUTPUT = 2

class Seq2SeqDataset(Dataset):
def __init__(self, tokenizer, type_data='train'):
# Set up the datasets
data_path = "CarperAI/openai_summarize_comparisons"
if type_data == 'train':
dataset = load_dataset("CarperAI/openai_summarize_comparisons", split="train")
else:
dataset = load_dataset("CarperAI/openai_summarize_comparisons", split="test").select(range(20000))
self.prompts = []
self.outputs = []
inputs = dataset["prompt"]
choosen = dataset["chosen"]
rejected = dataset["rejected"]
for i, (inp, ch, re) in enumerate(zip(inputs, choosen, rejected)):
choice_first = np.random.choice([ch, re])
res = "A" if choice_first == ch else "B"
choice_second = ch if choice_first == re else re
prompt = f"""POST: {inp}\n\nRESPONSE A: {choice_first}\n\nRESPONSE B: {choice_second}\n\nWhich response is better? RESPONSE"""
output = f"{res}"
self.prompts.append(prompt)
self.outputs.append(output)
print("Example prompt: ", self.prompts[0])
print("Example output: ", self.outputs[0])
self.tokenizer = tokenizer
def __len__(self):
return len(self.prompts)
def __getitem__(self, idx):
input_text = self.prompts[idx]
output_text = self.outputs[idx]
model_input = self.tokenizer(
input_text,
max_length=MAX_LENGTH_INPUT,
padding='max_length',
truncation=True
)
with self.tokenizer.as_target_tokenizer():
labels = self.tokenizer(
output_text,
max_length=MAX_LENGTH_OUTPUT,
padding='max_length',
truncation=True
)["input_ids"]
model_input['labels'] = labels
model_input['labels'] = [-100 if token == self.tokenizer.pad_token_id else token for token in model_input['labels']]
return model_input
import wandb
wandb.init(name="stanfordnlp/SteamSHP-flan-t5-xl", project="trlx", entity="pvduy")
if __name__=="__main__":
config = {
"logging_steps": 100,
"eval_steps": 100,
"save_steps": 500,
"batch_size": 4,
"batch_size_val": 4,
"warmup_steps": 100,
"accum_steps": 2,
"num_beams": 3,
"output_dir": "flan-t5-rm",
}
accuracy_metric = evaluate.load("accuracy")
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
labels_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
acc = sum(np.array(labels_str) == np.array(pred_str)) / len(labels_str)
return {"accuracy": acc}
training_args = Seq2SeqTrainingArguments(
output_dir=config["output_dir"],
do_train=True,
num_train_epochs=5,
do_eval=False,
predict_with_generate=True,
adam_beta1=0.9,
adam_beta2=0.999,
learning_rate=5e-5,
half_precision_backend=True,
bf16=True,
per_device_train_batch_size=config["batch_size"],
per_device_eval_batch_size=config["batch_size_val"],
logging_steps=config["logging_steps"],
evaluation_strategy="epoch",
warmup_steps=config["warmup_steps"],
eval_accumulation_steps=1,
lr_scheduler_type="linear",
save_strategy="epoch",
gradient_accumulation_steps=config["accum_steps"],
deepspeed='configs/ds_configs/ds_config_gpt_2.json',
)
tokenizer = AutoTokenizer.from_pretrained("stanfordnlp/SteamSHP-flan-t5-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("stanfordnlp/SteamSHP-flan-t5-xl")
train_dataset = Seq2SeqDataset(tokenizer, type_data='train')
val_dataset = Seq2SeqDataset(tokenizer, type_data='val')
print("Train dataset size: ", len(train_dataset))
print("Val dataset size: ", len(val_dataset))
params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Number of trainable parameters: {params}")
trainer = Seq2SeqTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
compute_metrics=compute_metrics,
)
trainer.train()
```
### Inference Code
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from datasets import load_dataset
import numpy as np
import torch
from tqdm import tqdm
dataset = load_dataset("CarperAI/openai_summarize_comparisons", split="test")
tokenizer = AutoTokenizer.from_pretrained("flan-t5-rm/checkpoint-4338/")
model = AutoModelForSeq2SeqLM.from_pretrained("flan-t5-rm/checkpoint-4338/")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
df = dataset.to_pandas()
predictions = []
for i, row in tqdm(df.iterrows(), total=len(df)):
prompt = f"""POST: {row["prompt"]}\n\nRESPONSE A: {row["chosen"]}\n\nRESPONSE B: {row["rejected"]}\n\nWhich response is better? RESPONSE"""
x = tokenizer([prompt], return_tensors='pt').input_ids.to(device)
y = model.generate(x, max_new_tokens=1)
predictions.append(tokenizer.batch_decode(y, skip_special_tokens=True)[0])
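# The prompt always places the "chosen" summary as RESPONSE A, so the fraction of "A" predictions is the accuracy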
print("Accuracy: ", sum(np.array(predictions) == 'A') / len(predictions))
```
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 18 | null |
---
license: mit
tags:
- chemistry
- molecule
- drug
---
# Roberta Zinc 480m
This is a Roberta style masked language model trained on ~480m SMILES strings from the [ZINC database](https://zinc.docking.org/).
The model has ~102m parameters and was trained for 150000 iterations with a batch size of 4096 to a validation loss of ~0.122.
This model is useful for generating embeddings from SMILES strings.
```python
from transformers import RobertaTokenizerFast, RobertaForMaskedLM, DataCollatorWithPadding
tokenizer = RobertaTokenizerFast.from_pretrained("entropy/roberta_zinc_480m", max_len=128)
model = RobertaForMaskedLM.from_pretrained('entropy/roberta_zinc_480m')
collator = DataCollatorWithPadding(tokenizer, padding=True, return_tensors='pt')
smiles = ['Brc1cc2c(NCc3ccccc3)ncnc2s1',
'Brc1cc2c(NCc3ccccn3)ncnc2s1',
'Brc1cc2c(NCc3cccs3)ncnc2s1',
'Brc1cc2c(NCc3ccncc3)ncnc2s1',
'Brc1cc2c(Nc3ccccc3)ncnc2s1']
inputs = collator(tokenizer(smiles))
outputs = model(**inputs, output_hidden_states=True)
full_embeddings = outputs[1][-1]
mask = inputs['attention_mask']
embeddings = ((full_embeddings * mask.unsqueeze(-1)).sum(1) / mask.sum(-1).unsqueeze(-1))
```
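As a follow-up sketch (not from the original card), the pooled embeddings can be compared directly, e.g. by cosine similarity:
```python
import torch.nn.functional as F

# Pairwise cosine similarity between the pooled embeddings computed above
sims = F.cosine_similarity(embeddings.unsqueeze(1), embeddings.unsqueeze(0), dim=-1)
print(sims)  # 5x5 matrix for the five example SMILES
```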
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 73 | null |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
- safetensors
---
----
# SD-Silicon
SD-Silicon: A series of general-purpose models based off the experimental automerger, autoMBW.
A collaborative creation of Xerxemi#6423 & Xynon#7407.

All models listed have baked WD1.3 VAE. However, for the purposes of this model series, external VAE is also recommended.
----
# Licence
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
# Terms of use
- **Clearly indicate where modifications have been made.**
If you used it for merging, please state what steps you took to do so.
----
# --base models--
Silicon28: a.k.a. extestg4. The first autoMBW model to match/surpass the quality of manual merge block weight merges.
Silicon29: a.k.a. extesto4. A similar, but much larger, list of merges based off the list of Silicon28. The first good model to be constructed on a semi-stabilized autoMBW codebase.
# --specialty models--
Silicon28-negzero: a.k.a. extestg4-negzero. A negatively finetuned version of Silicon28 for 10 epochs off a dataset of 3990 images. Better at some, worse at others.
Silicon29-dark: a.k.a. extesto4-dark. Silicon29, but merged with noise offset. Gives darker output than the original base.
# --future models--
More will be posted soon<sup>TM</sup>
----
# Recommended Settings
- Sampler: DPM++ 2M
- Steps: 42 + 42 (can probably go lower; I just run at this)
- Upscaler: Latent (bicubic antialiased)
- Denoising: ~0.5 to ~0.6
- CFG: 13
----
more comparisons here: https://medium.com/@media_97267/the-automated-stable-diffusion-checkpoint-merger-autombw-44f8dfd38871
Note: all comparison photos are pure Silicon29 with the latent bicubic antialiased upscaler.




----
# Q: Why is this named Silicon?
A: Silicon's atomic number is 14. This line of models was originally supposed to be the 14th experimental model in Xynon/models, a.k.a. experimental14a/b/c.
# Q: Where do I find the automerger used to make these models?
A: https://github.com/Xerxemi/sdweb-auto-MBW | preliminary article here: https://medium.com/@media_97267/the-automated-stable-diffusion-checkpoint-merger-autombw-44f8dfd38871
----
|
CAMeL-Lab/bert-base-arabic-camelbert-ca
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 580 | null |
---
license: mit
tags:
- NLP
datasets:
- Yaxin/SemEval2014Task4Raw
metrics:
- f1
- precision
- recall
pipeline_tag: text2text-generation
---
# joint_tk-instruct-base-def-pos-laptops
This model is finetuned for the Joint Task. The finetuning was carried out by adding prompts of the form:
- definition + 2 positive examples
The prompt is prepended onto each input review. It is important to note that **this model was finetuned on samples from the laptops domain.**
The code for the official implementation of the paper [**InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis**](https://arxiv.org/abs/2302.08624) can be
found [here](https://github.com/kevinscaria/InstructABSA).
For the Joint Task, this model is the current SOTA.
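A minimal inference sketch (the checkpoint id and the instruction prompt below are illustrative placeholders, not the exact ones used in the paper):
```python
from transformers import pipeline

absa = pipeline("text2text-generation", model="<this-repo-id>")  # placeholder id
instruction = (
    "Definition: The output will be the aspect terms and their polarities. "
    "Positive example 1- input: The screen is gorgeous. output: screen:positive. "
    "Positive example 2- input: The fan gets loud. output: fan:negative. "
    "Now complete the following example- input: "
)
review = "The battery life is amazing but the keyboard feels cheap."
print(absa(instruction + review + " output:"))
```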
## Training data
InstructABSA models are trained on the benchmark dataset for Aspect Based Sentiment Analysis tasks viz. SemEval 2014. This [dataset](https://alt.qcri.org/semeval2014/task4/index.php?id=data-and-tools) consists of reviews
from laptops and restaurant domains and their corresponding aspect term and polarity labels.
### BibTeX entry and citation info
If you use this model in your work, please cite the following paper:
```bibtex
@inproceedings{Scaria2023InstructABSAIL,
title={InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis},
author={Kevin Scaria and Himanshu Gupta and Saurabh Arjun Sawant and Swaroop Mishra and Chitta Baral},
year={2023}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-ner
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 42 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 235.15 +/- 14.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename; substitute this model's actual Hub entry
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = PPO.load(checkpoint)
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 54 | null |
---
license: mit
tags:
- NLP
datasets:
- Yaxin/SemEval2014Task4Raw
metrics:
- f1
- precision
- recall
pipeline_tag: text2text-generation
language:
- en
---
# joint_tk-instruct-base-def-pos-neg-neut-laptops
This model is finetuned for the Joint Task. The finetuning was carried out by adding prompts of the form:
- definition + 2 positive examples + 2 negative examples + 2 neutral examples
The prompt is prepended onto each input review. It is important to note that **this model was finetuned on samples from the laptops domain.**
The code for the official implementation of the paper [**InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis**](https://arxiv.org/abs/2302.08624) can be
found [here](https://github.com/kevinscaria/InstructABSA).
For the Joint Task, this model is the current SOTA.
## Training data
InstructABSA models are trained on the benchmark dataset for Aspect Based Sentiment Analysis tasks viz. SemEval 2014. This [dataset](https://alt.qcri.org/semeval2014/task4/index.php?id=data-and-tools) consists of reviews
from laptops and restaurant domains and their corresponding aspect term and polarity labels.
### BibTeX entry and citation info
If you use this model in your work, please cite the following paper:
```bibtex
@inproceedings{Scaria2023InstructABSAIL,
title={InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis},
author={Kevin Scaria and Himanshu Gupta and Saurabh Arjun Sawant and Swaroop Mishra and Chitta Baral},
year={2023}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 45 | null |
---
license: mit
language:
- ko
pipeline_tag: text-generation
widget:
- text: 딥러닝 모델은
---
# gpt2-ko
Korean gpt2 model, trained from scratch.
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="mykor/gpt2-ko")
```
```python
>>> pipe("오늘 점심 뭐먹지?")
[{'generated_text': '오늘 점심 뭐먹지?치킨과 족발 먹으려고 ㅎ난 치킨먹구싶당 ㅎㅎ나 낼 아침에 먹을겡 ㅎ치킨 먹고시퍼 ㅎㅎ난 치킨에닭도리탕..난 닭도리탕~난 치킨먹었어 ㅎ치킨은 족'}]
```
```python
>>> pipe("애플은 이번 업데이트를 통해")
[{'generated_text': "애플은 이번 업데이트를 통해 안드로이드 플랫폼 내에서 '모바일 카드'를 판매할 예정'이라며 '기존에는 안드로이드 마켓 내에서만 결제가 가능했다.앞으로는 pc를 통해 결제할 수 있을 것'이라고 덧붙였다.한편, sk텔레콤은 이달 초에도 '갤럭시 s8"}]
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 34 | null |
---
tags:
- autotrain
- text-classification
- healthcare
- sdoh
- social determinants of health
language:
- en
widget:
- text: The Patient is homeless
- text: The pt misuses prescription medicine
- text: The patient often goes hungry because they can't afford enough food
- text: >-
The patient's family is struggling to pay the rent and is at risk of being
evicted from their apartment
- text: The patient lives in a neighborhood with poor public transportation options
- text: >-
The patient was a victim of exploitation of dependency, causing them to feel
taken advantage of and vulnerable
- text: >-
The patient's family has had to move in with relatives due to financial
difficulties
- text: >-
The patient's insurance plan has annual limits on certain preventive care
services, such as screenings and vaccines.
- text: >-
The depression may be provoking the illness or making it more difficult to
manage
- text: >-
Due to the language barrier, the patient is having difficulty communicating
their medical history to the healthcare provider.
datasets:
- reachosen/autotrain-data-sdohv7
co2_eq_emissions:
emissions: 0.01134763220649804
pipeline_tag: text-classification
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 3701198597
- CO2 Emissions (in grams): 0.0113
## Validation Metrics
- Loss: 0.057
- Accuracy: 0.990
- Macro F1: 0.990
- Micro F1: 0.990
- Weighted F1: 0.990
- Macro Precision: 0.990
- Micro Precision: 0.990
- Weighted Precision: 0.991
- Macro Recall: 0.990
- Micro Recall: 0.990
- Weighted Recall: 0.990
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/reachosen/autotrain-sdohv7-3701198597
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("reachosen/autotrain-sdohv7-3701198597", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("reachosen/autotrain-sdohv7-3701198597", use_auth_token=True)
inputs = tokenizer("The Patient is homeless", return_tensors="pt")
outputs = model(**inputs)
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 132 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.55 +/- 16.62
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename; substitute this model's actual Hub entry
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = PPO.load(checkpoint)
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,862 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: dfm794/poca-SoccerTwos-2x-12-3-6-6-1-l
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 75 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: dotunadegbite/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 71 | null |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8124
- Accuracy: 0.8324
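As a rough inference sketch, the checkpoint can be loaded with the transformers video-classification pipeline (the repo id and video path below are placeholders, since the card does not state the published id; video decoding additionally requires `decord` or `av`):
```python
from transformers import pipeline

# "<user>/videomae-base-finetuned-ucf101-subset" is a placeholder repo id.
classifier = pipeline(
    "video-classification",
    model="<user>/videomae-base-finetuned-ucf101-subset",
)

# "clip.mp4" is a placeholder path to a short video clip.
print(classifier("clip.mp4"))
```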
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3990
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1012 | 0.1 | 398 | 1.9809 | 0.38 |
| 1.0416 | 1.1 | 796 | 1.6140 | 0.56 |
| 0.2096 | 2.1 | 1194 | 1.5776 | 0.66 |
| 0.7101 | 3.1 | 1592 | 1.2004 | 0.74 |
| 1.2344 | 4.1 | 1990 | 1.9621 | 0.58 |
| 0.1809 | 5.1 | 2388 | 1.6322 | 0.71 |
| 0.0011 | 6.1 | 2786 | 1.8266 | 0.71 |
| 0.0951 | 7.1 | 3184 | 1.5910 | 0.78 |
| 0.4047 | 8.1 | 3582 | 1.9999 | 0.7 |
| 0.0011 | 9.1 | 3980 | 1.5903 | 0.78 |
| 0.001 | 10.0 | 3990 | 1.5903 | 0.78 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 21 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: deberta-v3-small-Tweet_About_Disaster_Or_Not
results: []
language:
- en
---
# deberta-v3-small-Tweet_About_Disaster_Or_Not
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2942
- Accuracy: 0.9050
- F1: 0.7453
- Recall: 0.7453
- Precision: 0.7453
## Model description
This is a binary classification model that determines whether a given tweet is about a disaster.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Binary%20Classification/Transformer%20Comparison/Is%20This%20Tweet%20Referring%20to%20a%20Disaster%20or%20Not%3F%20-%20DeBERTa.ipynb
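A minimal inference sketch with the transformers pipeline (the repo id is the one this model is published under; the sample tweet is an assumption):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline(
    "text-classification",
    model="DunnBC22/deberta-v3-small-Tweet_About_Disaster_Or_Not",
)

# The input tweet is illustrative only.
print(classifier("Forest fire spreading near the highway, please evacuate!"))
```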
### Associated Projects
This project is part of a comparison of multiple transformers. The others can be found at the following links:
- https://huggingface.co/DunnBC22/roberta-base-Tweet_About_Disaster_Or_Not
- https://huggingface.co/DunnBC22/albert-base-v2-Tweet_About_Disaster_Or_Not
- https://huggingface.co/DunnBC22/electra-base-emotion-Tweet_About_Disaster_Or_Not
- https://huggingface.co/DunnBC22/ernie-2.0-base-en-Tweet_About_Disaster_Or_Not
- https://huggingface.co/DunnBC22/distilbert-base-uncased-Tweet_About_Disaster_Or_Not
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
The main limitation is the quality of the data source.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/vstepanenko/disaster-tweets
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.4167 | 1.0 | 143 | 0.3148 | 0.8830 | 0.7164 | 0.7925 | 0.6537 |
| 0.255 | 2.0 | 286 | 0.2942 | 0.9050 | 0.7453 | 0.7453 | 0.7453 |
| 0.1935 | 3.0 | 429 | 0.3022 | 0.8874 | 0.7288 | 0.8113 | 0.6615 |
| 0.1512 | 4.0 | 572 | 0.3405 | 0.8786 | 0.7172 | 0.8255 | 0.6341 |
| 0.1192 | 5.0 | 715 | 0.3618 | 0.8909 | 0.7373 | 0.8208 | 0.6692 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.12.1
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-half
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 16 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: roberta-base-Tweet_About_Disaster_Or_Not
results: []
language:
- en
---
# roberta-base-Tweet_About_Disaster_Or_Not
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2640
- Accuracy: 0.8989
- F1: 0.7569
- Recall: 0.8211
- Precision: 0.7020
## Model description
This is a binary classification model that determines whether a given tweet is about a disaster.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Binary%20Classification/Transformer%20Comparison/Is%20This%20Tweet%20Referring%20to%20a%20Disaster%20or%20Not%3F%20-%20RoBERTa.ipynb
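A minimal inference sketch with the transformers pipeline (the repo id is the one this model is published under; the sample tweet is an assumption):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="DunnBC22/roberta-base-Tweet_About_Disaster_Or_Not",
)

# The input tweet is illustrative only.
print(classifier("Minor flooding reported downtown after heavy rain."))
```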
### Associated Projects
This project is part of a comparison of multiple transformers. The others can be found at the following links:
- https://huggingface.co/DunnBC22/deberta-v3-small-Tweet_About_Disaster_Or_Not
- https://huggingface.co/DunnBC22/albert-base-v2-Tweet_About_Disaster_Or_Not
- https://huggingface.co/DunnBC22/electra-base-emotion-Tweet_About_Disaster_Or_Not
- https://huggingface.co/DunnBC22/ernie-2.0-base-en-Tweet_About_Disaster_Or_Not
- https://huggingface.co/DunnBC22/distilbert-base-uncased-Tweet_About_Disaster_Or_Not
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
The main limitation is the quality of the data source.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/vstepanenko/disaster-tweets
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.372 | 1.0 | 143 | 0.3067 | 0.8690 | 0.7205 | 0.8807 | 0.6095 |
| 0.2356 | 2.0 | 286 | 0.2640 | 0.8989 | 0.7569 | 0.8211 | 0.7020 |
| 0.165 | 3.0 | 429 | 0.3029 | 0.8997 | 0.7635 | 0.8440 | 0.6970 |
| 0.1118 | 4.0 | 572 | 0.3256 | 0.8971 | 0.7578 | 0.8394 | 0.6906 |
| 0.0766 | 5.0 | 715 | 0.3733 | 0.9024 | 0.7711 | 0.8578 | 0.7004 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.12.1
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 229 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### PendantTest_SD21_v1 Dreambooth model trained by DFStewart with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
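A hedged diffusers sketch for trying the concept locally (the repo id and instance token below are assumptions, since the card states neither):
```python
import torch
from diffusers import StableDiffusionPipeline

# "<user>/PendantTest_SD21_v1" is a placeholder repo id for this Dreambooth concept.
pipe = StableDiffusionPipeline.from_pretrained(
    "<user>/PendantTest_SD21_v1", torch_dtype=torch.float16
).to("cuda")

# The instance token "pendanttest" is assumed, not confirmed by the card.
image = pipe("a photo of pendanttest pendant on a wooden table").images[0]
image.save("sample.png")
```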
Sample pictures of this concept:

|
CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-seinfeld
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-seinfeld
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0471
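A minimal generation sketch (the repo id is a placeholder, since the card gives only the model name; the prompt is an assumption):
```python
from transformers import pipeline

# "<user>/gpt2-finetuned-seinfeld" is a placeholder repo id.
generator = pipeline("text-generation", model="<user>/gpt2-finetuned-seinfeld")

# The prompt is illustrative only.
print(generator("JERRY:", max_new_tokens=50)[0]["generated_text"])
```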
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3835 | 0.99 | 26 | 3.2201 |
| 3.316 | 1.99 | 52 | 3.1480 |
| 3.2054 | 2.99 | 78 | 3.1031 |
| 3.1206 | 3.99 | 104 | 3.0799 |
| 3.0525 | 4.99 | 130 | 3.0655 |
| 2.9891 | 5.99 | 156 | 3.0589 |
| 2.9358 | 6.99 | 182 | 3.0504 |
| 2.8765 | 7.99 | 208 | 3.0493 |
| 2.8189 | 8.99 | 234 | 3.0497 |
| 2.7579 | 9.99 | 260 | 3.0471 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 21 | null |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: zh
datasets:
- aidatatang_200zh
license: cc-by-4.0
---
## ESPnet2 ASR model
### `pyf98/aidatatang_200zh_e_branchformer_e16`
This model was trained by Yifan Peng using the aidatatang_200zh recipe in [espnet](https://github.com/espnet/espnet/).
References:
- [E-Branchformer: Branchformer with Enhanced merging for speech recognition (SLT 2022)](https://arxiv.org/abs/2210.00077)
- [Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding (ICML 2022)](https://proceedings.mlr.press/v162/peng22a.html)
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 232a317a66eda6c5caee094db4b714bc912dce95
pip install -e .
cd egs2/aidatatang_200zh/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/aidatatang_200zh_e_branchformer_e16
```
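For quick inference from Python, a sketch along these lines should also work once `espnet` and `espnet_model_zoo` are installed (the model id is the one above; the audio path is an assumption):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Download this model from the Hub and build an inference wrapper.
speech2text = Speech2Text.from_pretrained("pyf98/aidatatang_200zh_e_branchformer_e16")

# "sample.wav" is a placeholder for a 16 kHz mono recording.
speech, rate = soundfile.read("sample.wav")
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```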
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Feb 22 23:08:40 CST 2023`
- python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]`
- espnet version: `espnet 202301`
- pytorch version: `pytorch 1.13.1`
- Git hash: `232a317a66eda6c5caee094db4b714bc912dce95`
- Commit date: `Wed Feb 22 14:22:01 2023 -0600`
## exp/asr_train_asr_e_branchformer_e16_linear1024_lr1e-3_newspecaug_raw_zh_char_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/dev|24216|24216|82.4|17.6|0.0|0.0|17.6|17.6|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/test|48144|48144|79.9|20.1|0.0|0.0|20.1|20.1|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/dev|24216|234524|96.7|2.9|0.4|0.2|3.4|17.6|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/test|48144|468933|96.1|3.5|0.4|0.2|4.1|20.1|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_e_branchformer_e16_linear1024_lr1e-3_newspecaug.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_e_branchformer_e16_linear1024_lr1e-3_newspecaug_raw_zh_char_sp
ngpu: 1
seed: 0
num_workers: 6
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 38803
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 60
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 16000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_zh_char_sp/train/speech_shape
- exp/asr_stats_raw_zh_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_zh_char_sp/valid/speech_shape
- exp/asr_stats_raw_zh_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- sound
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.001
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 35000
token_list:
- <blank>
- <unk>
- 我
- 的
- 你
- 么
- 不
- 是
- 了
- 一
- 有
- 天
- 什
- 好
- 在
- 个
- 怎
- 吗
- 话
- 要
- 给
- 电
- 上
- 没
- 人
- 说
- 到
- 啊
- 就
- 这
- 时
- 来
- 下
- 想
- 打
- 点
- 去
- 还
- 看
- 道
- 多
- 明
- 那
- 知
- 以
- 今
- 能
- 会
- 哪
- 都
- 可
- 大
- 吧
- 机
- 样
- 里
- 十
- 现
- 们
- 过
- 吃
- 开
- 家
- 回
- 发
- 中
- 呢
- 听
- 候
- 为
- 也
- 日
- 爱
- 歌
- 三
- 起
- 小
- 二
- 心
- 子
- 手
- 生
- 最
- 儿
- 学
- 放
- 信
- 女
- 号
- 几
- 和
- 老
- 晚
- 少
- 车
- 叫
- 快
- 用
- 自
- 年
- 睡
- 问
- 事
- 后
- 五
- 乐
- 安
- 出
- 找
- 帮
- 意
- 觉
- 气
- 国
- 得
- 情
- 请
- 早
- 地
- 做
- 首
- 真
- 公
- 近
- 对
- 办
- 很
- 行
- 己
- 呀
- 八
- 友
- 如
- 六
- 节
- 喜
- 新
- 欢
- 西
- 间
- 月
- 班
- 他
- 网
- 方
- 分
- 播
- 笑
- 查
- 息
- 名
- 四
- 成
- 东
- 美
- 零
- 市
- 饭
- 世
- 朋
- 玩
- 州
- 果
- 才
- 七
- 别
- 把
- 谁
- 九
- 再
- 平
- 太
- 干
- 思
- 关
- 谢
- 高
- 语
- 理
- 些
- 界
- 着
- 长
- 钱
- 动
- 曲
- 感
- 聊
- 片
- 何
- 面
- 男
- 音
- 工
- 南
- 午
- 本
- 通
- 火
- 经
- 路
- 星
- 唱
- Q
- 业
- 讲
- 英
- 北
- 服
- 短
- 妈
- 海
- 文
- 跟
- 作
- 票
- 只
- 等
- 刚
- 码
- 字
- 影
- 附
- 婆
- 见
- 又
- 祝
- 无
- 该
- 提
- 末
- 让
- 法
- 定
- 买
- 告
- 照
- 体
- 考
- 床
- 醒
- 记
- 前
- 题
- 走
- 加
- 主
- 从
- 视
- 张
- 身
- 两
- 钟
- 京
- 于
- 收
- 阳
- 哈
- 店
- 山
- 院
- 站
- 百
- 宝
- 所
- 诉
- 期
- 之
- 嘛
- 夜
- 第
- 游
- 比
- 系
- 昨
- 费
- 交
- 水
- 应
- 次
- 周
- 亲
- 联
- 全
- 福
- 江
- 孩
- 区
- 广
- 头
- 接
- O
- 校
- 已
- 空
- 门
- 认
- 相
- 度
- 实
- 活
- 色
- 假
- 白
- 算
- 外
- 流
- 啦
- 花
- 然
- 结
- 每
- 休
- 边
- 部
- 位
- 场
- 半
- 王
- 声
- 件
- 力
- 金
- 重
- 识
- 正
- 华
- 光
- 衣
- 载
- 死
- 价
- 翻
- 图
- 城
- 脑
- 同
- 久
- 译
- 特
- 物
- 搜
- 务
- 报
- 线
- 哦
- 卡
- E
- 当
- A
- 爸
- 圣
- 完
- 幺
- 合
- P
- 雨
- 黄
- 种
- 司
- 直
- I
- 她
- 哥
- 书
- 银
- 试
- 解
- 穿
- 酒
- 准
- 换
- 望
- 被
- S
- 原
- 内
- 诞
- 带
- 介
- 口
- 清
- N
- 马
- 习
- 否
- 置
- 啥
- 索
- 戏
- 与
- 懂
- 飞
- 需
- 性
- 错
- 送
- 级
- 器
- 单
- 离
- 远
- 备
- 师
- 课
- 注
- 因
- 难
- 其
- 像
- 元
- 消
- 表
- 便
- 球
- 风
- 教
- 故
- 科
- 李
- 常
- 林
- 龙
- 呵
- 数
- 代
- 总
- 忘
- 商
- 变
- 婚
- 苹
- 红
- 格
- 坐
- 绍
- 答
- 量
- 冷
- 青
- 询
- 春
- 神
- 省
- 蛋
- 姐
- 陪
- 兴
- 利
- 台
- 句
- 万
- 计
- 保
- 刘
- 传
- 深
- 管
- 运
- 德
- 医
- 容
- 品
- 越
- 亮
- 词
- 河
- 化
- 宁
- 始
- 武
- 希
- 洗
- 复
- 设
- 处
- 技
- 房
- T
- 您
- 取
- 眼
- 县
- 笨
- 术
- 温
- 永
- 受
- 更
- 先
- 尔
- 程
- 彩
- 演
- 忙
- 专
- 愿
- 进
- 湖
- 建
- 况
- 伤
- 喝
- 底
- 卖
- 功
- 录
- 改
- H
- 剧
- 预
- 梦
- L
- 达
- 连
- 馆
- 包
- 写
- 客
- C
- 汉
- 条
- G
- 幸
- 民
- 读
- 职
- 目
- 但
- 贝
- 妹
- 资
- 较
- 雪
- 赛
- 除
- 招
- 园
- 住
- 超
- 汽
- 病
- B
- 软
- 反
- 而
- 证
- 员
- 黑
- 庆
- D
- 求
- 排
- 装
- 岁
- 顾
- 产
- 航
- 言
- 斯
- 拨
- 历
- 烦
- 及
- 药
- 入
- 式
- 军
- 餐
- 志
- 至
- 双
- 米
- 版
- 掉
- 千
- 者
- 充
- 微
- 失
- 转
- M
- 亚
- 克
- 座
- 丽
- 络
- 战
- 使
- 猪
- 具
- 闹
- 限
- 址
- 基
- 油
- 漂
- 陈
- Y
- 川
- 强
- 挺
- 奇
- 杰
- 政
- 向
- 速
- 康
- 差
- 贵
- 搞
- 义
- 奖
- 份
- 户
- 楼
- 苏
- 任
- 健
- 易
- 毛
- 型
- 石
- 礼
- 款
- 持
- 卫
- 怕
- 恋
- 邮
- 集
- R
- 铁
- 圳
- 拿
- 云
- 队
- 鱼
- 慢
- 顺
- 害
- 属
- 傻
- 营
- 菜
- 货
- 麻
- 咋
- 坏
- 冒
- 累
- 杨
- 闻
- 治
- 选
- 段
- K
- 香
- 闭
- 兰
- 牌
- 局
- 留
- 舍
- 非
- 推
- 室
- 简
- 拉
- 修
- 终
- 郑
- 切
- U
- 将
- 村
- 沙
- 存
- 帅
- 诗
- 率
- 密
- 巴
- 频
- 士
- 初
- 楚
- 股
- 热
- 古
- 制
- 支
- 肉
- 岛
- 统
- 适
- 肥
- 鸡
- 调
- 街
- 类
- 牛
- 导
- 农
- 值
- 食
- 镇
- 棍
- 移
- 韩
- W
- 嗯
- 订
- 呼
- 命
- V
- 必
- 宿
- 皮
- 升
- 确
- 随
- 步
- 育
- 标
- 唐
- 精
- 决
- 木
- 由
- 弟
- 往
- 肯
- 够
- 或
- 指
- 阿
- 象
- 料
- 念
- 助
- 许
- 共
- 母
- 约
- 罗
- 板
- 秋
- 配
- 魔
- 宜
- 般
- 荐
- 扰
- 舒
- 逼
- 狗
- 嘿
- 博
- 售
- 满
- 疼
- 脸
- 整
- 抱
- 季
- 减
- 养
- 怀
- 免
- 未
- 乘
- F
- 社
- 妇
- 列
- 爷
- 删
- 旦
- 弄
- 概
- 停
- 拜
- 维
- 领
- 示
- 套
- 汇
- 昌
- 晨
- 痛
- 购
- 奥
- 铃
- 案
- 济
- 鬼
- 背
- 港
- 待
- 浪
- 桥
- 血
- 冬
- 烧
- 优
- 拍
- 际
- 急
- 杭
- 称
- 遇
- 赶
- 旅
- 智
- 角
- 财
- 玉
- 团
- 形
- 论
- 静
- 景
- 退
- 普
- 呗
- 乡
- 参
- 胡
- 伦
- 讨
- 艺
- 辈
- 毒
- 此
- 轻
- 苦
- 咱
- 画
- 泰
- 宾
- 雄
- 销
- 奶
- 突
- 波
- 各
- 冰
- 块
- 夏
- 低
- 兵
- 厅
- 羊
- 杀
- 紧
- 泉
- 朝
- 谈
- 足
- 孕
- 夫
- 厂
- 聪
- 续
- 庄
- 诺
- 牙
- 质
- 立
- 依
- 仙
- 跑
- 盘
- 豆
- 它
- 怪
- 猜
- 漫
- 毕
- 兄
- 颜
- 险
- 厦
- 验
- 防
- 登
- 敢
- 乖
- 晓
- 护
- 迎
- 逗
- 摩
- 佳
- 观
- 骗
- 烟
- 细
- 临
- 惠
- 围
- 寞
- 效
- 源
- 寂
- 肚
- 暖
- 饺
- 斗
- 模
- 端
- 疗
- 付
- 绝
- 秘
- 展
- 乎
- 按
- 富
- 靠
- 范
- 规
- 刻
- 折
- 娘
- 厌
- 申
- 章
- 补
- 笔
- 锅
- 破
- 田
- 齐
- 滨
- 皇
- 族
- 典
- 史
- 左
- 蓝
- 灵
- 澡
- 秀
- 诚
- 土
- 测
- 凤
- 剑
- 响
- 倒
- 睛
- 惯
- 乌
- 币
- 扣
- 吴
- 输
- 徐
- 弃
- 纪
- 堂
- 环
- 甲
- 菲
- 缘
- 讯
- 根
- 落
- 启
- 泡
- 饿
- 积
- 府
- 递
- 绩
- 择
- 吉
- 布
- 显
- 童
- 租
- 洋
- 组
- 划
- 编
- 签
- 舞
- 困
- 贴
- 负
- 派
- 裤
- 担
- 桂
- 却
- 丝
- 丰
- 箱
- 赵
- 群
- 序
- 训
- 酸
- 惜
- 圆
- 评
- 压
- 俩
- 状
- 官
- 酷
- 鲁
- 孙
- 草
- 极
- 势
- 斤
- 腾
- 泽
- 素
- 尽
- 姓
- 屏
- 聚
- 莞
- 乱
- 雅
- 尼
- 趣
- 伟
- 肤
- 勇
- 右
- 徽
- 投
- 丹
- 尾
- 托
- 争
- 鸟
- 激
- 印
- 良
- 眠
- 松
- 跳
- 途
- 篮
- 粉
- 脚
- 屁
- 鞋
- 麦
- 则
- 估
- 津
- 努
- 距
- 胸
- 央
- 珍
- 盖
- 哭
- 洲
- 练
- 敏
- 雷
- 曾
- 恩
- 挂
- 据
- 览
- 耳
- 材
- 泪
- 吸
- 味
- 劳
- 父
- 孤
- 玛
- 旁
- 阴
- 态
- 创
- 树
- 脱
- 研
- 驾
- 拾
- 灯
- 虎
- 爆
- 嘉
- 湾
- 躺
- 猫
- 莫
- 昆
- 痘
- 阅
- 射
- 刷
- 卓
- 珠
- 峰
- 胖
- 坚
- 造
- 举
- 棒
- 梅
- 引
- 吵
- 蒙
- 详
- 借
- 瓜
- 池
- 束
- 芳
- 淘
- 寻
- 释
- 沈
- 虑
- 锦
- 胜
- 荣
- 委
- 默
- 另
- 浏
- 并
- 检
- 冠
- 独
- 厉
- 顶
- 钓
- 骂
- 且
- 欧
- 威
- 熟
- 获
- 兽
- 严
- 炎
- 含
- 厕
- 盛
- 翼
- 盟
- 余
- 姨
- 洛
- 映
- 狼
- 谅
- 众
- 宽
- 断
- 止
- 狂
- 凉
- 姑
- 辉
- 若
- 册
- 谷
- 幻
- 篇
- 瓶
- 席
- 恐
- 柔
- 迪
- 供
- 追
- 控
- 爽
- 互
- 嫁
- 宋
- 宫
- 瑞
- 滚
- 增
- 额
- 页
- 刀
- 娱
- 茶
- 钢
- 疯
- 梁
- 承
- 娜
- 须
- 陆
- 燕
- 迟
- 君
- 恶
- 遍
- 纸
- 项
- 丁
- 腿
- 误
- 殊
- 迅
- 锁
- 宇
- 媳
- 培
- 居
- 寄
- 纯
- 嘴
- 浙
- 境
- 搭
- 杯
- 插
- 朱
- 溪
- 甘
- 权
- 窝
- 警
- 糖
- 迷
- 圈
- 凯
- 帝
- 暴
- 逛
- 艳
- 击
- 颗
- 坦
- 杂
- 冲
- 谓
- 救
- 轮
- 晕
- 虽
- 塔
- 叔
- 凰
- 懒
- 议
- 肖
- 郎
- 辛
- 透
- 拥
- 鼠
- 顿
- 批
- 兔
- 尚
- 聘
- 藏
- 赚
- 继
- 享
- 欺
- 潮
- 即
- 甜
- 骨
- 悲
- 幕
- 滴
- 闲
- 液
- 缺
- 琴
- 蜜
- 善
- 暗
- 镜
- 蔡
- 吹
- 核
- 忆
- 键
- 辑
- 岗
- 例
- 涛
- 宏
- 刺
- 郭
- 降
- 秦
- 剩
- 绿
- 桌
- 咖
- 呐
- 叶
- 贸
- 架
- 账
- 亡
- 佛
- 哎
- 乳
- 归
- 忍
- 异
- 侠
- 龄
- 炒
- 洁
- 似
- 虚
- 贷
- 征
- 抽
- 败
- 枪
- 幼
- 丫
- 危
- 慰
- 究
- 婷
- 肃
- 箭
- 灰
- 届
- 律
- 秒
- 淡
- 偷
- 炫
- 鲜
- 浦
- 萨
- 旧
- 硬
- 操
- 混
- 施
- 散
- 咨
- 妻
- 吻
- 榜
- 呆
- 废
- 野
- 糕
- 骑
- 炼
- 震
- 恭
- 悔
- 跨
- 曼
- 啡
- 俊
- 晶
- 胃
- 汤
- 尊
- 貌
- 封
- 羽
- 赞
- 尸
- 隐
- 丢
- 霸
- 醉
- 盗
- 盐
- 浩
- 著
- 档
- 赢
- 幽
- 责
- 鼻
- 辣
- 恒
- 朵
- 慕
- 旗
- 娃
- 饰
- 仁
- 亦
- 竟
- 柳
- 郁
- 唯
- 夕
- 钻
- 均
- 劲
- 庭
- 巧
- 饮
- 涨
- 辆
- 傅
- 企
- 趟
- 避
- 党
- 染
- 扬
- 玲
- 筋
- 烤
- 桃
- 唉
- 慧
- 欲
- 寒
- 闷
- 某
- 恨
- 私
- 淮
- 惊
- 弱
- 弹
- 沉
- 兼
- 弯
- 残
- 偶
- 锋
- 贺
- 咯
- 纳
- 戴
- 抢
- 宗
- 浴
- 宵
- 莲
- 嗨
- 喊
- 奕
- 壁
- 症
- 冻
- 致
- 屋
- 喽
- 伊
- 绵
- 玫
- 固
- 籍
- 监
- 耐
- 井
- 寝
- 露
- 虫
- 盒
- 凡
- 摇
- 傲
- 烈
- 姿
- 陕
- 裸
- 袋
- 帐
- 凌
- 寿
- 茂
- 鹏
- 寓
- 柴
- 妞
- 森
- 既
- 紫
- 萝
- 层
- 苗
- 腊
- 邓
- 宣
- 锡
- 袜
- 陌
- 狮
- 碰
- 晴
- 塘
- 妃
- 祥
- 苍
- 针
- 敌
- 腰
- 犯
- 欠
- 垃
- 卸
- 迹
- 暑
- 祖
- 泳
- 阵
- 熊
- 励
- 澳
- 添
- 拳
- 岳
- 益
- 瘦
- 虹
- 圾
- 植
- 坡
- 攻
- 略
- 墙
- 描
- 遗
- 噢
- 窗
- 吐
- 肌
- 陵
- 逃
- 浮
- 摸
- 戒
- 哟
- 翰
- 勿
- 库
- 涯
- 妖
- 宠
- 脾
- 革
- 探
- 糊
- 采
- 惹
- 衡
- 赤
- 魏
- 羡
- 综
- 舟
- 疆
- 痴
- 催
- 朗
- 坛
- 悠
- 岭
- 驶
- 括
- 嘻
- 辽
- 粥
- 煮
- 灭
- 杜
- 域
- 令
- 替
- 翔
- 坤
- 潘
- 抓
- 铜
- 构
- 卷
- 茫
- 丑
- 涂
- 掌
- 饱
- 肝
- 疾
- 罩
- 谱
- 愚
- 抗
- 琳
- 夸
- 汪
- 墨
- 沟
- 翅
- 肠
- 患
- 柏
- 僵
- 稳
- 延
- 胆
- 伴
- 爬
- 滋
- 歉
- 轩
- 尿
- 铺
- 忠
- 黎
- 膀
- 邯
- 郸
- 愉
- 霉
- 翁
- 妙
- 隆
- 鸭
- 锻
- 涵
- 挣
- 副
- 罪
- 穷
- 恢
- 巨
- 吓
- 眉
- 棉
- 汗
- 溜
- 奏
- 滩
- 愁
- X
- 执
- 霞
- 魂
- 姆
- 摄
- 偏
- 纠
- 瑰
- 洪
- 协
- 牧
- 飘
- 炸
- 悦
- 艾
- 织
- 敬
- 驹
- 欣
- 董
- 邦
- 勒
- 守
- 伙
- 狐
- 税
- 湘
- 遥
- 储
- 脏
- 坊
- 腐
- 横
- 仔
- 仪
- 判
- 忽
- 哇
- 罚
- 爹
- 怖
- 竹
- 孔
- 捡
- 挑
- 肿
- 漠
- 尘
- 焦
- 塞
- 熬
- 谊
- 樱
- 返
- 莉
- 堵
- 捷
- 惑
- 绕
- 蛇
- 竞
- 耍
- 违
- 卧
- 蝶
- J
- 俗
- 滑
- 占
- 怜
- 舅
- 乔
- 泸
- 臭
- 策
- 骚
- 莱
- 岩
- 魅
- 兑
- 姥
- 兆
- 萍
- 烂
- 损
- 述
- 撒
- 烫
- 炮
- 忧
- 遵
- 桑
- 俺
- 彭
- 净
- 胶
- 柯
- 绑
- 碟
- 卜
- 饼
- 船
- 佩
- 妆
- 齿
- 厚
- 娟
- 醋
- 丘
- 恼
- 萧
- 析
- 润
- 潭
- 番
- 鹰
- 葡
- 萄
- 唤
- 胎
- 逊
- 峡
- 舰
- 障
- 伯
- 猴
- 膜
- 访
- 贤
- 耀
- 晒
- 狠
- 豪
- 剪
- 帖
- 幂
- 融
- 诱
- 韶
- 晋
- 拼
- 洞
- 氧
- 察
- 裁
- 寨
- 熙
- 喂
- 拖
- 污
- 乾
- 湿
- 嫌
- 拒
- 蕉
- 哲
- 薇
- 绒
- 婴
- 莎
- 稿
- 瞎
- 寺
- 徒
- 伞
- 碎
- 阜
- 填
- 琪
- 敦
- 柜
- 侣
- 搬
- 孟
- 蓉
- 筒
- 偿
- 献
- 径
- 畅
- 粤
- 悟
- 隔
- 赖
- 慈
- 哄
- 襄
- 扮
- 睁
- 彻
- 陶
- 瓷
- 荷
- 寸
- 牵
- 痒
- 芝
- 繁
- 倍
- 闪
- 梧
- 怒
- 蝴
- 嵩
- 赣
- 嘞
- 狱
- 猛
- 咳
- 媒
- 斌
- 斑
- 奋
- 叉
- 龟
- 贱
- 疑
- 暂
- 靓
- 叹
- 仓
- 撞
- 姜
- 疤
- 矿
- 芬
- 勤
- 纱
- 帆
- 迁
- 囧
- 佑
- 囊
- 侯
- 鼓
- 葛
- 沃
- 莹
- 诊
- 筑
- 酱
- 咬
- 糟
- 拯
- 鹤
- 驴
- 胞
- 枝
- 俄
- 呃
- 鹿
- 磨
- 姚
- 灾
- 扫
- 荡
- 吊
- 犬
- 菊
- 茹
- 链
- 嫉
- 妒
- 旺
- 夺
- 裙
- 湛
- 氏
- 鞍
- 抵
- 娇
- 耶
- 截
- 辞
- 硫
- 禁
- 怡
- 跌
- 刮
- 苑
- 媛
- 摆
- 盾
- 械
- 旋
- 卢
- 霆
- 驰
- 擦
- 符
- 肺
- 谜
- 霍
- 仅
- 迈
- 碗
- 邪
- 曹
- 咪
- 煌
- 疫
- 屠
- 握
- 奔
- Z
- 燃
- 沧
- 谦
- 馨
- 嫖
- 阻
- 冯
- 振
- 雕
- 闯
- 薄
- 宙
- 倾
- 嗽
- 椒
- 墓
- 尤
- 夹
- 潇
- 骤
- 壮
- 屈
- 颖
- 菠
- 吞
- 鸣
- 渴
- 堰
- 厨
- 督
- 驻
- 腹
- 岸
- 蛮
- 翠
- 肾
- 娼
- 券
- 尖
- 丸
- 鸿
- 厘
- 召
- 劝
- 牡
- 韦
- 拔
- 灏
- 弦
- 萌
- 惩
- 倩
- 诸
- 扎
- 庙
- 炉
- 潜
- 措
- 磊
- 脂
- 郊
- 虾
- 霜
- 猎
- 蝎
- 玄
- 钰
- 审
- 蜂
- 巷
- 敷
- 拟
- 钥
- 匙
- 婉
- 纽
- 芜
- 贾
- 串
- 靖
- 抛
- 彼
- 亏
- 挽
- 贼
- 穴
- 授
- 鼎
- 孝
- 玮
- 氓
- 劫
- 俞
- 谎
- 莆
- 隋
- 钠
- 赔
- 谐
- 纶
- 闰
- 昏
- 逆
- 璇
- 樊
- 禽
- 宅
- 碳
- 妮
- 亭
- 杆
- 蠢
- 鄙
- 蜀
- 阶
- 贫
- 辰
- 盼
- 呜
- 芦
- 株
- 腔
- 巾
- 羞
- 堡
- 亿
- 踩
- 憾
- 浓
- 阔
- 塑
- 趋
- 蓄
- 桶
- 葱
- 菇
- 咒
- 蟹
- 肩
- 柿
- 缓
- 漳
- 祸
- 挤
- 巢
- 抚
- 詹
- 豫
- 俱
- 悉
- 溶
- 粒
- 谭
- 诛
- 贡
- 沿
- 躲
- 慌
- 芙
- 蒋
- 乃
- 雀
- 姻
- 岂
- 悄
- 辕
- 斜
- 捕
- 扇
- 割
- 啤
- 纲
- 纤
- 祛
- 躁
- 殖
- 珊
- 氢
- 允
- 丈
- 蹈
- 邀
- 哼
- 坑
- 吾
- 淋
- 扩
- 愤
- 潍
- 尺
- 耗
- 鉴
- 闽
- 乙
- 渭
- 触
- 撑
- 咸
- 灿
- 缩
- 蔬
- 凑
- 渡
- 梭
- 粗
- 袁
- 菌
- 妓
- 稍
- 辐
- 哀
- 浆
- 厢
- 荆
- 踪
- 桐
- 邢
- 蜡
- 奉
- 淑
- 洒
- 扁
- 蕾
- 燥
- 硕
- 牢
- 蛙
- 仍
- 侵
- 稀
- 芒
- 吕
- 跪
- 绪
- 誓
- 旭
- 阁
- 屌
- 凭
- 裹
- 崇
- 纬
- 援
- 怨
- 茄
- 埋
- 棋
- 誉
- 瑜
- 蹲
- 扯
- 跃
- 昧
- 螺
- 毅
- 叮
- 喷
- 壶
- 喉
- 脆
- 瓦
- 碧
- 奴
- 煤
- 伍
- 娶
- 雁
- 骄
- 泣
- 眷
- 屯
- 赏
- 覆
- 揍
- 绯
- 逸
- 屎
- 彦
- 辨
- 攀
- 涉
- 泥
- 廊
- 菱
- 薛
- 衍
- 荒
- 铭
- 沂
- 麟
- 咏
- 扑
- 祈
- 喔
- 磁
- 歇
- 栋
- 沫
- 漏
- 玻
- 璃
- 逝
- 葵
- 溃
- 堆
- 锐
- 楠
- 毫
- 谋
- 勾
- 梯
- 氯
- 杏
- 赌
- 鑫
- 崔
- 颠
- 邱
- 肪
- 掘
- 昭
- 悬
- 奈
- 筷
- 轨
- 诵
- 葫
- 挡
- 梨
- 缠
- 僧
- 抬
- 邻
- 栏
- 饶
- 庚
- 灌
- 呦
- 摊
- 狄
- 汕
- 缴
- 罢
- 瞌
- 腺
- 辖
- 摔
- 棵
- 弗
- 琼
- 揭
- 淀
- 仑
- 粮
- 扔
- 剂
- 邵
- 辅
- 悍
- 袖
- 侨
- 巡
- 仗
- 逢
- 挥
- 翘
- 柱
- 狸
- 赫
- 耽
- 押
- 昂
- 瘤
- 枣
- 癌
- 伏
- 秤
- 脉
- 穹
- 敲
- 贪
- 促
- 拆
- 勉
- 祷
- 弊
- 膏
- 禾
- 契
- 挨
- 纵
- 疲
- 蜘
- 蛛
- 冈
- 雾
- 娄
- 甫
- 裂
- 侦
- 愈
- 臂
- 甩
- 戈
- 钙
- 簿
- 淄
- 蓬
- 夷
- 汁
- 凶
- 匹
- 皆
- 凝
- 仰
- 叛
- 蒲
- 谣
- 砖
- 呈
- 浅
- 瞬
- 丞
- 粘
- 痕
- 癫
- 禺
- 靴
- 尝
- 枫
- 鹅
- 衷
- 暮
- 媚
- 堪
- 臣
- 瑟
- 榕
- 蘑
- 遂
- 舌
- 藤
- 遭
- 芭
- 暧
- 犹
- 砸
- 浇
- 晰
- 矮
- 禹
- 隶
- 蚊
- 塌
- 峪
- 渊
- 摘
- 崩
- 瞧
- 炭
- 瑶
- 纷
- 毁
- 瞒
- 橙
- 渣
- 霹
- 雳
- 粽
- 侧
- 胀
- 捐
- 栈
- 颈
- 伪
- 役
- 予
- 钝
- 菏
- 铠
- 稻
- 赠
- 芽
- 龚
- 幅
- 莓
- 轿
- 炖
- 炬
- 溢
- 扭
- 垂
- 坎
- 嚏
- 枯
- 绣
- 蒸
- 旬
- 迫
- 浒
- 肇
- 庸
- 蒂
- 踏
- 雯
- 埃
- 础
- 狙
- 陷
- 伽
- 滔
- 沦
- 祭
- 唠
- 瀑
- 矛
- 乒
- 乓
- 窍
- 渠
- 泛
- 陇
- 蒜
- 捉
- 扶
- 诀
- 纹
- 踢
- 馋
- 薪
- 坪
- 廉
- 荔
- 骏
- 颁
- 伸
- 贞
- 沾
- 疮
- 兮
- 擎
- 驱
- 馒
- 挖
- 韵
- 姬
- 砍
- 矫
- 巫
- 疙
- 瘩
- 峨
- 抄
- 函
- 歪
- 倚
- 昔
- 涕
- 憨
- 淇
- 宴
- 埠
- 渐
- 胳
- 膊
- 趁
- 擅
- 刑
- 渝
- 噬
- 斋
- 妍
- 债
- 邹
- 嫂
- 娥
- 践
- 禅
- 牲
- 帽
- 吨
- 腻
- 掖
- 榴
- 啸
- 纺
- 鞭
- 豚
- 爵
- 蹄
- 咙
- 澈
- 疹
- 氛
- 抑
- 绸
- 抹
- 奎
- 酬
- 坟
- 诶
- 勋
- 卑
- 沪
- 蚁
- 揉
- 锄
- 泌
- 槽
- 镖
- 卿
- 甸
- 帕
- 镁
- 盲
- 汾
- 携
- 宰
- 虞
- 瓣
- 辩
- 豌
- 樟
- 璐
- 沁
- 钦
- 蔚
- 彬
- 卦
- 轰
- 锈
- 茎
- 蹦
- 拐
- 坝
- 饥
- 捏
- 碑
- 嗓
- 澄
- 惨
- 沽
- 鄂
- 逻
- 谍
- 屿
- 聋
- 憋
- 泼
- 枕
- 盆
- 衫
- 慎
- 黛
- 轶
- 咽
- 匠
- 蚂
- 捶
- 脊
- 蚌
- 剥
- 穆
- 喇
- 叭
- 凳
- 滥
- 撤
- 蓑
- 笠
- 黔
- 诡
- 颐
- 闵
- 稚
- 茨
- 捆
- 芯
- 涩
- 哑
- 盈
- 衰
- 奢
- 贩
- 循
- 韭
- 绘
- 鸳
- 唇
- 恳
- 妥
- 杠
- 刊
- 戚
- 巩
- 胁
- 蜗
- 筝
- 漆
- 劈
- 泄
- 噩
- 椎
- 渔
- 氨
- 橘
- 仲
- 洱
- 绥
- 仿
- 耿
- 蚕
- 倦
- 葬
- 捞
- 拓
- 冤
- 御
- 忌
- 慨
- 弥
- 寡
- 昵
- 撕
- 鲤
- 隧
- 倡
- 臀
- 毙
- 蕊
- 甚
- 睹
- 哒
- 仇
- 栓
- 抒
- 滁
- 讶
- 皱
- 剖
- 闸
- 耻
- 顽
- 茅
- 碱
- 霏
- 坠
- 邑
- 嗦
- 缝
- 枚
- 垫
- 畜
- 侄
- 悴
- 庞
- 鸯
- 俏
- 铅
- 衔
- 浑
- 抖
- 逮
- 犀
- 滕
- 遮
- 淹
- 挪
- 柠
- 檬
- 荨
- 沛
- 喻
- 尹
- 抉
- 爪
- 甄
- 冀
- 蝉
- 汰
- 丧
- 愧
- 畏
- 屑
- 屉
- 娩
- 艰
- 弓
- 炜
- 框
- 娅
- 酵
- 掩
- 宪
- 枉
- 淫
- 糗
- 奸
- 岚
- 诅
- 釜
- 萱
- 窦
- 喆
- 浣
- 庐
- 阑
- 劣
- 窄
- 赈
- 茉
- 帜
- 缸
- 嫩
- 迦
- 憔
- 鸽
- 朴
- 洽
- 榆
- 烹
- 箫
- 荚
- 箍
- 稣
- 肢
- 磷
- 袭
- 橡
- 鸦
- 瞅
- 匡
- 禧
- 痣
- 勃
- 翡
- 篱
- 烽
- 衢
- 讪
- 烛
- 宥
- 铝
- 镯
- 钉
- 披
- 昼
- 跆
- 笈
- 喘
- 惫
- 唧
- 螂
- 涌
- 揣
- 旨
- 袄
- 笼
- 蛔
- 毯
- 凸
- 倪
- 碌
- 懈
- 履
- 鱿
- 菩
- 汝
- 赴
- 焉
- 钛
- 畔
- 掰
- 骆
- 崖
- 髓
- 彪
- 啰
- 撸
- 拌
- 漯
- 犒
- 蔽
- 漱
- 赐
- 饪
- 玖
- 弘
- 卵
- 沭
- 梓
- 禄
- 晖
- 籁
- 熏
- 祠
- 荟
- 伐
- 柄
- 昕
- 琶
- 鞠
- 豹
- 萎
- 裕
- 曰
- 苇
- 沌
- 牺
- 轴
- 薯
- 潞
- 痫
- 曦
- 腋
- 坞
- 隙
- 妊
- 娠
- 蝙
- 蝠
- 赘
- 咧
- 翩
- 棚
- 冕
- 旱
- 棱
- 巍
- 偕
- 杉
- 梵
- 嫦
- 煎
- 泊
- 辟
- 丛
- 艘
- 懦
- 郫
- 搅
- 佬
- 阖
- 焰
- 澜
- 琢
- 挚
- 嫣
- 啧
- 兜
- 趴
- 皂
- 窃
- 嘟
- 崛
- 睿
- 刃
- 绳
- 哗
- 窟
- 嗑
- 吭
- 朔
- 喵
- 粹
- 酶
- 辜
- 诫
- 筹
- 亩
- 椅
- 佐
- 俑
- 狡
- 陛
- 曙
- 攒
- 诈
- 叙
- 杖
- 馅
- 锌
- 矜
- 绮
- 刁
- 阙
- 亢
- 讼
- 驼
- 晃
- 逍
- 仕
- 芋
- 拇
- 掏
- 瘾
- 腕
- 魁
- 鲍
- 殷
- 荤
- 亨
- 凄
- 硝
- 嬛
- 藻
- 诣
- 桔
- 疡
- 氰
- 佰
- 鸠
- 埔
- 皋
- 谚
- 麒
- 廖
- 鳄
- 蹉
- 阎
- 琦
- 丙
- 烯
- 涮
- 絮
- 潢
- 郴
- 遛
- 琵
- 殿
- 蹭
- 笛
- 钾
- 辙
- 炊
- 廷
- 拦
- 哆
- 逐
- 钞
- 赋
- 孽
- 沸
- 龈
- 雌
- 玟
- 麓
- 焊
- 谨
- 衬
- 灸
- 栖
- 卉
- 脐
- 栽
- 扒
- 酚
- 肱
- 闺
- 猥
- 钩
- 羁
- 吱
- 吼
- 蹊
- 跷
- 磕
- 坷
- 蝇
- 唔
- 褶
- 钮
- 鹭
- 咔
- 沐
- 棠
- 锷
- 滞
- 肛
- 糜
- 噜
- 涧
- 儒
- 琅
- 捎
- 泵
- 葩
- 芥
- 轲
- 猾
- 拱
- 墅
- 蕲
- 馁
- 佚
- 渤
- 崎
- 峻
- 赎
- 霄
- 羯
- 缅
- 韧
- 勘
- 皖
- 顷
- 喀
- 忏
- 圭
- 槟
- 榔
- 兹
- 坂
- 镒
- 堕
- 蟒
- 芹
- 浃
- 哉
- 晏
- 绐
- 陀
- 茵
- 倘
- 缆
- 浊
- 碍
- 惰
- 濮
- 杵
- 削
- 裘
- 嗅
- 呕
- 绊
- 哩
- 腩
- 撇
- 郝
- 铿
- 锵
- 赃
- 缪
- 卤
- 吝
- 涟
- 冶
- 匪
- 婿
- 蛳
- 搏
- 圩
- 旷
- 汞
- 鹦
- 茱
- 粪
- 崂
- 陋
- 掐
- 郡
- 哮
- 邸
- 帘
- 柚
- 鬓
- 剃
- 忻
- 羔
- 聆
- 刹
- 嗷
- 罕
- 沥
- 钗
- 尴
- 尬
- 莽
- 捧
- 拽
- 懵
- 噶
- 虐
- 囚
- 囡
- 颓
- 亥
- 傍
- 疏
- 乞
- 丐
- 皓
- 孜
- 愣
- 檐
- 橱
- 绅
- 噻
- 痊
- 鳞
- 瞳
- 衩
- 捂
- 吔
- 螳
- 暇
- 嘎
- 缤
- 镍
- 吟
- 斥
- 饲
- 鲢
- 猩
- 狒
- 腼
- 腆
- 轼
- 梗
- 熨
- 荫
- 糙
- 妾
- 粕
- 烘
- 壹
- 骥
- 秽
- 熔
- 歹
- 谬
- 侈
- 蜈
- 蚣
- 婵
- 渍
- 斩
- 棕
- 辱
- 醇
- 磅
- 礴
- 颊
- 彝
- 庾
- 叠
- 忒
- 稽
- 幢
- 嘱
- 醛
- 砂
- 炳
- 拂
- 殇
- 邬
- 冥
- 擒
- 汶
- 罐
- 镑
- 祁
- 氮
- 怆
- 羌
- 拧
- 芸
- 堀
- 婊
- 暄
- 挎
- 躬
- 噎
- 菅
- 奂
- 龌
- 龊
- 睬
- 燎
- 鲈
- 拢
- 啬
- 脖
- 尧
- 馗
- 皎
- 滤
- 镶
- 椭
- 狈
- 澎
- 阉
- 侃
- 婕
- 脓
- 桨
- 阪
- 湃
- 溏
- 箕
- 蚯
- 蚓
- 呛
- 矩
- 彤
- 惟
- 鹉
- 讽
- 募
- 惦
- 飓
- 抠
- 肮
- 溟
- 膝
- 芗
- 逞
- 娌
- 湮
- 舵
- 挫
- 椰
- 螃
- 绽
- 蟑
- 聂
- 拘
- 萸
- 洼
- 弛
- 澧
- 玺
- 芊
- 枢
- 鲨
- 毋
- 搂
- 跎
- 趾
- 琐
- 徘
- 徊
- 濡
- 咩
- 钏
- 舔
- 烷
- 胺
- 拙
- 溺
- 竖
- 蕴
- 巅
- 魄
- 吖
- 啵
- 庇
- 灼
- 遣
- 怠
- 枭
- 乏
- 缕
- 掂
- 秩
- 蜕
- 泾
- 汀
- 肆
- 倔
- 吒
- 矣
- 豁
- 仨
- 俯
- 嘲
- 瞪
- 唬
- 骋
- 辍
- 曝
- 泻
- 鼾
- 捣
- 妨
- 撵
- 撮
- 猕
- 浜
- 哺
- 睫
- 荧
- 噪
- 栗
- 垣
- 獒
- 冼
- 瞄
- 刍
- 硅
- 翊
- 泓
- 枥
- 凋
- 匣
- 孢
- 飙
- 俭
- 珑
- 嵊
- 佣
- 祟
- 枞
- 蓟
- 斧
- 镕
- 棺
- 痔
- 娴
- 苔
- 笙
- 蔻
- 芮
- 迭
- 暨
- 诏
- 癜
- 芷
- 臧
- 驿
- 珂
- 藕
- 笋
- 竭
- 歼
- 铉
- 恹
- 雇
- 诲
- 漓
- 扳
- 寰
- 颂
- 缈
- 砣
- 戳
- 疣
- 寮
- 甥
- 牦
- 衅
- 湄
- 汨
- 褐
- 腑
- 啼
- 惭
- 痰
- 梳
- 驮
- 阮
- 壳
- 慷
- 牟
- 捺
- 瘁
- 锂
- 狩
- 沱
- 烁
- 摞
- 楷
- 楞
- 瑾
- 饯
- 灶
- 薰
- 伎
- 忐
- 忑
- 煽
- 骁
- 娲
- 赁
- 锑
- 嵌
- 苞
- 咫
- 锴
- 岐
- 蓓
- 毽
- 黏
- 攸
- 恰
- 惶
- 矶
- 簸
- 坨
- 踝
- 掺
- 榨
- 阀
- 婢
- 纨
- 搓
- 闫
- 瘫
- 垢
- 蚀
- 貂
- 壑
- 婧
- 腥
- 兖
- 觅
- 壤
- 珉
- 胭
- 惧
- 僻
- 峥
- 炀
- 蔗
- 铂
- 宛
- 巳
- 氟
- 秸
- 菁
- 鹃
- 疱
- 矢
- 拭
- 缀
- 朦
- 胧
- 筏
- 贯
- 汐
- 蛤
- 蟆
- 迩
- 犁
- 馈
- 叽
- 喳
- 袈
- 裟
- 啃
- 敞
- 踊
- 雏
- 朽
- 撩
- 恙
- 亵
- 淤
- 垦
- 眺
- 熄
- 衲
- 伺
- 墟
- 孚
- 墩
- 猬
- 堤
- 鞘
- 署
- 陂
- 鬟
- 萤
- 悯
- 恃
- 峙
- 咄
- 奠
- 跺
- 笆
- 啄
- 殆
- 赅
- 锭
- 铛
- 枷
- 姗
- 驭
- 嘀
- 煲
- 腚
- 霖
- 孪
- 翟
- 濒
- 邂
- 逅
- 筱
- 霓
- 窈
- 窕
- 眨
- 耸
- 羚
- 尉
- 谀
- 竿
- 蛟
- 籽
- 铲
- 潼
- 匆
- 肽
- 戬
- 岔
- 奚
- 裴
- 嘏
- 玥
- 妯
- 昙
- 烨
- 吏
- 鼹
- 筵
- 崭
- 涪
- 來
- 瘆
- 彰
- 杞
- 疽
- 琥
- A
- 栾
- 庵
- 窘
- 擀
- 痤
- 蟾
- 唾
- 嚼
- 癖
- 蛹
- 浸
- 狭
- 迂
- 脍
- 炙
- 覃
- 悖
- 阆
- 铸
- 洮
- 瑙
- 呷
- 呸
- 谛
- 膨
- 柑
- 眯
- 奘
- 吆
- 孰
- 珈
- 曜
- 拈
- 麝
- 嘘
- 缚
- 徕
- 糸
- 崴
- 藓
- 婺
- 揽
- 溧
- 熠
- 膳
- 犊
- 贬
- 脯
- 剿
- 鼬
- 焕
- 胛
- 拷
- 勺
- 鲫
- 炅
- 卒
- 刨
- 糯
- 瘪
- 雍
- 襟
- 酋
- 胤
- 戟
- 褔
- 惆
- 怅
- 阂
- 扉
- 锚
- 砌
- 祺
- 淅
- 濠
- 匀
- 隍
- 氦
- 绫
- 濑
- 佝
- 偻
- 翎
- 颌
- 咚
- 疖
- 媲
- 祗
- 寅
- 靡
- 稞
- 骝
- 锏
- 焖
- 栀
- 蝗
- 甭
- 罄
- 酪
- 酮
- 嘢
- 钨
- 涎
- 沼
- 嚯
- 阱
- 驸
- 爰
- 酌
- 绛
- 畴
- 辄
- 藜
- 碚
- 馥
- 茧
- 鲛
- 溅
- 浯
- 沮
- 蹿
- 诠
- 姊
- 藉
- 骡
- 褪
- 酞
- 臻
- 靛
- 譬
- 粼
- 肘
- 孺
- 苟
- 瓯
- 蕨
- 冉
- 稠
- 蒿
- 锤
- 焙
- 蜃
- 淌
- 瘸
- 汲
- 噼
- 啪
- 橇
- 虔
- 裳
- 煞
- 淳
- 锟
- 摧
- 篷
- 癞
- 凹
- 汹
- 樵
- 睐
- 叁
- 飒
- 舶
- 驷
- 嘚
- 垮
- 妩
- 焚
- 扪
- 溥
- 鹊
- 鹄
- 汴
- 妁
- 廓
- 谙
- 苛
- 喏
- 嬉
- 裆
- 谔
- 哝
- 岑
- 喧
- 咆
- 茁
- 霎
- 泷
- 笃
- 沣
- 戮
- 蓦
- 滢
- 碜
- 滇
- 妤
- 盯
- 眶
- 婶
- 侍
- 崽
- 辘
- 轳
- 斓
- 郢
- 泞
- 窖
- 镭
- 痹
- 缉
- 镐
- 膛
- 睦
- 歧
- 扦
- 筛
- 嵘
- 茗
- 戎
- 萦
- 柒
- 咀
- 诋
- 搁
- 婪
- 漾
- 瀚
- 绎
- 盏
- 庹
- 吩
- 咐
- 堇
- 矾
- 茯
- 苓
- 潦
- 嘁
- 噫
- 窑
- 鳗
- 孵
- 彷
- 徨
- 耕
- 晗
- 撂
- 猿
- 昊
- 淼
- 驯
- 垒
- 铤
- 胱
- 桦
- 铮
- 坳
- 厥
- 叨
- 烙
- 苷
- 殴
- 鸥
- 蜥
- 蜴
- 湟
- 衙
- 敖
- 阐
- 穗
- 攥
- 俾
- 锥
- 粱
- 绰
- 漕
- 钕
- 硼
- 蚤
- 铢
- 疚
- 挟
- 昱
- 栅
- 煦
- 鳝
- 枸
- 锯
- 茜
- 悼
- 跤
- 犍
- 衿
- 筐
- 恪
- 琛
- 砝
- 秆
- 歆
- 晾
- 慑
- 蜍
- 诃
- 盔
- 寇
- 璧
- 鹩
- 恤
- 匿
- 踉
- 焗
- 戍
- 憎
- 桓
- 裔
- 梢
- 蝼
- 贿
- 诽
- 橄
- 榄
- 蔺
- 鲅
- 鳖
- 荞
- 槐
- 砚
- 癣
- 胚
- 沅
- 菀
- 荀
- 亳
- 铵
- 垌
- 釉
- 摁
- 瑕
- 疵
- 泗
- 逵
- 饵
- 旌
- 磺
- 彗
- 娣
- 晟
- 惘
- 棘
- 屹
- 逾
- 淞
- 逑
- 茴
- 楹
- 珀
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 10
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_zh_char_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: e_branchformer
encoder_conf:
output_size: 256
attention_heads: 4
attention_layer_type: rel_selfattn
pos_enc_layer_type: rel_pos
rel_pos_type: latest
cgmlp_linear_units: 1024
cgmlp_conv_kernel: 31
use_linear_after_conv: false
gate_activation: identity
num_blocks: 16
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
layer_drop_rate: 0.0
linear_units: 1024
positionwise_layer_type: linear
use_ffn: true
macaron_ffn: true
merge_conv_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202301'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 133 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: model_TrainTestSplit_berturk_v2_24Feb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_TrainTestSplit_berturk_v2_24Feb
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0003
- Precision: 0.9999
- Recall: 0.9999
- F1: 0.9999
- Accuracy: 0.9999
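A token-classification sketch with the transformers pipeline (the repo id is a placeholder, since only the model name is given; the Turkish sentence is an assumption):
```python
from transformers import pipeline

# "<user>/model_TrainTestSplit_berturk_v2_24Feb" is a placeholder repo id.
ner = pipeline(
    "token-classification",
    model="<user>/model_TrainTestSplit_berturk_v2_24Feb",
    aggregation_strategy="simple",
)
print(ner("Mustafa Kemal Atatürk 1881 yılında Selanik'te doğdu."))
```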
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 196 | 0.0058 | 0.9982 | 0.9980 | 0.9981 | 0.9986 |
| No log | 2.0 | 392 | 0.0042 | 0.9987 | 0.9986 | 0.9986 | 0.9990 |
| 0.0132 | 3.0 | 588 | 0.0042 | 0.9985 | 0.9988 | 0.9986 | 0.9990 |
| 0.0132 | 4.0 | 784 | 0.0022 | 0.9993 | 0.9992 | 0.9992 | 0.9993 |
| 0.0132 | 5.0 | 980 | 0.0020 | 0.9993 | 0.9992 | 0.9993 | 0.9995 |
| 0.0069 | 6.0 | 1176 | 0.0013 | 0.9994 | 0.9994 | 0.9994 | 0.9995 |
| 0.0069 | 7.0 | 1372 | 0.0008 | 0.9997 | 0.9997 | 0.9997 | 0.9998 |
| 0.0035 | 8.0 | 1568 | 0.0008 | 0.9997 | 0.9997 | 0.9997 | 0.9998 |
| 0.0035 | 9.0 | 1764 | 0.0006 | 0.9996 | 0.9997 | 0.9996 | 0.9997 |
| 0.0035 | 10.0 | 1960 | 0.0004 | 0.9998 | 0.9999 | 0.9998 | 0.9999 |
| 0.0019 | 11.0 | 2156 | 0.0003 | 0.9999 | 0.9999 | 0.9999 | 0.9999 |
| 0.0019 | 12.0 | 2352 | 0.0003 | 0.9999 | 0.9999 | 0.9999 | 0.9999 |
| 0.0012 | 13.0 | 2548 | 0.0004 | 0.9999 | 0.9999 | 0.9999 | 0.9999 |
| 0.0012 | 14.0 | 2744 | 0.0003 | 0.9999 | 0.9999 | 0.9999 | 0.9999 |
| 0.0012 | 15.0 | 2940 | 0.0003 | 0.9999 | 0.9999 | 0.9999 | 0.9999 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 231.19 +/- 25.37
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are assumptions, not from this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# repo_id and filename are placeholders for the actual Hub repository.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 574 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-40
This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-refute-no-label-40](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-refute-no-label-40) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1048
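A fill-mask sketch (the repo id is a placeholder built from the model name; the masked sentence is an assumption):
```python
from transformers import pipeline

# "<user>/bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-40" is a placeholder repo id.
fill_mask = pipeline(
    "fill-mask",
    model="<user>/bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-40",
)
print(fill_mask("The evidence [MASK] the original claim."))
```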
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.9062 | 1.0 | 1 | 5.6418 |
| 4.5785 | 2.0 | 2 | 5.3839 |
| 6.6562 | 3.0 | 3 | 5.0883 |
| 4.0274 | 4.0 | 4 | 4.4272 |
| 2.9225 | 5.0 | 5 | 4.1994 |
| 1.9388 | 6.0 | 6 | 2.9638 |
| 2.6745 | 7.0 | 7 | 2.4477 |
| 2.0988 | 8.0 | 8 | 2.4030 |
| 2.3506 | 9.0 | 9 | 3.5475 |
| 1.734 | 10.0 | 10 | 0.1426 |
| 1.8435 | 11.0 | 11 | 2.2994 |
| 1.5274 | 12.0 | 12 | 1.5195 |
| 1.5668 | 13.0 | 13 | 1.3508 |
| 1.4771 | 14.0 | 14 | 1.5684 |
| 1.4649 | 15.0 | 15 | 0.0011 |
| 1.0896 | 16.0 | 16 | 2.2005 |
| 0.9002 | 17.0 | 17 | 0.0748 |
| 1.2433 | 18.0 | 18 | 0.4664 |
| 1.4224 | 19.0 | 19 | 1.5759 |
| 0.791 | 20.0 | 20 | 0.4863 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125M-finetuned-seinfeld
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-finetuned-seinfeld
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1742
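As a sketch of direct loading without the pipeline helper (the repo id is a placeholder; the prompt is an assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "<user>/gpt-neo-125M-finetuned-seinfeld" is a placeholder repo id.
tokenizer = AutoTokenizer.from_pretrained("<user>/gpt-neo-125M-finetuned-seinfeld")
model = AutoModelForCausalLM.from_pretrained("<user>/gpt-neo-125M-finetuned-seinfeld")

inputs = tokenizer("GEORGE:", return_tensors="pt")  # illustrative prompt
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```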
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5154 | 0.99 | 26 | 3.4073 |
| 3.3109 | 1.99 | 52 | 3.2356 |
| 3.1383 | 2.99 | 78 | 3.1584 |
| 3.0213 | 3.99 | 104 | 3.1206 |
| 2.9253 | 4.99 | 130 | 3.1032 |
| 2.8361 | 5.99 | 156 | 3.0963 |
| 2.7517 | 6.99 | 182 | 3.1016 |
| 2.6606 | 7.99 | 208 | 3.1131 |
| 2.5651 | 8.99 | 234 | 3.1442 |
| 2.4641 | 9.99 | 260 | 3.1742 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
CAUKiel/JavaBERT
|
[
"pytorch",
"safetensors",
"bert",
"fill-mask",
"code",
"arxiv:2110.10404",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 388 | null |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: finetune_teacher_clean_mozilla_200_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_teacher_clean_mozilla_200_epochs
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 51.1994
- Wer: 0.2767
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 192.4936 | 29.41 | 1000 | 31.6902 | 0.3122 |
| 123.9408 | 58.82 | 2000 | 36.2166 | 0.3028 |
| 87.1469 | 88.23 | 3000 | 43.5998 | 0.3144 |
| 62.0674 | 117.64 | 4000 | 44.5869 | 0.2944 |
| 44.2649 | 147.06 | 5000 | 47.9859 | 0.2825 |
| 33.7306 | 176.47 | 6000 | 51.1994 | 0.2767 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
CLAck/vi-en
|
[
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] |
translation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | 2023-02-24T05:49:39Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: sanskritikhare142/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sanskritikhare142/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5633
- Validation Loss: 1.7706
- Epoch: 2
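A question-answering sketch (the repo id comes from this card; `framework="tf"` assumes only TensorFlow weights were pushed, and the question/context pair is illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="sanskritikhare142/my_awesome_qa_model",
    framework="tf",  # assumes the repo holds TF weights only
)
print(qa(
    question="Which base model was fine-tuned?",
    context="The model is a fine-tuned version of distilbert-base-uncased.",
))
```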
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4834 | 2.1551 | 0 |
| 1.8264 | 1.7706 | 1 |
| 1.5633 | 1.7706 | 2 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.0
- Tokenizers 0.13.2
|
CLEE/CLEE
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: DesignOrder/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pos
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | 2023-02-24T09:50:44Z |
---
language: en
thumbnail: http://www.huggingtweets.com/wafyru/1677232609181/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1609988239183728642/2QQ6lp1v_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Wafer</div>
<div style="text-align: center; font-size: 14px;">@wafyru</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Wafer.
| Data | Wafer |
| --- | --- |
| Tweets downloaded | 3230 |
| Retweets | 142 |
| Short tweets | 1260 |
| Tweets kept | 1828 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/5qbo0j7d/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wafyru's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vpzkxoj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vpzkxoj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/wafyru')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-qa-mlqa
|
[
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- NLP-MINI-PROJECT/rabbi_kook
metrics:
- rouge
model-index:
- name: kook-model-output-dir-2
results:
- task:
name: Summarization
type: summarization
dataset:
name: NLP-MINI-PROJECT/rabbi_kook
type: NLP-MINI-PROJECT/rabbi_kook
metrics:
- name: Rouge1
type: rouge
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-rabbi-kook
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the NLP-MINI-PROJECT/rabbi_kook dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7677
- Gen Len: 115.8184
## Model description
Summarization model of fine-tuned mt5-small with Rabbi-Kook paragraphs and summaries.
## Intended uses & limitations
Summarization of Rabbi-Kook style paragraphs.
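A minimal usage sketch with the 🤗 Transformers `pipeline` (the repo id below is hypothetical; substitute the actual Hub path of this checkpoint):
```python
from transformers import pipeline

# hypothetical repo id; replace with the actual Hub path of this checkpoint
summarizer = pipeline("summarization", model="NLP-MINI-PROJECT/mt5-small-rabbi-kook")

print(summarizer("<Rabbi Kook style Hebrew paragraph>", max_length=128)[0]["summary_text"])
```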
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.11.0
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-xnli
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 36 | 2023-02-24T10:00:09Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Ralist Dreambooth model trained by Jokinglemon007 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
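You can also try the concept locally with 🤗 Diffusers. A minimal sketch; the repo id and prompt token below are assumptions based on this card:
```python
import torch
from diffusers import StableDiffusionPipeline

# hypothetical repo id and concept token; adjust to this repository's actual path and trained token
pipe = StableDiffusionPipeline.from_pretrained("Jokinglemon007/ralist", torch_dtype=torch.float16).to("cuda")
image = pipe("a portrait photo, ralist style").images[0]
image.save("ralist_sample.png")
```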
Sample pictures of this concept:
|
dccuchile/distilbert-base-spanish-uncased-finetuned-mldoc
|
[
"pytorch",
"distilbert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- shrinath-suresh/qa-10k
model-index:
- name: bart-qa10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-qa10k
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the shrinath-suresh/qa-10k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
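Since the base checkpoint is a sequence-to-sequence BART model, one plausible way to query it is through the `text2text-generation` pipeline. This is a sketch; the repo id is hypothetical, and the question-in, answer-out framing is an assumption about how qa-10k was formatted:
```python
from transformers import pipeline

# hypothetical repo id; replace with the actual Hub path of this checkpoint
qa = pipeline("text2text-generation", model="shrinath-suresh/bart-qa10k")

print(qa("How do I save a PyTorch model?", max_length=128)[0]["generated_text"])
```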
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1
- Datasets 2.10.0
- Tokenizers 0.13.2
|
dccuchile/distilbert-base-spanish-uncased-finetuned-xnli
|
[
"pytorch",
"distilbert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 31 | 2023-02-24T10:11:17Z |
---
language:
- tr
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
metrics:
- wer
model-index:
- name: base Turkish Whisper (bTW)
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base Turkish Whisper (bTW)
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Ermetal Meetings dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1975
- Wer: 1.6817
- Cer: 1.2800
## Model description
More information needed
## Intended uses & limitations
More information needed
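A minimal transcription sketch with the 🤗 Transformers `pipeline` (the repo id is a placeholder; substitute this checkpoint's actual Hub path):
```python
from transformers import pipeline

# placeholder repo id; substitute the actual Hub path of this checkpoint
asr = pipeline("automatic-speech-recognition", model="<org>/base-turkish-whisper")

print(asr("meeting_clip.wav")["text"])
```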
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 1.5514 | 33.31 | 100 | 1.6389 | 0.8196 | 0.8754 |
| 0.1703 | 66.62 | 200 | 1.6896 | 1.0058 | 0.6987 |
| 0.0039 | 99.92 | 300 | 1.9380 | 1.7011 | 1.1631 |
| 0.0015 | 133.31 | 400 | 2.0324 | 1.6950 | 1.2498 |
| 0.0008 | 166.62 | 500 | 2.0957 | 1.4898 | 1.0992 |
| 0.0005 | 199.92 | 600 | 2.1417 | 1.7320 | 1.2528 |
| 0.0004 | 233.31 | 700 | 2.1681 | 1.6077 | 1.1845 |
| 0.0003 | 266.62 | 800 | 2.1847 | 1.6250 | 1.2008 |
| 0.0003 | 299.92 | 900 | 2.1944 | 1.6515 | 1.2196 |
| 0.0003 | 333.31 | 1000 | 2.1975 | 1.6817 | 1.2800 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.0+cu102
- Datasets 2.9.0
- Tokenizers 0.13.2
|
dccuchile/distilbert-base-spanish-uncased
|
[
"pytorch",
"distilbert",
"fill-mask",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 670 | 2023-02-24T10:11:48Z |
---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: whisper-base-ft-common-language-id
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-ft-common-language-id
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0725
- Accuracy: 0.7525
## Model description
More information needed
## Intended uses & limitations
More information needed
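A minimal language-identification sketch (the repo id is a placeholder; substitute this checkpoint's actual Hub path):
```python
from transformers import pipeline

# placeholder repo id; substitute the actual Hub path of this checkpoint
lang_id = pipeline("audio-classification", model="<org>/whisper-base-ft-common-language-id")

print(lang_id("utterance.wav", top_k=3))
```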
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5291 | 1.0 | 694 | 2.4787 | 0.4806 |
| 1.5801 | 2.0 | 1388 | 1.6258 | 0.6260 |
| 1.0144 | 3.0 | 2082 | 1.2886 | 0.6816 |
| 0.7442 | 4.0 | 2776 | 1.0783 | 0.7237 |
| 0.4802 | 5.0 | 3470 | 1.0582 | 0.7266 |
| 0.3378 | 6.0 | 4164 | 1.0173 | 0.7417 |
| 0.1941 | 7.0 | 4858 | 1.0054 | 0.7446 |
| 0.1424 | 8.0 | 5552 | 1.0213 | 0.7508 |
| 0.1242 | 9.0 | 6246 | 1.0567 | 0.7495 |
| 0.1527 | 10.0 | 6940 | 1.0725 | 0.7525 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Chaewon/mmnt_decoder_en
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | 2023-02-24T10:28:41Z |
This is a dataset containing the JOB-light workload along with each query's ground-truth cardinality on the IMDB dataset.
JOB-light is a workload of 70 queries derived from the Join Order Benchmark (JOB); it contains no predicates on strings and no disjunctions, and each query uses at most four joins.
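For illustration, here is a hypothetical query in the JOB-light style (numeric predicates only, at most four joins); it is not necessarily one of the 70 benchmark queries:
```python
# a hypothetical JOB-light-style query over the IMDB schema
job_light_style_query = """
SELECT COUNT(*)
FROM title t, movie_companies mc, movie_info_idx mi_idx
WHERE t.id = mc.movie_id
  AND t.id = mi_idx.movie_id
  AND t.production_year > 2005
  AND mi_idx.info_type_id = 101;
"""
```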
|
Chaewon/mnmt_decoder_en
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # on older course setups: import gym

# `load_from_hub` is the pickle-loading helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="AigizK/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Chaewon/mnmt_decoder_en_gpt2
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-02-24T10:35:21Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.63 +/- 4.53
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r dbaibak/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
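For example (a sketch; the repository name is a placeholder, and the flags follow the Sample-Factory 2.0 documentation):
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --push_to_hub --hf_repository=<your_hf_username>/rl_course_vizdoom_health_gathering_supreme
```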
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it previously concluded.
|
Chaima/TunBerto
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-02-24T10:38:43Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: Leonhard17/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
ChaitanyaU/FineTuneLM
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tmvar-bert-base-cased-finetuned-24-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmvar-bert-base-cased-finetuned-24-02
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Chakita/KNUBert
|
[
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 20 | null |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# OpenVINO Stable Diffusion
This repository contains the models from [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) from RunwayML converted to
OpenVINO, for accelerated inference on CPU with OpenVINO's integration into Optimum:
[optimum-intel](https://github.com/huggingface/optimum-intel#openvino). Please check out the [source model
repository](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more information about the model and its license.
To install the requirements for this demo, run `pip install optimum[openvino]`. This installs all the necessary dependencies,
including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide).
The simplest way to generate an image with stable diffusion takes only two lines of code, as shown below. The first line downloads the
model from the Hugging Face hub (if it has not been downloaded before) and loads it; the second line generates an image.
```python
from optimum.intel.openvino import OVStableDiffusionPipeline

stable_diffusion = OVStableDiffusionPipeline.from_pretrained("helenai/runwayml-stable-diffusion-v1-5-ov-fp32")
images = stable_diffusion("sailing ship in storm by Leonardo da Vinci").images
```
The following example code uses static shapes for even faster inference. Using larger image sizes will
require more memory and take longer to generate.
If you have an 11th generation or later Intel Core processor, you can use the integrated GPU for inference, and if you have an Intel
discrete GPU, you can use that. Add the line `stable_diffusion.to("GPU")` before `stable_diffusion.compile()` in the example below.
Model loading will take some time the first time, but will be faster after that, because the model will be cached. On GPU, for stable
diffusion only static shapes are supported at the moment.
```python
from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline
batch_size = 1
num_images_per_prompt = 1
height = 256
width = 256
# load the model and reshape to static shapes for faster inference
model_id = "helenai/runwayml-stable-diffusion-v1-5-ov-fp32"
stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False)
stable_diffusion.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt)
stable_diffusion.compile()
# generate image!
prompt = "sailing ship in storm by Leonardo da Vinci"
images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images
images[0].save("result.png")
```
|
Chakita/KROBERT
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"masked-lm",
"fill-in-the-blanks",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-pokemons-256_500_epochs
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline

# a minimal sketch, assuming this checkpoint was saved as a standard DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Yagorka/ddpm-pokemons-256_500_epochs")
pipeline().images[0].save("pokemon_sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 3
- eval_batch_size: 10
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Yagorka/ddpm-pokemons-256_500_epochs/tensorboard?#scalars)
|
Chakita/KannadaBERT
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"masked-lm",
"fill-in-the-blanks",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- anime
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
text: meitu
---
[](https://huggingface.co/spaces/Duskfallcrew/Animated_Dreams)
### Animated Dreams Dreambooth model trained by Duskfallcrew with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
# Coffee is nice:
https://ko-fi.com/DUSKFALLcrew
Concept tag: Meitu
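You can also generate locally with 🤗 Diffusers (a minimal sketch; the repo id is an assumption based on this card's demo Space, and the prompt uses the concept tag above):
```python
import torch
from diffusers import StableDiffusionPipeline

# hypothetical repo id; replace with this repository's actual Hub path
pipe = StableDiffusionPipeline.from_pretrained("Duskfallcrew/Animated_Dreams", torch_dtype=torch.float16).to("cuda")
image = pipe("meitu, portrait of a smiling girl, anime style").images[0]
image.save("animated_dreams_sample.png")
```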
# Model Updates on CivIt:
https://civitai.com/user/duskfallcrew
# Sample Images Are available here



# More sample images will be added to the folder with text files here:
https://huggingface.co/Duskfallcrew/animutest/tree/main/Concept_Stuff
|
Chun/w-zh2en-mto
|
[
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | 2023-02-24T12:21:33Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: conversationv8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conversationv8
This model is a fine-tuned version of [gorkemgoknar/gpt2chatbotenglish](https://huggingface.co/gorkemgoknar/gpt2chatbotenglish) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
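Since the base checkpoint is a GPT-2 chatbot, a minimal generation sketch might look like this (the repo id is hypothetical; substitute the actual Hub path of this checkpoint):
```python
from transformers import pipeline

# hypothetical repo id; replace with the actual Hub path of this checkpoint
chat = pipeline("text-generation", model="<user>/conversationv8")

print(chat("Hello, how are you today?", max_length=60)[0]["generated_text"])
```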
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 28
- eval_batch_size: 28
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Culmenus/XLMR-ENIS-finetuned-ner
|
[
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:agpl-3.0",
"model-index",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: Ping hair
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - bo
These are LoRA adaption weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "Ping hair" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: Ping




|
DTAI-KULeuven/mbert-corona-tweets-belgium-topics
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"multilingual",
"nl",
"fr",
"en",
"arxiv:2104.09947",
"transformers",
"Dutch",
"French",
"English",
"Tweets",
"Topic classification"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 167 | 2023-02-24T17:18:27Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # on older course setups: import gym

# `load_from_hub` is the pickle-loading helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="BarefootBayes/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|