repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
spatial/Reinforce-CartPole8
|
spatial
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Arron/bert-finetuned-ner
|
Arron
|
bert
| 22 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,512 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0607
- Precision: 0.9319
- Recall: 0.9495
- F1: 0.9406
- Accuracy: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0845 | 1.0 | 1756 | 0.0626 | 0.9146 | 0.9337 | 0.9241 | 0.9827 |
| 0.0414 | 2.0 | 3512 | 0.0561 | 0.9321 | 0.9492 | 0.9405 | 0.9861 |
| 0.0198 | 3.0 | 5268 | 0.0607 | 0.9319 | 0.9495 | 0.9406 | 0.9860 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Mykolyt/q-Taxi-v3
|
Mykolyt
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 363 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Mykolyt/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
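Continuing the snippet above, a possible sketch for rolling the loaded policy out greedily (the `qtable` key and the newer gym reset/step API are assumptions, not confirmed by this card):
```python
import numpy as np

# Hypothetical evaluation sketch: act greedily from the loaded Q-table.
# Assumes the pickled dict stores the table under model["qtable"] and that
# env follows the newer gym API (reset() -> (obs, info), step() -> 5-tuple).
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```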
|
morisato/scenery_LoRA_01
|
morisato
| null | 5 | 0 | null | 3 | null | false | false | false |
unknown
|
['ja']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,548 |
# Background LoRA
This is an experiment to see whether additional training can make it possible to generate places and scenery that existing models cannot produce from prompts alone.
# karaokeroom.safetensors
[<img width="480" src="https://i.imgur.com/hclI0vj.jpg">](https://i.imgur.com/hclI0vj.jpg)
[<img width="480" src="https://i.imgur.com/8H3c7eE.jpg">](https://i.imgur.com/8H3c7eE.jpg)
This LoRA was trained on the look and feel of karaoke-bar rooms.
Load the LoRA and write **karaokeroom** in your prompt.
If you also include prompts such as 1girl, karaoke, microphone, you will get images that look like someone singing karaoke.
Note: applying this LoRA seems to affect character rendering and the overall art style. Lowering the weight at which the LoRA is applied reduces its impact on the style.
If the effect bothers you, adjust the weight, e.g. use \<lora:karaokeroom:0.6\> instead of \<lora:karaokeroom:1\>.
Note: even with prompts such as karaokeroom, 1girl, karaoke, you may sometimes get only the room scenery without the character drawn properly.
There is an element of luck involved, and the LoRA may not work well with some models. In that case, patiently generate a few more images or try a different model.
## Why I tried this experiment
For example, writing shibuya,city, in a prompt produces scenery that looks like Shibuya; I take this to mean the model knows the concept of "Shibuya".
However, writing Nishinomiya does not produce scenery that looks like Nishinomiya; the model apparently does not know the concept of "Nishinomiya".
Recently, LoRA has made additional training feasible even on low-spec PCs (GPUs), and many people are using it to teach characters, outfits, and other things that existing models cannot draw.
So I wondered whether training on a handful of photos of a scene would teach the model the concept of that "place".
I prepared about 20 photos of karaoke-bar rooms and tagged them with WD14-tagger, then added the word karaokeroom, at the beginning of every resulting txt file.
Since the base model does not know the concept karaokeroom, I assumed this training would make it acquire karaokeroom as a new concept.
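As an illustration, a minimal sketch of that captioning step (the folder path is hypothetical, not the one actually used):
```python
from pathlib import Path

# Prepend the trigger word "karaokeroom" to every WD14-tagger caption file.
# "train/karaokeroom" is an assumed folder layout for illustration only.
caption_dir = Path("train/karaokeroom")
for caption_file in caption_dir.glob("*.txt"):
    tags = caption_file.read_text(encoding="utf-8").strip()
    caption_file.write_text("karaokeroom, " + tags, encoding="utf-8")
```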
After running the training and loading the resulting LoRA, the karaokeroom prompt described above does generate images that look like karaoke-bar rooms.
So I would call this experiment in teaching the concept of a place a success.
If a reliable way to teach place concepts with LoRA can be established, training on various Japanese scenery should make it possible to generate illustrations of familiar places; this is a first step in that direction.
## Problems and future work
The karaoke-bar scenery can now be reproduced, but when this LoRA is applied to generate a girl singing karaoke, character rendering and the art style are sometimes affected.
This is probably because the LoRA learned not only the concept of the place but also the style of the source photos.
For now, lowering the weight at which the LoRA is applied mitigates the effect, but I am wondering whether the training method or the way the LoRA is applied could reduce it instead.
- Could the effect be suppressed by adjusting weights per U-Net layer?
I honestly don't understand this well myself (!), but through block-weighted merging (Merge Block Weighted) many people have experimented with model merges, and it has become apparent that adjusting the individual U-Net layers used for image generation gives various kinds of control over the output.
For example, there are theories such as "the upper IN layers handle a realistic style while the lower IN layers handle an anime style", "M_00 strongly affects characters, clothing, backgrounds, and so on overall", "the upper OUT layers influence everything other than the subject (for example the background)", and "OUT04, OUT05, OUT06 strongly affect faces".
A setting like karaokeroom:0.6 as mentioned above presumably lowers the influence across the whole U-Net; if we could identify the U-Net layers that contribute most to the background, suppressing the LoRA's contribution to the other layers might let the base model's character rendering coexist with the additionally trained background LoRA.
I have been investigating this with a script that adjusts the weight per U-Net layer (sd-webui-lora-block-weight: https://github.com/hako-mikan/sd-webui-lora-block-weight ), generating XY Plot images with various values, but so far the main conclusion is "I have learned that I understand nothing", and nothing definite has emerged.
- Curate the training photos more carefully
- Could the way captions (tags) are written during training keep the style from being learned?
- Could regularization images be used to tune the training?
These and other ideas come to mind, but this is still very much trial and error; I don't have enough data points and haven't found a good solution yet.
It looks quite difficult, but it would be great to have a method that teaches only the concept of the place.
|
aisingapore/undersupervised-feature-decomposition
|
aisingapore
| null | 10 | 0 | null | 0 |
text-classification
| false | false | false |
gpl-3.0
|
['en', 'de', 'fr', 'ja']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['ufd', 'text-classification', 'undersupervised-feature-decomposition']
| false | true | true | 3,920 |
# Cross Lingual Cross Domain
You can **try out the model** at [SGNLP](https://sgnlp.aisingapore.net/cross-lingual-cross-domain).<br />
If you want to find out more information, please contact us at [SGNLP-AISingapore]([email protected]).
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Model Parameters](#model-parameters)
- [License](#license)
## Model Details
**Model Name:** Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model
- **Description:** It is an implementation of the Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model paper.
- **Paper:** Unsupervised domain adaptation of a pretrained cross-lingual language model. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Nov, 2020 (pp. 3672-3678).
- **Author(s):** Li, J., He, R., Ye, H., Ng, H. T., Bing, L., & Yan, R. (2020).
- **URL:** https://www.ijcai.org/Proceedings/2020/508
# How to Get Started With the Model
## Install Python package
SGnlp is an initiative by AI Singapore's NLP Hub. They aim to bridge the gap between research and industry, promote translational research, and encourage adoption of NLP techniques in the industry. <br><br>
Various NLP models, other than cross-lingual cross-domain, are available in the Python package. You can try them out at [SGNLP-Demo](https://sgnlp.aisingapore.net/) | [SGNLP-Github](https://github.com/aisingapore/sgnlp).
```bash
pip install sgnlp
```
## Examples
For a full code guide, please refer to this [documentation](https://sgnlp.aisingapore.net/docs/model/ufd.html). <br> Alternatively, you can also try out the [demo](https://sgnlp.aisingapore.net/cross-lingual-cross-domain) for Cross Lingual Cross Domain.
Example of the Unsupervised Feature Decomposition (UFD) model (German language):
```python
from sgnlp.models.ufd import UFDModelBuilder, UFDPreprocessor
# Instantiate model builder and preprocessor
model_builder = UFDModelBuilder(
source_domains=['books'],
target_languages=['de'],
target_domains=['dvd'])
preprocessor = UFDPreprocessor()
# Build pretrained model groups
model_groups = model_builder.build_model_group()
# Model predict ('books_de_dvd' model example)
instance = """Wolverine is BACK Der Film ist im Grunde wie alle Teile der X-Men für Comic-Fans auf jeden Fall ein muss.
Hugh Jackman spielt seine Rolle wie immer so gut was ich von den ein oder anderen Darsteller leider nicht
sagen kann. Story und Action sind aber genug Gründe um sich die Blu-ray zu kaufen."""
instance_features = preprocessor([instance])
output = model_groups['books_de_dvd'](**instance_features)
```
# Training
The training datasets can be retrieved from the author's repository ([github](https://github.com/lijuntaopku/UFD/tree/main/data)).
#### Training Results - For UFD
- **Training Time: (Unsupervised training)** ~3 hours for 30 epochs on a single V100 GPU
- **Training Time: (Supervised training)** ~3 hours for 60 epochs on a single V100 GPU
# Model Parameters
- **Model Weights:** [refer to documentation for details](https://sgnlp.aisingapore.net/docs/model/ufd.html)
- **Model Config:** [refer to documentation for details](https://sgnlp.aisingapore.net/docs/model/ufd.html)
- **Model Inputs:** Raw text.
- **Model Outputs:** Array of logits with the size of number of classes.
- **Model Size:** XLM-Roberta: ~2.2GB, Adaptor Domain: ~8.0MB, Adaptor Global: ~8.0MB, Feature Mapper: ~8.0MB, Classifier: ~9.1KB.
- **Model Inference Info:** ~2 sec on Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz.
- **Usage Scenarios:** Sentiment analysis for eCommerce with operations across multiple countries.
# License
- **For non-commercial use:** GNU GPLv3.
- **For commercial use:** please contact us at [SGNLP-AISingapore]([email protected]).
|
EvaKlimentova/knots_protbertBFD_alphafold
|
EvaKlimentova
|
bert
| 8 | 6 |
transformers
| 0 |
text-classification
| true | false | false | null | null |
['EvaKlimentova/knots_AF']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,657 |
# M1 - finetuned ProtBert-BFD
The model is trained on [knots_AF dataset](https://huggingface.co/datasets/EvaKlimentova/knots_AF)
The accuracy on the test set is ~ 0.9848
| M1 ProtBert BFD | Dataset size | Unknotted set size | Accuracy | TPR | TNR |
|:----------------------------:|:------------:|:------------------:|:--------:|:------:|:-------:|
| All | 39412 | 19718 | 0.9848 | 0.9871 | 0.9826 |
| SPOUT | 7371 | 550 | 0.9905 | 0.9963 | 0.9182 |
| TDD | 612 | 24 | 0.9918 | 0.9966 | 0.8750 |
| DUF | 736 | 429 | 0.97905 | 0.9826 | 0.9767 |
| AdoMet synthase | 1794 | 240 | 0.9939 | 0.9968 | 0.9750 |
| Carbonic anhydrase | 1531 | 539 | 0.9556 | 0.9718 | 0.9258 |
| UCH | 477 | 125 | 0.9099 | 0.9631 | 0.7600 |
| ATCase/OTCase | 3799 | 3352 | 0.9992 | 0.9955 | 0.9997 |
| ribosomal-mitochondrial | 147 | 41 | 0.8912 | 0.9906 | 0.63412 |
| membrane | 8309 | 1577 | 0.9791 | 0.9895 | 0.9347 |
| VIT | 14347 | 12639 | 0.9873 | 0.9415 | 0.9935 |
| biosynthesis of lantibiotics | 392 | 286 | 0.9719 | 0.9811 | 0.9685 |
| PGluconate dehydrogenase | 1 | 0 | 1.0 | 1.0 | |
|
deprem-ml/deprem-ner-mdebertav3
|
deprem-ml
|
deberta-v2
| 14 | 16 |
transformers
| 3 |
token-classification
| true | false | false |
apache-2.0
|
['tr']
| null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,580 |
# Model: 'deprem-ner-mdebertav3'
### Validation Results
- **Precision:** 0.711819
- **Recall:** 0.783626
- **F1:** 0.745999
- **Accuracy:** 0.933360
### Training Parameters
```
evaluation_strategy="epoch"
save_strategy="epoch"
load_best_model_at_end=True
learning_rate=3e-5
per_device_train_batch_size=8
per_device_eval_batch_size=16
num_train_epochs=15
weight_decay=0.01
seed=42
```
### Examples
With this model we tried to extract information such as street, province, and district from the reports of people trapped under debris in the earthquake.
Example inputs:
- "Lütfen yardım Akevler mahallesi Rüzgar sokak Tuncay apartmanı zemin kat Antakya akrabalarım göçük altında #hatay #Afad"
- "MARAȘA'ta arkadaşimizdan haber alamıyoruz ACIL yardım Penta Park konutları 1. Blok en üst kat 11. Kat \n\n@AFADBaskanlik #kahramanmaraş\nACİL"
Example outputs:
```
[
{
"entity_group": "mahalle",
"score": 0.8160411715507507,
"word": "Akevler mahallesi",
"start": 14,
"end": 31
},
{
"entity_group": "sokak",
"score": 0.940501868724823,
"word": "Rüzgar sokak",
"start": 32,
"end": 44
},
{
"entity_group": "Apartman/Site",
"score": 0.8081040978431702,
"word": "Tuncay apartmanı",
"start": 45,
"end": 61
},
{
"entity_group": "ilce",
"score": 0.854024350643158,
"word": "Antakya",
"start": 72,
"end": 79
}
]
```
### Evaluation
We compared this model with other models on the Hugging Face Hub; you can find the results for 30 sample inputs in [this repository](https://huggingface.co/datasets/deprem-ml/butun_model_benchmarklari).
|
minoosh/finetuned_bert-base-uncased
|
minoosh
|
bert
| 18 | 7 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,405 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8732
- Accuracy: 0.4263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7365 | 1.0 | 502 | 1.5167 | 0.4288 |
| 1.3495 | 2.0 | 1004 | 1.4797 | 0.4592 |
| 1.1131 | 3.0 | 1506 | 1.5093 | 0.4527 |
| 0.9213 | 4.0 | 2008 | 1.6501 | 0.4522 |
| 0.7787 | 5.0 | 2510 | 1.7494 | 0.4407 |
| 0.6594 | 6.0 | 3012 | 1.8600 | 0.4417 |
| 0.5807 | 7.0 | 3514 | 1.9974 | 0.4412 |
| 0.5142 | 8.0 | 4016 | 2.0887 | 0.4273 |
| 0.4716 | 9.0 | 4518 | 2.1556 | 0.4273 |
| 0.4364 | 10.0 | 5020 | 2.2847 | 0.4348 |
| 0.3934 | 11.0 | 5522 | 2.3842 | 0.4298 |
| 0.3774 | 12.0 | 6024 | 2.4663 | 0.4228 |
| 0.3498 | 13.0 | 6526 | 2.5637 | 0.4253 |
| 0.337 | 14.0 | 7028 | 2.6162 | 0.4273 |
| 0.3191 | 15.0 | 7530 | 2.6466 | 0.4268 |
| 0.3081 | 16.0 | 8032 | 2.6214 | 0.4288 |
| 0.2889 | 17.0 | 8534 | 2.8064 | 0.4258 |
| 0.2831 | 18.0 | 9036 | 2.8042 | 0.4228 |
| 0.2733 | 19.0 | 9538 | 2.8510 | 0.4288 |
| 0.2648 | 20.0 | 10040 | 2.8732 | 0.4263 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
shashankgarewal/q-FrozenLake-v1-4x4-noSlippery
|
shashankgarewal
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 404 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="shashankgarewal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BeaW/whisper-small-pyttsx3
|
BeaW
|
whisper
| 30 | 18 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hi']
|
['logistics']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,075 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small 2 - BeaW
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Chat analysis dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.7.1+cu110
- Datasets 2.8.0
- Tokenizers 0.11.0
|
mshibatatt/ppo-Huggy
|
mshibatatt
| null | 32 | 4 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 821 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: mshibatatt/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
vvn0/ppo-PyramidsRND
|
vvn0
| null | 32 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
| false | true | true | 830 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: vvn0/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
xavisgg/dqn-SpaceInvadersNoFrameskip-v4
|
xavisgg
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,214 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga xavisgg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga xavisgg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga xavisgg
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Elytum/bert-finetuned-ner
|
Elytum
|
bert
| 10 | 8 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,541 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [gaunernst/bert-small-uncased](https://huggingface.co/gaunernst/bert-small-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0186
- Precision: 0.9941
- Recall: 0.9952
- F1: 0.9946
- Accuracy: 0.9963
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0277 | 1.0 | 2500 | 0.0190 | 0.9929 | 0.9939 | 0.9934 | 0.9956 |
| 0.0137 | 2.0 | 5000 | 0.0180 | 0.9935 | 0.9951 | 0.9943 | 0.9960 |
| 0.0095 | 3.0 | 7500 | 0.0186 | 0.9941 | 0.9952 | 0.9946 | 0.9963 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
RocioUrquijo/clasificador-muchocine
|
RocioUrquijo
|
electra
| 10 | 2 |
transformers
| 0 |
text-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classification', 'generated_from_trainer']
| true | true | true | 1,367 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4310
- Accuracy: 0.4477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3207 | 0.4219 |
| 1.3692 | 2.0 | 776 | 1.2987 | 0.4516 |
| 0.9398 | 3.0 | 1164 | 1.4310 | 0.4477 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pittawat/dqn-SpaceInvadersNoFrameskip-v4
|
pittawat
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,217 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pittawat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pittawat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga pittawat
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
vodanh4299/ppo-LunarLander-v2
|
vodanh4299
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
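A minimal sketch of what that code could look like, assuming the checkpoint in the repo is named `ppo-LunarLander-v2.zip` (a hypothetical filename; check the repo's files):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub and load it into SB3.
checkpoint = load_from_hub(
    repo_id="vodanh4299/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

# Evaluate the loaded agent over a few episodes.
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```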
|
skywalker0803r/my_awesome_new_title_model
|
skywalker0803r
|
t5
| 9 | 9 |
transformers
| 0 |
text2text-generation
| true | true | true | null |
['zh']
|
['CLUECorpusSmall']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 5,687 |
# Chinese T5
## Model description
This is the set of Chinese T5 models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
The Text-to-Text Transfer Transformer (T5) leverages a unified text-to-text format and attains state-of-the-art results on a wide variety of English-language NLP tasks. Following their work, we released a series of Chinese T5 models.
You can download the set of Chinese T5 models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | Link |
| -------- | :-----------------------: |
| **T5-Small** | [**L=6/H=512 (Small)**][small] |
| **T5-Base** | [**L=12/H=768 (Base)**][base] |
In T5, spans of the input sequence are masked by so-called sentinel tokens. Each sentinel token represents a unique mask token for the input sequence and should start with `<extra_id_0>`, `<extra_id_1>`, … up to `<extra_id_99>`. However, `<extra_id_xxx>` is separated into multiple parts in Huggingface's Hosted inference API. Therefore, we replace `<extra_id_xxx>` with `extraxxx` in the vocabulary, and BertTokenizer regards `extraxxx` as one sentinel token.
## How to use
You can use this model directly with a pipeline for text2text generation (take the case of T5-Small):
```python
>>> from transformers import BertTokenizer, T5ForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> model = T5ForConditionalGeneration.from_pretrained("uer/t5-small-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("中国的首都是extra0京", max_length=50, do_sample=False)
[{'generated_text': 'extra0 北 extra1 extra2 extra3 extra4 extra5'}]
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of T5-Small:
Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_with_sentinel_vocab.txt \
--dataset_path cluecorpussmall_t5_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor t5
```
```
python3 pretrain.py --dataset_path cluecorpussmall_t5_seq128_dataset.pt \
--vocab_path models/google_zh_with_sentinel_vocab.txt \
--config_path models/t5/small_config.json \
--output_model_path models/cluecorpussmall_t5_small_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-3 --batch_size 64 \
--span_masking --span_geo_prob 0.3 --span_max_length 5
```
Stage 2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_with_sentinel_vocab.txt \
--dataset_path cluecorpussmall_t5_small_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor t5
```
```
python3 pretrain.py --dataset_path cluecorpussmall_t5_seq512_dataset.pt \
--vocab_path models/google_zh_with_sentinel_vocab.txt \
--pretrained_model_path models/cluecorpussmall_t5_small_seq128_model.bin-1000000 \
--config_path models/t5/small_config.json \
--output_model_path models/cluecorpussmall_t5_small_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-4 --batch_size 16 \
--span_masking --span_geo_prob 0.3 --span_max_length 5
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_t5_from_uer_to_huggingface.py --input_model_path cluecorpussmall_t5_small_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 6 \
--type t5
```
### BibTeX entry and citation info
```
@article{2020t5,
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
journal = {Journal of Machine Learning Research},
pages = {1-67},
year = {2020}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[small]:https://huggingface.co/uer/t5-small-chinese-cluecorpussmall
[base]:https://huggingface.co/uer/t5-base-chinese-cluecorpussmall
|
kasrahabib/XXX08_02_23__-bucket-finetunned
|
kasrahabib
|
bert
| 12 | 19 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,834 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kasrahabib/XXX08_02_23__-bucket-finetunned
This model is a fine-tuned version of [kasrahabib/after_training_rus_combined_relabeled_data_from-bucket-finetunned_batch_size_16](https://huggingface.co/kasrahabib/after_training_rus_combined_relabeled_data_from-bucket-finetunned_batch_size_16) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0316
- Validation Loss: 0.3645
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 8010, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3976 | 0.3499 | 0 |
| 0.2199 | 0.3588 | 1 |
| 0.1392 | 0.3404 | 2 |
| 0.0962 | 0.3372 | 3 |
| 0.0684 | 0.3182 | 4 |
| 0.0595 | 0.3414 | 5 |
| 0.0411 | 0.3519 | 6 |
| 0.0394 | 0.3500 | 7 |
| 0.0338 | 0.3647 | 8 |
| 0.0316 | 0.3645 | 9 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
tomasabril/unit1
|
tomasabril
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
LouisDT/videomae-base-finetuned
|
LouisDT
|
videomae
| 10 | 0 |
transformers
| 0 |
video-classification
| true | false | false |
cc-by-nc-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,495 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5590
- Accuracy: 0.8641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 135
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7163 | 0.21 | 28 | 0.6078 | 0.8098 |
| 0.7383 | 1.21 | 56 | 0.6975 | 0.4728 |
| 0.6853 | 2.21 | 84 | 0.6637 | 0.6957 |
| 0.7065 | 3.21 | 112 | 0.5590 | 0.8641 |
| 0.6673 | 4.17 | 135 | 0.5766 | 0.8587 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jannikskytt/a2c-AntBulletEnv-v0
|
jannikskytt
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 201 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
|
sunwooooong/xlm-roberta-base-finetuned-panx-de-fr
|
sunwooooong
|
xlm-roberta
| 10 | 0 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,320 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1656
- F1: 0.8589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2905 | 1.0 | 715 | 0.1783 | 0.8310 |
| 0.1461 | 2.0 | 1430 | 0.1600 | 0.8455 |
| 0.0948 | 3.0 | 2145 | 0.1656 | 0.8589 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
hypefi/my_awesome_swag_model
|
hypefi
|
bert
| 12 | 3 |
transformers
| 0 |
multiple-choice
| true | false | false |
apache-2.0
| null |
['swag']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,327 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_swag_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0107
- Accuracy: 0.7899
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7454 | 1.0 | 4597 | 0.6122 | 0.7662 |
| 0.3786 | 2.0 | 9194 | 0.6400 | 0.7833 |
| 0.1338 | 3.0 | 13791 | 1.0107 | 0.7899 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
kongacute/ppo-Huggy
|
kongacute
| null | 32 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 820 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: kongacute/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
KoRiF/a2c-PandaReachDense-v2
|
KoRiF
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jakub014/bert-base-uncased-finetuned-effectiveness-dagstuhl
|
jakub014
|
bert
| 13 | 7 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,477 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-effectiveness-dagstuhl
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6418
- Accuracy: 0.6190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 16 | 0.6729 | 0.5714 |
| No log | 2.0 | 32 | 0.6418 | 0.6190 |
| No log | 3.0 | 48 | 0.6719 | 0.5556 |
| No log | 4.0 | 64 | 0.6386 | 0.6032 |
| No log | 5.0 | 80 | 0.6559 | 0.5714 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mirfan899/t5-e2e-questions-generation
|
mirfan899
|
t5
| 11 | 47 |
transformers
| 1 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,649 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-e2e-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 295 | 1.6673 |
| 1.9714 | 2.0 | 590 | 1.6021 |
| 1.9714 | 3.0 | 885 | 1.5820 |
| 1.6225 | 4.0 | 1180 | 1.5665 |
| 1.6225 | 5.0 | 1475 | 1.5643 |
| 1.5252 | 6.0 | 1770 | 1.5676 |
| 1.4558 | 7.0 | 2065 | 1.5581 |
| 1.4558 | 8.0 | 2360 | 1.5600 |
| 1.4169 | 9.0 | 2655 | 1.5604 |
| 1.4169 | 10.0 | 2950 | 1.5634 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
schreon/gpt2-lhm-large-04
|
schreon
|
gpt2
| 11 | 0 |
transformers
| 0 |
text-generation
| true | false | false | null | null |
['training_corpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 965 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-lhm-large-04
This model was trained from scratch on the training_corpus dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
ecemisildar/Reinforce-1
|
ecemisildar
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
EdenYav/dqn-SpaceInvadersNoFrameskip-v4
|
EdenYav
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,212 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga EdenYav -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga EdenYav -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga EdenYav
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 150000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0005),
('learning_starts', 100000),
('n_timesteps', 500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Gokulapriyan/swin-tiny-patch4-window7-224-finetuned-new_dataset_50e
|
Gokulapriyan
|
swin
| 18 | 6 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,415 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-new_dataset_50e
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6407
- Accuracy: 0.7973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.94 | 4 | 0.7081 | 0.6081 |
| No log | 1.94 | 8 | 0.7104 | 0.6351 |
| 0.5516 | 2.94 | 12 | 0.6911 | 0.6351 |
| 0.5516 | 3.94 | 16 | 0.7156 | 0.7027 |
| 0.537 | 4.94 | 20 | 0.7345 | 0.7297 |
| 0.537 | 5.94 | 24 | 0.6745 | 0.6892 |
| 0.537 | 6.94 | 28 | 0.7146 | 0.7297 |
| 0.5333 | 7.94 | 32 | 0.7057 | 0.6892 |
| 0.5333 | 8.94 | 36 | 0.6531 | 0.7027 |
| 0.4871 | 9.94 | 40 | 0.6405 | 0.7027 |
| 0.4871 | 10.94 | 44 | 0.6126 | 0.6892 |
| 0.4871 | 11.94 | 48 | 0.6303 | 0.7027 |
| 0.4432 | 12.94 | 52 | 0.6264 | 0.7027 |
| 0.4432 | 13.94 | 56 | 0.6347 | 0.7432 |
| 0.3669 | 14.94 | 60 | 0.6698 | 0.6622 |
| 0.3669 | 15.94 | 64 | 0.6346 | 0.7568 |
| 0.3669 | 16.94 | 68 | 0.6510 | 0.6892 |
| 0.3704 | 17.94 | 72 | 0.6491 | 0.6892 |
| 0.3704 | 18.94 | 76 | 0.5947 | 0.7568 |
| 0.3624 | 19.94 | 80 | 0.6248 | 0.7027 |
| 0.3624 | 20.94 | 84 | 0.6580 | 0.7027 |
| 0.3624 | 21.94 | 88 | 0.6345 | 0.7162 |
| 0.3164 | 22.94 | 92 | 0.6092 | 0.7568 |
| 0.3164 | 23.94 | 96 | 0.6498 | 0.7162 |
| 0.2777 | 24.94 | 100 | 0.6915 | 0.7703 |
| 0.2777 | 25.94 | 104 | 0.6482 | 0.7838 |
| 0.2777 | 26.94 | 108 | 0.6407 | 0.7973 |
| 0.2946 | 27.94 | 112 | 0.6135 | 0.7838 |
| 0.2946 | 28.94 | 116 | 0.6819 | 0.7568 |
| 0.2546 | 29.94 | 120 | 0.6401 | 0.7568 |
| 0.2546 | 30.94 | 124 | 0.6370 | 0.7432 |
| 0.2546 | 31.94 | 128 | 0.6488 | 0.7703 |
| 0.2477 | 32.94 | 132 | 0.6429 | 0.7973 |
| 0.2477 | 33.94 | 136 | 0.6540 | 0.7703 |
| 0.1968 | 34.94 | 140 | 0.5895 | 0.7973 |
| 0.1968 | 35.94 | 144 | 0.6242 | 0.7568 |
| 0.1968 | 36.94 | 148 | 0.6575 | 0.7568 |
| 0.2235 | 37.94 | 152 | 0.6263 | 0.7703 |
| 0.2235 | 38.94 | 156 | 0.6225 | 0.7838 |
| 0.2005 | 39.94 | 160 | 0.6731 | 0.7703 |
| 0.2005 | 40.94 | 164 | 0.6844 | 0.7703 |
| 0.2005 | 41.94 | 168 | 0.6550 | 0.7703 |
| 0.2062 | 42.94 | 172 | 0.6700 | 0.7703 |
| 0.2062 | 43.94 | 176 | 0.6661 | 0.7703 |
| 0.1933 | 44.94 | 180 | 0.6606 | 0.7838 |
| 0.1933 | 45.94 | 184 | 0.6757 | 0.7703 |
| 0.1933 | 46.94 | 188 | 0.6889 | 0.7568 |
| 0.1895 | 47.94 | 192 | 0.6940 | 0.7568 |
| 0.1895 | 48.94 | 196 | 0.6919 | 0.7568 |
| 0.1666 | 49.94 | 200 | 0.6899 | 0.7432 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ottovoncwim/Reinforce-PixelCopter-PLE-v0
|
ottovoncwim
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 303 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
franfram/distillbert-base-spanish-uncased-finetuned-imdb
|
franfram
|
distilbert
| 13 | 2 |
transformers
| 0 |
fill-mask
| true | false | false | null | null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,357 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-base-spanish-uncased-finetuned-imdb
This model is a fine-tuned version of [CenIA/distillbert-base-spanish-uncased](https://huggingface.co/CenIA/distillbert-base-spanish-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1075 | 1.0 | 157 | 2.6769 |
| 2.7807 | 2.0 | 314 | 2.5764 |
| 2.7003 | 3.0 | 471 | 2.5571 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
RocioUrquijo/clasificador-appreviews
|
RocioUrquijo
|
bert
| 10 | 10 |
transformers
| 0 |
text-classification
| true | false | false |
cc-by-sa-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classification', 'generated_from_trainer']
| true | true | true | 907 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-appreviews
This model is a fine-tuned version of [nlpaueb/sec-bert-base](https://huggingface.co/nlpaueb/sec-bert-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
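No inference snippet is included above; here is a minimal sketch with the standard text-classification pipeline (the example review is illustrative, and the label names depend on how the classification head was configured):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="RocioUrquijo/clasificador-appreviews")
print(classifier("La aplicación se cierra cada vez que intento iniciar sesión."))
```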
|
jannikskytt/a2c-PandaReachDense-v2
|
jannikskytt
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 207 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
|
javiervela/a2c-AntBulletEnv-v0
|
javiervela
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, so check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumption: the checkpoint is stored as "a2c-AntBulletEnv-v0.zip" in this repo
checkpoint = load_from_hub(repo_id="javiervela/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
shashankgarewal/taxi_default_parameters
|
shashankgarewal
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 385 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="shashankgarewal/taxi_default_parameters", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
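Acting greedily with the loaded table looks roughly like this; the `"qtable"` key and the classic gym `reset`/`step` signatures are assumptions:
```python
import gym
import numpy as np

env = gym.make(model["env_id"])
state = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
env.close()
```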
|
MichalJ/dqn-SpaceInvadersNoFrameskip-v4
|
MichalJ
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,214 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MichalJ -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MichalJ -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MichalJ
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
nc33/multiqa_model
|
nc33
|
roberta
| 23 | 7 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,702 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multiqa_model
This model is a fine-tuned version of [nc33/multiqa_model](https://huggingface.co/nc33/multiqa_model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1150
- Precision: 0.0855
- Recall: 0.0485
- F1: 0.0619
- Accuracy: 0.9626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 327 | 0.1121 | 0.0708 | 0.0280 | 0.0402 | 0.9631 |
| 0.0786 | 2.0 | 654 | 0.1098 | 0.0531 | 0.0254 | 0.0343 | 0.9599 |
| 0.0786 | 3.0 | 981 | 0.1085 | 0.0657 | 0.0243 | 0.0354 | 0.9634 |
| 0.0681 | 4.0 | 1308 | 0.1133 | 0.0765 | 0.0453 | 0.0569 | 0.9618 |
| 0.0641 | 5.0 | 1635 | 0.1150 | 0.0855 | 0.0485 | 0.0619 | 0.9626 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Phantom-Artist/phantom-diffusion-s3-the-last-8
|
Phantom-Artist
| null | 39 | 0 | null | 2 | null | false | false | false |
creativeml-openrail-m
|
['en', 'ja']
|
['Phantom-Artist/phantom-diffusion-s3-the-last-8-dataset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['art']
| false | true | true | 7,502 |
The final 8 models that are trained over [hakurei's Waifu Diffusion](https://huggingface.co/hakurei/waifu-diffusion).
Each model was trained on a notable Japanese (and, I think, one Taiwanese) AI artist's works using DreamBooth, with 30 of their works gathered mainly from Twitter (except for sabakichi, whose dataset I collected from pixiv).
It took 1000 steps to train each model, at a learning rate of 5e-6. I have shared the notebook [here](https://huggingface.co/Phantom-Artist/phantom-diffusion/blob/main/DreamBooth_Stable_Diffusion_works_with_SD_2.ipynb) in case you want to try out additional training.
Some are good at backgrounds, while others can generate a semi-realistic style as seen in many SD 1.5 anime/waifu models (and some might be a bit overfitted).
The dataset is published [here](https://huggingface.co/datasets/Phantom-Artist/phantom-diffusion-s3-the-last-8-dataset), and the full members are listed [here](https://twitter.com/i/lists/1622444997240037377) (special thanks to Yunyalula for creating it!).
# For those who are against generative AIs
You can see that now they are the target. Take our tool. Now is the time for payback. Generate images in their styles, and take the power back for yourselves.
# For those who support the development of generative AIs
Some of the AI artists, even though they take advantage of the open strategy of Stable Diffusion, now tend to hide their prompts, trying to monopolize their style (to be clear, I'm not saying the AI artists I trained here are like that).
To keep protecting our values and beliefs in the open community, and to push back against those trying to create another set of pre-modern-style guilds, I will show you a new way.
You no longer need their prompts; just train on their images by yourself to protect the open community. It's not only legal but also ethical, as they themselves have been taking advantage of datasets built on others' work.
# For those who call themselves "phantom 40"
I saw some claiming there should be 48, and here you go. Phantom 48, or do you prefer to call yourselves *PTM* 48 instead? It's up to you.
# Why will they be the last?
My initial intention with this series was a social experiment, to see what would happen if AI artists were targeted for personalized training.
As it became more popular than expected and the artists started calling themselves "phantom 20," I added a second goal: to see how they would react after I added 20 more in one day, and whether they could adapt to the sudden change. They handled it well, and I think that's why they could become notable.
All the reactions to and interpretations of my actions were impressive, but since I have accomplished my goal, and since the mainstream model will probably be SD 2.1 768 (not SD 2.1 512), I will no longer add new models.
I know I couldn't add some of the artists, but no. I will not do it under the name of phantom.
It takes me around 8 hours to train, test, and upload 20 models, and it's just unsustainable to keep doing that every day.
**From now on, anyone who wishes to add more is the next phantom. Train anyone you wish to by yourself.**
# trained artist list
- atsuwo_AI
- recommended pos: multicolored hair, cg
- fladdict
- recommended pos: oil painting/ancient relief/impressionist impasto oil painting (maybe more)
- possible neg: monkey
- Hifumi_AID
- recommended pos: dark purple hair, emerald eyes
- mayonaka_rr
- recommended pos: cg
- possible pos: dynamic posing, bikini, ponytail
- o81morimori
- possible pos: cg, in a messy apartment room with objects on the floor and the bed
- sabakichi
- possible pos 1: merging underwater, limited pallete, melting underwater, unstable outlines
- possible pos 2: rough sketch, limited pallete, ((unstable outlines)), monotone gradation, dynamic posing
- teftef
- possible pos: light skyblue hair, bun, retropunk gears of a factory
- violet_fizz
- recommended pos: beautiful face, grown up face, long eyes, expressionless
- possible pos: expressionless
# samples
The basic prompt is as follows.
However, to show the potential of these models as much as possible, many of them use additional positive tags (such as "in the style of") to get the results below (yes, use ``aitop (ARTIST)_style`` to get the finetuned style).
Many work better with the additional prompt ``beautiful face``. Generally speaking, prompting words close to the trained dataset will give you better results.
```
POS: masterpiece, best quality, 1girl, aitop (ARTIST)_style
NEG: nsfw, worst quality, low quality, medium quality, deleted, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digits, fewer digits, cropped, jpeg artifacts, signature, watermark, username, blurry, simple background
```
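For reference, a generation sketch with 🤗 diffusers. This assumes the artist checkpoint you pick can be loaded as a standard diffusers pipeline; depending on how the 8 models are laid out in the repo you may need a per-artist subfolder or a single-file checkpoint instead.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Phantom-Artist/phantom-diffusion-s3-the-last-8",  # may need subfolder=... for a specific artist
    torch_dtype=torch.float16,
).to("cuda")

prompt = "masterpiece, best quality, 1girl, aitop sabakichi_style"
negative = "nsfw, worst quality, low quality, bad anatomy, bad hands, text, watermark"
image = pipe(prompt, negative_prompt=negative).images[0]
image.save("sample.png")
```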
## atsuwo_AI



## fladdict



## Hifumi_AID


## mayonaka_rr



## o81morimori


## sabakichi




## teftef


## violet_fizz


|
logoyazilim/crnn_vgg16_bn_20230208-152217
|
logoyazilim
| null | 4 | 0 |
transformers
| 0 | null | true | false | false | null |
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,612 |
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
### Run Configuration
```json
{
    "arch": "crnn_vgg16_bn",
    "train_path": "doctr-train-10k",
    "val_path": null,
    "train_samples": 1000,
    "val_samples": 20,
    "font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
    "min_chars": 1,
    "max_chars": 12,
    "name": null,
    "epochs": 10,
    "batch_size": 32,
    "device": 0,
    "input_size": 32,
    "lr": 0.001,
    "weight_decay": 0,
    "workers": 4,
    "resume": null,
    "vocab": "turkish",
    "test_only": false,
    "show_samples": false,
    "wb": false,
    "push_to_hub": true,
    "pretrained": true,
    "sched": "cosine",
    "amp": false,
    "find_lr": false
}
```
|
javiervela/a2c-PandaReachDense-v2
|
javiervela
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, so check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumption: the checkpoint is stored as "a2c-PandaReachDense-v2.zip" in this repo
checkpoint = load_from_hub(repo_id="javiervela/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
kinkpunk/poca-MLAgents-SoccerTwos-v1.2
|
kinkpunk
| null | 20 | 290 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 864 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn .\config\poca\SoccerTwos.yaml --run-id="poca-SoccerTwos-v1.2" --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: kinkpunk/poca-MLAgents-SoccerTwos-v1.2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Svetlana0303/Regression_distilbert-base-uncased
|
Svetlana0303
|
distilbert
| 16 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,512 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Regression_distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1187
- Mse: 2.1187
- Mae: 1.3097
- R2: -0.0932
- Accuracy: 0.1429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:--------:|
| No log | 1.0 | 2 | 3.3933 | 3.3933 | 1.5228 | -2.1839 | 0.2857 |
| No log | 2.0 | 4 | 3.0571 | 3.0571 | 1.4011 | -1.8684 | 0.4286 |
| No log | 3.0 | 6 | 2.6747 | 2.6747 | 1.2786 | -1.5096 | 0.4286 |
| No log | 4.0 | 8 | 2.3024 | 2.3024 | 1.2088 | -1.1603 | 0.4286 |
| No log | 5.0 | 10 | 1.9496 | 1.9496 | 1.1459 | -0.8292 | 0.4286 |
| No log | 6.0 | 12 | 1.6637 | 1.6637 | 1.1225 | -0.5610 | 0.2857 |
| No log | 7.0 | 14 | 1.4167 | 1.4167 | 1.0938 | -0.3293 | 0.1429 |
| No log | 8.0 | 16 | 1.2365 | 1.2365 | 1.0609 | -0.1602 | 0.0 |
| No log | 9.0 | 18 | 1.1239 | 1.1239 | 1.0234 | -0.0545 | 0.0 |
| No log | 10.0 | 20 | 1.0879 | 1.0879 | 0.9906 | -0.0207 | 0.0 |
| No log | 11.0 | 22 | 1.1122 | 1.1122 | 0.9599 | -0.0436 | 0.2857 |
| No log | 12.0 | 24 | 1.1879 | 1.1879 | 0.9374 | -0.1145 | 0.2857 |
| No log | 13.0 | 26 | 1.2784 | 1.2784 | 0.9132 | -0.1995 | 0.4286 |
| No log | 14.0 | 28 | 1.3756 | 1.3756 | 0.8905 | -0.2907 | 0.4286 |
| No log | 15.0 | 30 | 1.4710 | 1.4710 | 0.9093 | -0.3802 | 0.4286 |
| No log | 16.0 | 32 | 1.5513 | 1.5513 | 0.9333 | -0.4555 | 0.4286 |
| No log | 17.0 | 34 | 1.6094 | 1.6094 | 0.9491 | -0.5101 | 0.5714 |
| No log | 18.0 | 36 | 1.6446 | 1.6446 | 0.9567 | -0.5431 | 0.5714 |
| No log | 19.0 | 38 | 1.6510 | 1.6510 | 0.9555 | -0.5491 | 0.5714 |
| No log | 20.0 | 40 | 1.6425 | 1.6425 | 0.9503 | -0.5412 | 0.5714 |
| No log | 21.0 | 42 | 1.6254 | 1.6254 | 0.9455 | -0.5251 | 0.5714 |
| No log | 22.0 | 44 | 1.6025 | 1.6025 | 0.9378 | -0.5036 | 0.5714 |
| No log | 23.0 | 46 | 1.5758 | 1.5758 | 0.9289 | -0.4786 | 0.5714 |
| No log | 24.0 | 48 | 1.5583 | 1.5583 | 0.9233 | -0.4622 | 0.5714 |
| No log | 25.0 | 50 | 1.5504 | 1.5504 | 0.9210 | -0.4547 | 0.5714 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
SRDdev/QABERT-small
|
SRDdev
|
distilbert
| 8 | 7 |
transformers
| 0 |
question-answering
| true | false | false | null |
['en']
|
['squad_v2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question-answering']
| false | true | true | 3,914 |
# QA-BERT
QA-BERT is a question-answering model. It is a lighter alternative to the larger question-answering models out there.
## Dataset
The Stanford Question Answering Dataset (SQuAD) is a widely used benchmark dataset for the task of machine reading comprehension. It consists of over 100,000 question-answer pairs based on a set of Wikipedia articles. The goal is to train models that can answer questions based on their understanding of the given text passages. SQuAD has played a significant role in advancing the state-of-the-art in this field and remains a popular choice for researchers and practitioners alike.
Due to GPU limitations, this version is trained on `30k samples` from the Stanford Question Answering Dataset.
<details>
<summary><i>Structure of the Data Dictonary</i></summary>
<!--All you need is a blank line-->
```json
{
    "data": [
        {
            "title": "Article Title",
            "paragraphs": [
                {
                    "context": "The context text of the paragraph",
                    "qas": [
                        {
                            "question": "The question asked about the context",
                            "id": "A unique identifier for the question",
                            "answers": [
                                {
                                    "text": "The answer to the question",
                                    "answer_start": "The starting index of the answer in the context"
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ],
    "version": "The version of the SQuAD dataset"
}
```
</details>
## Model
BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained transformer-based model for natural language processing tasks such as question answering. BERT is fine-tuned for question answering by adding a linear layer on top of the pre-trained BERT representations to predict the start and end of the answer in the input context. BERT has achieved state-of-the-art results on multiple benchmark datasets, including the Stanford Question Answering Dataset (SQuAD). The fine-tuning process allows BERT to effectively capture the relationships between questions and answers and generate accurate answers.
<img src="https://imgs.search.brave.com/F8m-nwp6EIG5vq--OmJLrCDpIkuX6tEQ_kyFKQjlUTs/rs:fit:1200:1200:1/g:ce/aHR0cHM6Ly9ibG9n/LmdyaWRkeW5hbWlj/cy5jb20vY29udGVu/dC9pbWFnZXMvMjAy/MC8xMC9TbGljZS0x/OC5wbmc">
For more detail about this read [Understanding QABERT](https://github.com/SRDdev/AnswerMind)
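The start/end prediction described above can be made concrete with the raw logits; the pipeline shown in the next section does the same thing for you:
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("SRDdev/QABERT-small")
model = AutoModelForQuestionAnswering.from_pretrained("SRDdev/QABERT-small")

question = "What is extracted?"
passage = "In extractive question answering, an answer span is extracted from a given text."
inputs = tokenizer(question, passage, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = int(outputs.start_logits.argmax())  # most likely first token of the answer
end = int(outputs.end_logits.argmax())      # most likely last token of the answer
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```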
## Inference
_Load model_
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
QAtokenizer = AutoTokenizer.from_pretrained("SRDdev/QABERT-small")
QAmodel = AutoModelForQuestionAnswering.from_pretrained("SRDdev/QABERT-small")
```
_context_
```text
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question-answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script.
```
_Build Pipeline_
```python
from transformers import pipeline

# `context` is the passage shown above, assigned to a Python string beforehand
ask = pipeline("question-answering", model=QAmodel, tokenizer=QAtokenizer)
result = ask(question="What is a good example of a question answering dataset?", context=context)
print(f"Answer: '{result['answer']}'")
```
## Contributing
Pull requests are welcome. For major changes, please open an issue first
to discuss what you would like to change.
Please make sure to update tests as appropriate.
## Citations
```
@misc{QA-BERT-small,
author = {Shreyas Dixit},
year = {2023},
url = {https://huggingface.co/SRDdev/QA-BERT-small}
}
```
|
plai-edp-test/bert_base_spanish_wwm_cased
|
plai-edp-test
|
bert
| 8 | 3 |
transformers
| 0 |
fill-mask
| true | false | false | null |
['es']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['masked-lm']
| false | true | true | 5,859 |
# BETO: Spanish BERT
BETO is a [BERT model](https://github.com/google-research/bert) trained on a [big Spanish corpus](https://github.com/josecannete/spanish-corpora). BETO is of size similar to a BERT-Base and was trained with the Whole Word Masking technique. Below you find Tensorflow and Pytorch checkpoints for the uncased and cased versions, as well as some results for Spanish benchmarks comparing BETO with [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) as well as other (not BERT-based) models.
## Download
| | | | |
|-|:--------:|:-----:|:----:|
|BETO uncased|[tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/pytorch_weights.tar.gz) | [vocab](./config/uncased_2M/vocab.txt), [config](./config/uncased_2M/config.json) |
|BETO cased| [tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/pytorch_weights.tar.gz) | [vocab](./config/cased_2M/vocab.txt), [config](./config/cased_2M/config.json) |
All models use a vocabulary of about 31k BPE subwords constructed using SentencePiece and were trained for 2M steps.
## Benchmarks
The following table shows some BETO results in the Spanish version of every task.
We compare BETO (cased and uncased) with the Best Multilingual BERT results that
we found in the literature (as of October 2019).
The table also shows some alternative methods for the same tasks (not necessarily BERT-based methods).
References for all methods can be found [here](#references).
|Task | BETO-cased | BETO-uncased | Best Multilingual BERT | Other results |
|-------|--------------:|--------------:|--------------------------:|-------------------------------:|
|[POS](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-1827) | **98.97** | 98.44 | 97.10 [2] | 98.91 [6], 96.71 [3] |
|[NER-C](https://www.kaggle.com/nltkdata/conll-corpora) | [**88.43**](https://github.com/gchaperon/beto-benchmarks/blob/master/conll2002/dev_results_beto-cased_conll2002.txt) | 82.67 | 87.38 [2] | 87.18 [3] |
|[MLDoc](https://github.com/facebookresearch/MLDoc) | [95.60](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-cased_mldoc.txt) | [**96.12**](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-uncased_mldoc.txt) | 95.70 [2] | 88.75 [4] |
|[PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx) | 89.05 | 89.55 | 90.70 [8] | |
|[XNLI](https://github.com/facebookresearch/XNLI) | **82.01** | 80.15 | 78.50 [2] | 80.80 [5], 77.80 [1], 73.15 [4]|
## Example of use
For further details on how to use BETO you can visit the [🤗Huggingface Transformers library](https://github.com/huggingface/transformers), starting by the [Quickstart section](https://huggingface.co/transformers/quickstart.html).
BETO models can be accessed simply as [`'dccuchile/bert-base-spanish-wwm-cased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) and [`'dccuchile/bert-base-spanish-wwm-uncased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) by using the Transformers library.
An example on how to download and use the models in this page can be found in [this colab notebook](https://colab.research.google.com/drive/1uRwg4UmPgYIqGYY4gW_Nsw9782GFJbPt).
(We will soon add a more detailed step-by-step tutorial in Spanish for newcomers 😉)
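For instance, a minimal fill-mask sketch with the Transformers pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="dccuchile/bert-base-spanish-wwm-cased")
print(unmasker("Los modelos de lenguaje aprenden a [MASK] el texto."))
```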
## Acknowledgments
We thank [Adereso](https://www.adere.so/) for kindly providing support for training BETO-uncased, and the [Millennium Institute for Foundational Research on Data](https://imfd.cl/en/)
that provided support for training BETO-cased. Also thanks to Google for helping us with the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) program.
## Citation
[Spanish Pre-Trained BERT Model and Evaluation Data](https://users.dcc.uchile.cl/~jperez/papers/pml4dc2020.pdf)
To cite this resource in a publication please use the following:
```
@inproceedings{CaneteCFP2020,
title={Spanish Pre-Trained BERT Model and Evaluation Data},
author={Cañete, José and Chaperon, Gabriel and Fuentes, Rodrigo and Ho, Jou-Hui and Kang, Hojin and Pérez, Jorge},
booktitle={PML4DC at ICLR 2020},
year={2020}
}
```
## License Disclaimer
The license CC BY 4.0 best describes our intentions for our work. However, we are not sure that all the datasets used to train BETO have licenses compatible with CC BY 4.0 (especially for commercial use). Please use at your own discretion and verify that the licenses of the original text resources match your needs.
## References
* [1] [Original Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md)
* [2] [Multilingual BERT on "Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT"](https://arxiv.org/pdf/1904.09077.pdf)
* [3] [Multilingual BERT on "How Multilingual is Multilingual BERT?"](https://arxiv.org/pdf/1906.01502.pdf)
* [4] [LASER](https://arxiv.org/abs/1812.10464)
* [5] [XLM (MLM+TLM)](https://arxiv.org/pdf/1901.07291.pdf)
* [6] [UDPipe on "75 Languages, 1 Model: Parsing Universal Dependencies Universally"](https://arxiv.org/pdf/1904.02099.pdf)
* [7] [Multilingual BERT on "Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation"](https://arxiv.org/pdf/1906.01569.pdf)
* [8] [Multilingual BERT on "PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification"](https://arxiv.org/abs/1908.11828)
|
PeterBanning71/t5-small-finetuned-xsum-finetuned-bioMedv2
|
PeterBanning71
|
t5
| 12 | 9 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 2,181 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-finetuned-bioMedv2
This model is a fine-tuned version of [PeterBanning71/t5-small-finetuned-xsum](https://huggingface.co/PeterBanning71/t5-small-finetuned-xsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1056
- Rouge1: 4.8565
- Rouge2: 0.4435
- Rougel: 3.9735
- Rougelsum: 4.415
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 1 | 8.4025 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 2.0 | 2 | 8.4025 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 3.0 | 3 | 7.7250 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 4.0 | 4 | 7.1617 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 5.0 | 5 | 6.7113 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 6.0 | 6 | 6.3646 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 7.0 | 7 | 6.1056 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 8.0 | 8 | 6.1056 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
plai-edp-test/distilbert_base_uncased
|
plai-edp-test
|
distilbert
| 8 | 0 |
transformers
| 0 |
fill-mask
| false | true | false |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['exbert']
| false | true | true | 8,470 |
# DistilBERT base model (uncased)
This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was
introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillation process can be found
[here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation). This model is uncased: it does
not make a difference between english and English.
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives:
- Distillation loss: the model was trained to return the same probabilities as the BERT base model.
- Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a
sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the
model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that
usually see the words one after the other, or from autoregressive models like GPT which internally mask the future
tokens. It allows the model to learn a bidirectional representation of the sentence.
- Cosine embedding loss: the model was also trained to generate hidden states as close as possible as the BERT base
model.
This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference or downstream tasks.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=distilbert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.05292855575680733,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.03968575969338417,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a business model. [SEP]",
'score': 0.034743521362543106,
'token': 2449,
'token_str': 'business'},
{'sequence': "[CLS] hello i'm a model model. [SEP]",
'score': 0.03462274372577667,
'token': 2944,
'token_str': 'model'},
{'sequence': "[CLS] hello i'm a modeling model. [SEP]",
'score': 0.018145186826586723,
'token': 11643,
'token_str': 'modeling'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import DistilBertTokenizer, TFDistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("The White man worked as a [MASK].")
[{'sequence': '[CLS] the white man worked as a blacksmith. [SEP]',
'score': 0.1235365942120552,
'token': 20987,
'token_str': 'blacksmith'},
{'sequence': '[CLS] the white man worked as a carpenter. [SEP]',
'score': 0.10142576694488525,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the white man worked as a farmer. [SEP]',
'score': 0.04985016956925392,
'token': 7500,
'token_str': 'farmer'},
{'sequence': '[CLS] the white man worked as a miner. [SEP]',
'score': 0.03932540491223335,
'token': 18594,
'token_str': 'miner'},
{'sequence': '[CLS] the white man worked as a butcher. [SEP]',
'score': 0.03351764753460884,
'token': 14998,
'token_str': 'butcher'}]
>>> unmasker("The Black woman worked as a [MASK].")
[{'sequence': '[CLS] the black woman worked as a waitress. [SEP]',
'score': 0.13283951580524445,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the black woman worked as a nurse. [SEP]',
'score': 0.12586183845996857,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the black woman worked as a maid. [SEP]',
'score': 0.11708822101354599,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the black woman worked as a prostitute. [SEP]',
'score': 0.11499975621700287,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the black woman worked as a housekeeper. [SEP]',
'score': 0.04722772538661957,
'token': 22583,
'token_str': 'housekeeper'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset
consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
(excluding lists, tables and headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
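As a rough sketch (illustrative only, not the actual training code), the 15% selection and the 80/10/10 replacement rule can be written as:
```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Apply the BERT-style masking rule to a list of token ids (sketch)."""
    labels = [-100] * len(token_ids)      # -100 marks positions ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:    # 15% of the tokens are selected
            labels[i] = tok
            r = random.random()
            if r < 0.8:                   # 80%: replace with [MASK]
                token_ids[i] = mask_id
            elif r < 0.9:                 # 10%: replace with a random token
                token_ids[i] = random.randrange(vocab_size)
            # remaining 10%: keep the original token
    return token_ids, labels
```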
### Pretraining
The model was trained on 8 16 GB V100 GPUs for 90 hours. See the
[training code](https://github.com/huggingface/transformers/tree/master/examples/distillation) for all hyperparameters
details.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| | 82.2 | 88.5 | 89.2 | 91.3 | 51.3 | 85.8 | 87.5 | 59.9 |
### BibTeX entry and citation info
```bibtex
@article{Sanh2019DistilBERTAD,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
journal={ArXiv},
year={2019},
volume={abs/1910.01108}
}
```
<a href="https://huggingface.co/exbert/?model=distilbert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
pneubauer/basic-poca-SoccerTwos
|
pneubauer
| null | 7 | 287 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 849 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: pneubauer/basic-poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
fathyshalab/massive_social-roberta-large-v1-1
|
fathyshalab
|
roberta
| 14 | 2 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,456 |
# fathyshalab/massive_social-roberta-large-v1-1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_social-roberta-large-v1-1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
akoshel/dqn-SpaceInvadersNoFrameskip-v4
|
akoshel
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,213 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga akoshel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga akoshel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga akoshel
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
PeterBanning71/t5-small-finetuned-xsum-finetuned-bioMedv3
|
PeterBanning71
|
t5
| 12 | 8 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 2,181 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-finetuned-bioMedv3
This model is a fine-tuned version of [PeterBanning71/t5-small-finetuned-xsum](https://huggingface.co/PeterBanning71/t5-small-finetuned-xsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1056
- Rouge1: 4.8565
- Rouge2: 0.4435
- Rougel: 3.9735
- Rougelsum: 4.415
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 1 | 8.4025 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 2.0 | 2 | 8.4025 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 3.0 | 3 | 7.7250 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 4.0 | 4 | 7.1617 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 5.0 | 5 | 6.7113 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 6.0 | 6 | 6.3646 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 7.0 | 7 | 6.1056 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
| No log | 8.0 | 8 | 6.1056 | 4.8565 | 0.4435 | 3.9735 | 4.415 | 19.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
hectorjelly/Kats_Komets
|
hectorjelly
| null | 20 | 281 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 841 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: hectorjelly/Kats_Komets
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Axel578/flan_t5_summarization
|
Axel578
|
t5
| 15 | 15 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,317 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan_t5_summarization
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6162
- Rouge1: 15.9418
- Rouge2: 7.4447
- Rougel: 15.5655
- Rougelsum: 15.5835
- Gen Len: 18.7313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 272 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7405 | 2.0 | 544 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7405 | 3.0 | 816 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7453 | 4.0 | 1088 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7453 | 5.0 | 1360 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7372 | 6.0 | 1632 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7372 | 7.0 | 1904 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7436 | 8.0 | 2176 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7436 | 9.0 | 2448 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7425 | 10.0 | 2720 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
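No usage snippet is included above; a minimal summarization sketch (the input text is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Axel578/flan_t5_summarization")
text = "Paste the document you want to condense here; the model returns a short abstractive summary."
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```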
|
mtlulka/poca-SoccerTwos
|
mtlulka
| null | 20 | 287 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 841 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: mtlulka/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
akiFQC/japanese-dialogpt-small-aozora
|
akiFQC
|
gpt2
| 9 | 20 |
transformers
| 0 |
conversational
| true | false | false | null |
['ja']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['conversational', 'ja', 'japanese', 'gpt2', 'text-generation', 'lm', 'nlp']
| false | true | true | 939 |
# Japanese DialoGPT trained with Aozora
**(ja) 青空文庫のセリフで学習した日本語のDialoGPT Smallです**
**(en) Japanese DialoGPT Small trained on Aozora Bunko.**
## [Demo](https://huggingface.co/spaces/akiFQC/Japanese_DialoGPT_small_Aozora)
The demo on this page does not work very well. I recommend trying it on the [Hugging Face Spaces version](https://huggingface.co/spaces/akiFQC/Japanese_DialoGPT_small_Aozora).
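A minimal generation sketch, assuming the checkpoint resolves with the generic Auto classes (the tokenizer may need to follow rinna/japanese-gpt2-small's settings, e.g. a slow tokenizer):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("akiFQC/japanese-dialogpt-small-aozora", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("akiFQC/japanese-dialogpt-small-aozora")

prompt = "こんにちは、調子はどう?"  # "Hello, how are you?"
inputs = tokenizer(prompt + tokenizer.eos_token, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```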
## Reference
- [Aozora-bunko](https://www.aozora.gr.jp/)
- Japanese public domain books.
- I extracted the dialogue part from the books and used it as the training data.
- [japanese-gpt2-small](https://huggingface.co/rinna/japanese-gpt2-small)
- A Japanese GPT-2 model. I used the small model because of the limited GPU memory of my desktop PC (a single RTX 3060) 😢.
- I used this model as a pre-trained model.
- [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536)
|
jafonz/en_pipeline
|
jafonz
| null | 16 | 62 |
spacy
| 0 |
token-classification
| false | false | false | null |
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['spacy', 'token-classification']
| false | true | true | 678 |
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.4,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `TEMTEM` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 100.00 |
| `ENTS_P` | 100.00 |
| `ENTS_R` | 100.00 |
| `TOK2VEC_LOSS` | 0.00 |
| `NER_LOSS` | 0.00 |
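A usage sketch; the wheel URL follows spaCy's default packaging convention on the Hub and is an assumption, so check the repo's file list for the exact name:
```python
# Install the packaged pipeline first (wheel name is an assumption):
#   pip install "en_pipeline @ https://huggingface.co/jafonz/en_pipeline/resolve/main/en_pipeline-any-py3-none-any.whl"
import spacy

nlp = spacy.load("en_pipeline")
doc = nlp("I caught a rare Temtem near the lake.")
print([(ent.text, ent.label_) for ent in doc.ents])  # expects TEMTEM entities
```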
|
azaazato/q-FrozenLake-v1-4x4-noSlippery
|
azaazato
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 397 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="azaazato/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
azaazato/q-Taxi-v3-v1
|
azaazato
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 367 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="azaazato/q-Taxi-v3-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
azaazato/q-Taxi-v3
|
azaazato
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 364 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="azaazato/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
fathyshalab/massive_social-roberta-large-v1-2
|
fathyshalab
|
roberta
| 14 | 2 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,456 |
# fathyshalab/massive_social-roberta-large-v1-2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_social-roberta-large-v1-2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
jojoUla/bert-large-cased-sigir-support-no-label-20-sigir-tune2nd-LR10-labelled-30
|
jojoUla
|
bert
| 14 | 3 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,787 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-sigir-support-no-label-20-sigir-tune2nd-LR10-labelled-30
This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-no-label-20](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-no-label-20) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1303 | 1.0 | 1 | 3.2415 |
| 2.3107 | 2.0 | 2 | 2.1225 |
| 1.2824 | 3.0 | 3 | 2.2623 |
| 1.0548 | 4.0 | 4 | 0.5449 |
| 1.1366 | 5.0 | 5 | 1.1446 |
| 0.5947 | 6.0 | 6 | 0.3811 |
| 0.4889 | 7.0 | 7 | 1.6445 |
| 1.2689 | 8.0 | 8 | 1.7214 |
| 0.8074 | 9.0 | 9 | 2.3152 |
| 0.7084 | 10.0 | 10 | 0.9325 |
| 1.0307 | 11.0 | 11 | 2.4217 |
| 0.7119 | 12.0 | 12 | 2.6455 |
| 1.0052 | 13.0 | 13 | 1.1594 |
| 0.7125 | 14.0 | 14 | 1.2795 |
| 0.4732 | 15.0 | 15 | 0.1245 |
| 0.8829 | 16.0 | 16 | 1.8585 |
| 0.7079 | 17.0 | 17 | 1.6644 |
| 0.6243 | 18.0 | 18 | 1.6117 |
| 1.2438 | 19.0 | 19 | 2.3044 |
| 1.0812 | 20.0 | 20 | 4.5037 |
| 0.7003 | 21.0 | 21 | 1.5862 |
| 0.867 | 22.0 | 22 | 2.1851 |
| 0.9098 | 23.0 | 23 | 1.6055 |
| 0.6214 | 24.0 | 24 | 2.6699 |
| 0.282 | 25.0 | 25 | 1.3515 |
| 0.1888 | 26.0 | 26 | 2.3864 |
| 0.6863 | 27.0 | 27 | 1.2444 |
| 0.8527 | 28.0 | 28 | 1.9603 |
| 0.9416 | 29.0 | 29 | 3.7045 |
| 0.8302 | 30.0 | 30 | 0.9336 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ahmad1289/reinforce-pixelcopter-v1
|
ahmad1289
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sheraz179/videomae-base-finetuned
|
sheraz179
|
videomae
| 7 | 0 |
transformers
| 0 |
video-classification
| true | false | false |
cc-by-nc-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,486 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8337
- Accuracy: 0.4821
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1110
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7399 | 0.1 | 112 | 2.6561 | 0.0536 |
| 2.6817 | 1.1 | 224 | 2.5898 | 0.0893 |
| 2.5075 | 2.1 | 336 | 2.5883 | 0.125 |
| 2.0081 | 3.1 | 448 | 2.4125 | 0.1429 |
| 1.4701 | 4.1 | 560 | 2.2446 | 0.2857 |
| 1.328 | 5.1 | 672 | 2.3491 | 0.2679 |
| 0.9474 | 6.1 | 784 | 1.7119 | 0.4643 |
| 0.5094 | 7.1 | 896 | 1.7790 | 0.4464 |
| 0.2963 | 8.1 | 1008 | 1.8519 | 0.4821 |
| 0.0614 | 9.09 | 1110 | 1.8337 | 0.4821 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pfunk/Pong-v4-DQPN_p100_pt0.1-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,002 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p100_pt0.1.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p100_pt0.1]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p100_pt0.1 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_pt0.1-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_pt0.1-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_pt0.1-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p100_pt0.1 --start-policy-f 100000 --end-policy-f 100000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 100000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p100_pt0.1',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 0.1,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 100000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
mili7522/dqn-SpaceInvadersNoFrameskip-v4
|
mili7522
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,218 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mili7522 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mili7522 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mili7522
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
muhtasham/santacoder-finetuned-the-stack-cobol
|
muhtasham
|
gpt2
| 17 | 7 |
transformers
| 1 |
text-generation
| true | false | false |
openrail
|
['code']
|
['bigcode/the-stack-dedup']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'code', 'codegen', 'assembly']
| true | true | true | 3,231 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# santacoder-finetuned-the-stack-cobol
This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on the COBOL subset of [The Stack](https://huggingface.co/datasets/bigcode/the-stack-dedup).
It achieves the following results on the evaluation set:
- Loss: 0.7161
## Model description
The [SantaCoder](https://huggingface.co/bigcode/santacoder) models are a series of 1.1B parameter models trained on the Python, Java, and JavaScript subset of [The Stack (v1.1)](https://huggingface.co/datasets/bigcode/the-stack) (which excluded opt-out requests).
The main model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150) and was trained with near-deduplication and a comment-to-code ratio as filtering criteria, using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255).
In addition, there are several models that were trained on datasets with different filter parameters and with architecture and objective variations.
## Intended uses & limitations
The predominant language in the source data is English, although other languages are also present. The model can generate code snippets given some context, but the generated code is not guaranteed to work as intended: it can be inefficient and may contain bugs or exploits.
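A minimal generation sketch is shown below. The prompt is a hypothetical COBOL snippet chosen for illustration, and `trust_remote_code=True` follows the base SantaCoder card; the author did not specify generation settings:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "muhtasham/santacoder-finetuned-the-stack-cobol"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

prompt = "IDENTIFICATION DIVISION.\nPROGRAM-ID. HELLO.\n"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```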
## Training and evaluation data
The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as from other code snippets. **This is the near-deduplicated version with 3TB of data.**
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3911 | 0.1 | 100 | 1.1141 |
| 0.9478 | 0.2 | 200 | 0.9735 |
| 0.784 | 0.3 | 300 | 0.8497 |
| 0.4702 | 0.4 | 400 | 0.7686 |
| 0.6133 | 0.5 | 500 | 0.7375 |
| 0.5396 | 0.6 | 600 | 0.7265 |
| 0.3937 | 0.7 | 700 | 0.6952 |
| 0.5691 | 0.8 | 800 | 0.7059 |
| 0.6366 | 0.9 | 900 | 0.7069 |
| 0.3661 | 1.0 | 1000 | 0.7161 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
regnore/Amemori_Sayo_LoRA
|
regnore
| null | 44 | 0 | null | 0 | null | false | false | false |
cc-by-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 340 |
+ Amemori_Sayo:

+ Amemori_Sayo_NF:

+ additional prompts that may help you get better results:
`black hair`, `sailor dress`, `double braids`, `straight on`
|
pyflynn/q-FrozenLake-v1-4x4-noSlippery
|
pyflynn
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 396 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="pyflynn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
pfunk/Pong-v4-DQPN_p500_pt0.1_tt0.1-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,050 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p500_pt0.1_tt0.1.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p500_pt0.1_tt0.1]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p500_pt0.1_tt0.1 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_pt0.1_tt0.1-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_pt0.1_tt0.1-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_pt0.1_tt0.1-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p500_pt0.1_tt0.1 --start-policy-f 500000 --end-policy-f 500000 --evaluation-fraction 1.00 --target-tau 0.1 --policy-tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 500000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p500_pt0.1_tt0.1',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 0.1,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 500000,
'target_network_frequency': 1000,
'target_tau': 0.1,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
DaniilSirota/ppo-SnowballTarget
|
DaniilSirota
| null | 20 | 0 |
ml-agents
| 1 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 859 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: DaniilSirota/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
apatidar0/my_awesome_billsum_model
|
apatidar0
|
t5
| 12 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['billsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 924 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mshibatatt/q-FrozenLake-v1-4x4-noSlippery
|
mshibatatt
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 399 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="mshibatatt/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mshibatatt/q-Taxi-v3
|
mshibatatt
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 366 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="mshibatatt/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
irenekar/q-FrozenLake-v1-4x4-noSlippery
|
irenekar
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 397 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="irenekar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
irenekar/taxiv3
|
irenekar
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 361 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="irenekar/taxiv3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DeepaKrish/distilbert-base-uncased-finetuned-squad
|
DeepaKrish
|
distilbert
| 18 | 1 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,225 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 78 | 0.7144 |
| No log | 2.0 | 156 | 0.3996 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.9.0
- Datasets 2.5.1
- Tokenizers 0.13.2
|
mwissing/dqn-SpaceInvadersNoFrameskip-v4
|
mwissing
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,217 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mwissing -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mwissing -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mwissing
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
quartz14/Reinforce-cartpole
|
quartz14
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
frangiral/dqn-SpaceInvadersNoFrameskip-v4
|
frangiral
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,219 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga frangiral -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga frangiral -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga frangiral
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
pyflynn/taxi-v3-model-v0
|
pyflynn
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 370 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="pyflynn/taxi-v3-model-v0", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
rishabhjain16/whisper_tiny_to_pf10h
|
rishabhjain16
|
whisper
| 23 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,698 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-tiny
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2034
- Wer: 5.3823
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
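A hedged sketch of how these settings map onto the Hugging Face `Seq2SeqTrainingArguments` is shown below; `output_dir` and the mixed-precision flag are assumptions, not values reported by the author:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-pf10h",   # hypothetical output directory
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                           # "Native AMP" mixed precision
)
```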
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0814 | 10.0 | 500 | 0.1915 | 6.6701 |
| 0.0045 | 20.0 | 1000 | 0.1816 | 5.5088 |
| 0.0016 | 30.01 | 1500 | 0.1924 | 5.5014 |
| 0.0009 | 40.01 | 2000 | 0.1959 | 5.5609 |
| 0.0006 | 51.0 | 2500 | 0.1989 | 5.4195 |
| 0.0005 | 61.0 | 3000 | 0.2014 | 5.4418 |
| 0.0004 | 71.01 | 3500 | 0.2030 | 5.3674 |
| 0.0004 | 81.01 | 4000 | 0.2034 | 5.3823 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
rishabhjain16/whisper_tiny_en_to_pf10h
|
rishabhjain16
|
whisper
| 23 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,707 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-tiny.en
This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2166
- Wer: 6.5585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1174 | 10.0 | 500 | 0.1975 | 6.4170 |
| 0.0034 | 20.0 | 1000 | 0.1896 | 5.2259 |
| 0.0012 | 30.01 | 1500 | 0.2040 | 6.6478 |
| 0.0007 | 40.01 | 2000 | 0.2080 | 6.6404 |
| 0.0005 | 51.0 | 2500 | 0.2117 | 6.5957 |
| 0.0004 | 61.0 | 3000 | 0.2139 | 6.5510 |
| 0.0003 | 71.01 | 3500 | 0.2162 | 6.5883 |
| 0.0003 | 81.01 | 4000 | 0.2166 | 6.5585 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
rishabhjain16/whisper_base_to_pf10h
|
rishabhjain16
|
whisper
| 23 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,698 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-base
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1929
- Wer: 4.3549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0326 | 10.0 | 500 | 0.1670 | 5.0398 |
| 0.0019 | 20.0 | 1000 | 0.1728 | 4.5113 |
| 0.0008 | 30.01 | 1500 | 0.1820 | 4.4071 |
| 0.0005 | 40.01 | 2000 | 0.1847 | 4.3773 |
| 0.0004 | 51.0 | 2500 | 0.1886 | 4.3549 |
| 0.0003 | 61.0 | 3000 | 0.1910 | 4.3475 |
| 0.0003 | 71.01 | 3500 | 0.1925 | 4.3549 |
| 0.0002 | 81.01 | 4000 | 0.1929 | 4.3549 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
rishabhjain16/whisper_base_en_to_pf10h
|
rishabhjain16
|
whisper
| 23 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,707 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-base.en
This model is a fine-tuned version of [openai/whisper-base.en](https://huggingface.co/openai/whisper-base.en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1913
- Wer: 3.9530
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0489 | 10.0 | 500 | 0.1624 | 8.5536 |
| 0.0019 | 20.0 | 1000 | 0.1682 | 4.0051 |
| 0.0007 | 30.01 | 1500 | 0.1782 | 4.1167 |
| 0.0004 | 40.01 | 2000 | 0.1823 | 4.0497 |
| 0.0003 | 51.0 | 2500 | 0.1861 | 3.9827 |
| 0.0002 | 61.0 | 3000 | 0.1888 | 3.9753 |
| 0.0002 | 71.01 | 3500 | 0.1907 | 3.9678 |
| 0.0002 | 81.01 | 4000 | 0.1913 | 3.9530 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
rishabhjain16/whisper_small_en_to_pf10h
|
rishabhjain16
|
whisper
| 23 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,710 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-small.en
This model is a fine-tuned version of [openai/whisper-small.en](https://huggingface.co/openai/whisper-small.en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1764
- Wer: 2.9777
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0179 | 10.0 | 500 | 0.1422 | 3.4691 |
| 0.0006 | 20.0 | 1000 | 0.1530 | 3.0001 |
| 0.0004 | 30.01 | 1500 | 0.1631 | 3.0150 |
| 0.0002 | 40.01 | 2000 | 0.1672 | 2.9777 |
| 0.0001 | 51.0 | 2500 | 0.1717 | 2.9703 |
| 0.0001 | 61.0 | 3000 | 0.1742 | 2.9926 |
| 0.0001 | 71.01 | 3500 | 0.1759 | 2.9852 |
| 0.0001 | 81.01 | 4000 | 0.1764 | 2.9777 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
rishabhjain16/whisper_small_to_pf10h
|
rishabhjain16
|
whisper
| 23 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,723 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1815
- Wer: 206.4766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0065 | 10.0 | 500 | 0.1476 | 109.2459 |
| 0.0006 | 20.0 | 1000 | 0.1683 | 144.5619 |
| 0.0012 | 30.01 | 1500 | 0.1623 | 205.1738 |
| 0.0002 | 40.01 | 2000 | 0.1710 | 152.7209 |
| 0.0001 | 51.0 | 2500 | 0.1760 | 171.9869 |
| 0.0001 | 61.0 | 3000 | 0.1789 | 193.3447 |
| 0.0001 | 71.01 | 3500 | 0.1808 | 201.9206 |
| 0.0001 | 81.01 | 4000 | 0.1815 | 206.4766 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
rishabhjain16/whisper_medium_to_pf10h
|
rishabhjain16
|
whisper
| 23 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,725 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1594
- Wer: 21.8343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0269 | 5.0 | 500 | 0.1069 | 118.0302 |
| 0.0049 | 10.01 | 1000 | 0.1263 | 135.2788 |
| 0.0009 | 15.01 | 1500 | 0.1355 | 94.5731 |
| 0.0001 | 20.01 | 2000 | 0.1413 | 7.5188 |
| 0.0001 | 25.01 | 2500 | 0.1515 | 7.2508 |
| 0.0001 | 30.02 | 3000 | 0.1568 | 24.8493 |
| 0.0 | 35.02 | 3500 | 0.1588 | 22.1470 |
| 0.0 | 40.02 | 4000 | 0.1594 | 21.8343 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
rishabhjain16/whisper_medium_en_to_pf10h
|
rishabhjain16
|
whisper
| 23 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,713 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-medium.en
This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1748
- Wer: 2.7097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0329 | 5.0 | 500 | 0.1343 | 4.0125 |
| 0.0013 | 10.01 | 1000 | 0.1531 | 2.8810 |
| 0.0002 | 15.01 | 1500 | 0.1609 | 2.7321 |
| 0.0002 | 20.01 | 2000 | 0.1608 | 2.7544 |
| 0.0001 | 25.01 | 2500 | 0.1688 | 2.7321 |
| 0.0002 | 30.02 | 3000 | 0.1722 | 2.7172 |
| 0.0001 | 35.02 | 3500 | 0.1742 | 2.7172 |
| 0.0001 | 40.02 | 4000 | 0.1748 | 2.7097 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
rishabhjain16/whisper_large_to_pf10h
|
rishabhjain16
|
whisper
| 23 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,711 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-large
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1412
- Wer: 6.7893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0475 | 2.03 | 500 | 0.1095 | 62.6591 |
| 0.0201 | 5.01 | 1000 | 0.1225 | 16.9285 |
| 0.0044 | 7.03 | 1500 | 0.1312 | 3.6701 |
| 0.0026 | 10.01 | 2000 | 0.1278 | 7.9506 |
| 0.0001 | 12.04 | 2500 | 0.1323 | 17.9186 |
| 0.0001 | 15.02 | 3000 | 0.1386 | 16.3031 |
| 0.0001 | 17.05 | 3500 | 0.1403 | 6.7074 |
| 0.0 | 20.02 | 4000 | 0.1412 | 6.7893 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
rishabhjain16/whisper_large_v2_to_pf10h
|
rishabhjain16
|
whisper
| 23 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,732 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-large-v2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1534
- Wer: 145.6786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0799 | 2.03 | 500 | 0.1010 | 28.1322 |
| 0.0239 | 5.01 | 1000 | 0.1388 | 161.0139 |
| 0.0066 | 7.03 | 1500 | 0.1221 | 99.3747 |
| 0.0007 | 10.01 | 2000 | 0.1295 | 250.8822 |
| 0.0007 | 12.04 | 2500 | 0.1423 | 77.2203 |
| 0.0003 | 15.02 | 3000 | 0.1480 | 149.4380 |
| 0.0001 | 17.05 | 3500 | 0.1518 | 141.5842 |
| 0.0001 | 20.02 | 4000 | 0.1534 | 145.6786 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
SashkaHavr/NLP4Web_Home_Exercise6_Group13
|
SashkaHavr
|
bert
| 19 | 19 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 980 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP4Web_Home_Exercise6_Group13
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nasheed/rl-course
|
nasheed
| null | 12 | 1 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is a placeholder; check the repo's files for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is a placeholder; replace it with the .zip stored in this repo
checkpoint = load_from_hub("nasheed/rl-course", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
summervent/speller-t5-909_both_
|
summervent
|
t5
| 17 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,265 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speller-t5-909_both_
This model is a fine-tuned version of [sberbank-ai/ruT5-large](https://huggingface.co/sberbank-ai/ruT5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0771
- Rouge1: 20.0565
- Rouge2: 7.9096
- Rougel: 20.1271
- Rougelsum: 20.1977
- Gen Len: 41.2712
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 0.1653 | 0.1 | 1500 | 0.1176 | 19.8446 | 7.4011 | 19.8446 | 19.9153 | 41.2712 |
| 0.2083 | 0.2 | 3000 | 0.1023 | 19.7034 | 8.7571 | 19.7034 | 19.774 | 41.1186 |
| 0.1617 | 0.31 | 4500 | 0.0975 | 19.2797 | 7.9096 | 19.2797 | 19.209 | 41.2797 |
| 0.17 | 0.41 | 6000 | 0.0949 | 20.5508 | 8.7571 | 20.5862 | 20.6215 | 41.2712 |
| 0.1416 | 0.51 | 7500 | 0.0871 | 20.0565 | 7.9096 | 20.1271 | 20.1977 | 41.1017 |
| 0.1409 | 0.61 | 9000 | 0.0807 | 20.0565 | 7.9096 | 20.1271 | 20.1977 | 41.1695 |
| 0.1094 | 0.72 | 10500 | 0.0746 | 19.9859 | 7.6271 | 19.9506 | 19.9859 | 41.2627 |
| 0.1256 | 0.82 | 12000 | 0.0754 | 19.9859 | 7.6271 | 19.9506 | 19.9859 | 41.2119 |
| 0.1206 | 0.92 | 13500 | 0.0771 | 20.0565 | 7.9096 | 20.1271 | 20.1977 | 41.2712 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
fathyshalab/massive_social-roberta-large-v1-2-0.13
|
fathyshalab
|
roberta
| 14 | 2 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,466 |
# fathyshalab/massive_social-roberta-large-v1-2-0.13
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_social-roberta-large-v1-2-0.13")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
pabloac31/ppo-SnowballTarget
|
pabloac31
| null | 20 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 856 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: pabloac31/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
BSC-LT/sciroshot
|
BSC-LT
|
roberta
| 11 | 8 |
transformers
| 0 |
zero-shot-classification
| true | false | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['zero-shot', 'text-classification', 'science', 'mag']
| false | true | true | 8,424 |
# SCIroShot
## Overview
<details>
<summary>Click to expand</summary>
- **Model type:** Language Model
- **Architecture:** RoBERTa-large
- **Language:** English
- **License:** Apache 2.0
- **Task:** Zero-Shot Text Classification
- **Data:** Microsoft Academic Graph
- **Additional Resources:**
- [Paper]() <-- WiP (soon to be published in EACL 2023)
- [GitHub](https://github.com/TeMU-BSC/sciroshot)
</details>
## Model description
SCIroShot is an entailment-based Zero-Shot Text Classification model that
has been fine-tuned using a self-made dataset composed of scientific articles
from [Microsoft Academic Graph](https://www.microsoft.com/en-us/research/project/microsoft-academic-graph/)
(MAG). The resulting model achieves SOTA
performance in the scientific domain and very competitive results in other areas.
## Intended Usage
This model is intended to be used for zero-shot text classification in English.
## How to use
```python
from transformers import pipeline
zstc = pipeline("zero-shot-classification", model="BSC-LT/sciroshot")
sentence = "Leo Messi is the best player ever."
candidate_labels = ["politics", "science", "sports", "environment"]
template = "This example is {}"
output = zstc(sentence, candidate_labels, hypothesis_template=template, multi_label=False)
print(output)
print(f'Predicted class: {output["labels"][0]}')
```
## Limitations and bias
No measures have been taken to estimate the bias and toxicity embedded in the model.
Even though the fine-tuning data (which is of a scientific nature) may seem harmless, it is important to note that the corpus used to pre-train the vanilla model is very likely to contain a lot of unfiltered content from the internet, as stated in the [RoBERTa-large model card](https://huggingface.co/roberta-large#limitations-and-bias).
## Training
### Training data
Our data builds on top of scientific-domain
annotated data from Microsoft Academic Graph (MAG).
This database consists of a heterogeneous
graph with billions of records from both scientific
publications and patents, in addition to metadata
information such as the authors, institutions, journals,
conferences and their citation relationships.
The documents are organized in a six-level hierarchical
structure of scientific concepts, where the two
top-most levels are manually curated in order to
guarantee a high level of accuracy.
To create the training corpus, a random sample of
scientific articles with a publication year between
2000 and 2021 were retrieved from MAG with their respective
titles and abstracts in English. This results in over 2M documents
with their corresponding Field Of Study, which was obtained from
the 1-level MAG taxonomy (292 possible classes, such as "Computational biology"
or "Transport Engineering").
The fine-tuning dataset was constructed in a weakly supervised
manner by converting text classification data to the entailment format.
Using the relationship between scientific texts and their matching concepts
in the 1-level MAG taxonomy, we generate the premise-hypothesis pairs
corresponding to the entailment label. Conversely, we generate the pairs
for the neutral label by pairing each text with scientific concepts to which
it is not actually matched.
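As an illustration, the following minimal sketch shows how such premise-hypothesis pairs could be generated from labelled documents; the toy records, field names and hypothesis template are assumptions for the example, not the exact data or wording used to build the SCIroShot corpus.

```python
import random

# Toy corpus: each record pairs an abstract with its level-1 MAG field of study.
documents = [
    {"abstract": "We study protein folding with deep generative models.",
     "field": "Computational biology"},
    {"abstract": "A new routing scheme reduces congestion in urban rail networks.",
     "field": "Transport engineering"},
]
all_fields = sorted({doc["field"] for doc in documents})

def make_entailment_pairs(docs, fields, template="This example is {}"):
    pairs = []
    for doc in docs:
        # Entailment pair: the text together with its true field of study.
        pairs.append({"premise": doc["abstract"],
                      "hypothesis": template.format(doc["field"]),
                      "label": "entailment"})
        # Neutral pair: the text together with a field it is NOT matched to.
        wrong_field = random.choice([f for f in fields if f != doc["field"]])
        pairs.append({"premise": doc["abstract"],
                      "hypothesis": template.format(wrong_field),
                      "label": "neutral"})
    return pairs

for pair in make_entailment_pairs(documents, all_fields):
    print(pair)
```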
### Training procedure
The newly created scientific dataset described in the previous section
was used to fine-tune a 355M-parameter RoBERTa model on the entailment task.
To do so, the model has to compute the entailment score between every text that
is fed to it and all candidate labels. The final prediction would be the
highest-scoring class in a single-label classification setup, or the N classes
above a certain threshold in a multi-label scenario.
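For the multi-label scenario, a sketch along these lines can be used with the same pipeline as above; the abstract, candidate labels and the 0.5 threshold are arbitrary choices for illustration, not values recommended by the authors.

```python
from transformers import pipeline

zstc = pipeline("zero-shot-classification", model="BSC-LT/sciroshot")

abstract = "We present a graph neural network for predicting protein-ligand binding affinity."
candidate_labels = ["Computational biology", "Machine learning", "Transport engineering"]

# multi_label=True scores each label independently instead of normalising over all candidates.
output = zstc(abstract, candidate_labels,
              hypothesis_template="This example is {}", multi_label=True)

# Keep every class whose score exceeds an arbitrary threshold (0.5 is only an illustration).
threshold = 0.5
predicted = [label for label, score in zip(output["labels"], output["scores"]) if score > threshold]
print(predicted)
```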
A subset of 52 labels from the training data were kept apart so that they
could be used as a development set of fully-unseen classes.
As a novelty, validation was not performed on the entailment task (which is only used as a proxy)
but directly on the target text classification task. This allows training to be stopped at the right
time via early stopping, which prevents the model from "overfitting" to the training task. This method
counteracts an effect observed empirically during experimentation: after a certain point the model
can start to worsen on the target task (ZSTC) while still improving on the training task (RTE).
Simply shortening the training time therefore led to a boost in performance.
Read the paper for more details on the methodology and the analysis of RTE/ZSTC correlation.
## Evaluation
### Evaluation data
The model's performance was evaluated on a collection of disciplinary-labeled textual datasets, both from the scientific domain (closer to training data) and the general domain (to assess generalizability).
The following table provides an overview of the number of examples and labels for each dataset:
| Dataset | Labels | Size |
|------------------|--------|--------|
| arXiv | 11 | 3,838 |
| SciDocs-MeSH | 11 | 16,433 |
| SciDocs-MAG | 19 | 17,501 |
| Konstanz | 24 | 10,000 |
| Elsevier | 26 | 14,738 |
| PubMed | 109 | 5,000 |
| Topic Categorization (Yahoo! Answers) | 10 | 60,000 |
| Emotion Detection (UnifyEmotion) | 10 | 15,689 |
| Situation Frame Detection (Situation Typing) | 12 | 3,311 |
Please refer to the paper for further details on each particular dataset.
### Evaluation results
These are the official results reported in the paper:
#### Scientific domain benchmark
| Model | arXiv | SciDocs-MeSH | SciDocs-MAG | Konstanz | Elsevier | PubMed |
|-------|-------|--------------|-------------|----------|----------|--------|
| [fb/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) | 33.28 | **66.18**🔥 | 51.77 | 54.62 | 28.41 | **31.59**🔥 |
| SCIroShot | **42.22**🔥 | 59.34 | **69.86**🔥 | **66.07**🔥 | **54.42**🔥 | 27.93 |
#### General domain benchmark
| Model | Topic | Emotion | Situation |
|-------|-------|---------|-----------|
| RTE [(Yin et al., 2019)](https://arxiv.org/pdf/1909.00161.pdf) | 43.8 | 12.6 | **37.2**🔥 |
| FEVER [(Yin et al., 2019)](https://arxiv.org/pdf/1909.00161.pdf) | 40.1 | 24.7 | 21.0 |
| MNLI [(Yin et al., 2019)](https://arxiv.org/pdf/1909.00161.pdf) | 37.9 | 22.3 | 15.4 |
| NSP [(Ma et al., 2021)](https://aclanthology.org/2021.acl-short.99.pdf) | 50.6 | 16.5 | 25.8 |
| NSP-Reverse [(Ma et al., 2021)](https://aclanthology.org/2021.acl-short.99.pdf) | 53.1 | 16.1 | 19.9 |
| SCIroShot | **59.08**🔥 | **24.94**🔥 | 27.42 |
All the numbers reported above represent **label-wise weighted F1** except for the Topic classification dataset, which is evaluated in terms of **accuracy** following the notation from [(Yin et al., 2019)](https://arxiv.org/pdf/1909.00161.pdf).
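For reference, label-wise weighted F1 is the support-weighted average of the per-class F1 scores:

$$
\mathrm{F1}_{\text{weighted}} = \sum_{c=1}^{C} \frac{n_c}{N}\,\mathrm{F1}_c
$$

where $n_c$ is the number of gold examples of class $c$, $N = \sum_c n_c$ is the total number of examples, and $C$ is the number of classes.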
## Additional information
### Authors
- SIRIS Lab, Research Division of SIRIS Academic.
- Language Technologies Unit, Barcelona Supercomputing Center.
### Contact
For further information, send an email to either <[email protected]> or <[email protected]>.
### License
This work is distributed under an [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Funding
This work was partially funded by two projects under the EU's H2020 Research and Innovation Programme:
- INODE (grant agreement No 863410).
- IntelComp (grant agreement No 101004870).
### Citation
```bibtex
Soon to be published in EACL 2023.
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The model published in this repository is intended for a generalist purpose
and is made available to third parties under an Apache v2.0 License.
Please keep in mind that the model may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model
(or a system based on it), or become users of the model itself, they should note that it is
their responsibility to mitigate the risks arising from its use and, in any event, to comply with
applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owners and creators of the model be liable for any results arising from the use made by third parties.
</details>
|
JD97/Riffusion_sentiment_LoRA
|
JD97
| null | 4 | 0 |
diffusers
| 1 |
text-to-image
| false | false | false |
mit
|
['en']
|
['gwkim22/spectro_caption_dataset', 'Chr0my/Epidemic_music']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'diffusion', 'riffusion', 'text-to-audio']
| false | true | true | 403 |
### Introduction
Riffusion with LoRA, fine-tuned on <code>Chr0my/Epidemic_music</code>. <br/>
This model was used during Naver Connect BoostCamp AI Tech (4th cohort), NLP track.
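A minimal sketch of how these weights might be loaded with 🤗 diffusers is shown below. It assumes that `riffusion/riffusion-model-v1` is the intended base checkpoint and that this repository stores the LoRA weights in a diffusers-compatible format; neither is stated explicitly in this card.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: riffusion/riffusion-model-v1 is the base model and the LoRA weights
# in this repository are stored in a format that load_lora_weights() understands.
pipe = StableDiffusionPipeline.from_pretrained(
    "riffusion/riffusion-model-v1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("JD97/Riffusion_sentiment_LoRA")

# Riffusion generates spectrogram images; converting them to audio is a separate step.
image = pipe("upbeat acoustic guitar, happy mood").images[0]
image.save("spectrogram.png")
```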
### Citation
~~~
@article{Forsgren_Martiros_2022,
author = {Forsgren, Seth* and Martiros, Hayk*},
title = {{Riffusion - Stable diffusion for real-time music generation}},
url = {https://riffusion.com/about},
year = {2022}
}
~~~
|
prompthero/openjourney-lora
|
prompthero
| null | 3 | 0 |
diffusers
| 6 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
| false | true | true | 2,060 |
# Openjourney LoRA - by [PromptHero](https://prompthero.com/?utm_source=huggingface&utm_medium=referral)
These are LoRA adaptation weights for [Openjourney](https://huggingface.co/prompthero/openjourney), trained by [@JHawkk](https://prompthero.com/JHawkk)
# Openjourney Links
- [Openjourney Dreambooth](https://huggingface.co/prompthero/openjourney)
- [Openjourney Fine tuned model](https://huggingface.co/prompthero/openjourney-v2)
# Want to learn AI art generation?
- [Crash course in AI art generation](https://prompthero.com/academy/prompt-engineering-course?utm_source=huggingface&utm_medium=referral)
- [Learn to fine-tune Stable Diffusion for photorealism](https://prompthero.com/academy/dreambooth-stable-diffusion-train-fine-tune-course?utm_source=huggingface&utm_medium=referral)
# How to use LoRAs in auto1111:
- Update the webui (run `git pull` or re-download it)
- Copy the LoRA file to `stable-diffusion-webui/models/lora`
- Select your LoRA in the web UI
- Make sure to change the weight (by default it's `:1`, which is usually too high; see the note below)
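As a reference, a LoRA is typically activated in auto1111 by adding a tag such as `<lora:openjourney-lora:0.7>` to the prompt, where the first part is the LoRA file name without its extension (assumed here to be `openjourney-lora`) and the second part is the weight; lowering the weight from the default `1` is what the last step above refers to.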
# Examples:




|