repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
adsjklfsd/distillbert-base-uncased-finetuned-clinc | adsjklfsd | distilbert | 32 | 5 | transformers | 1 | text-classification | true | false | false | apache-2.0 | null | ['clinc_oos'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 933 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
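The card does not include a usage snippet; a minimal sketch of loading this intent classifier with the `transformers` pipeline might look like the following (the example utterance is hypothetical):
```python
from transformers import pipeline

# Load the fine-tuned CLINC intent classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="adsjklfsd/distillbert-base-uncased-finetuned-clinc",
)

# Hypothetical utterance, for illustration only.
print(classifier("How do I reset the password for my online banking account?"))
```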
|
nlpso/m1_ind_layers_ocr_cmbert_io_level_1 | nlpso | camembert | 13 | 0 | transformers | 0 | token-classification | true | false | false | null | ['fr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,437 |
# m1_ind_layers_ocr_cmbert_io_level_1
## Introduction
This model was fine-tuned from [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset: noisy (Pero OCR)
* Tagging format: IO
* Recognised entities: level 1
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-1 entities of the dataset. It has to be used together with [m1_ind_layers_ocr_cmbert_io_level_2](https://huggingface.co/nlpso/m1_ind_layers_ocr_cmbert_io_level_2) to recognise the level-2 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ocr_cmbert_io_level_1")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ocr_cmbert_io_level_1")
```
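Since level-1 and level-2 entities are predicted by two separate models, a minimal sketch of running both over the same directory entry could look like this (the example entry string is hypothetical):
```python
from transformers import pipeline

# Hypothetical OCRed directory entry, for illustration only.
entry = "Dupont, fabricant de bronzes, rue du Temple, 21."

# Level-1 entities (PER, ACT, DESC, SPAT) come from this model...
level1 = pipeline(
    "token-classification",
    model="nlpso/m1_ind_layers_ocr_cmbert_io_level_1",
    aggregation_strategy="simple",
)
# ...and level-2 entities (TITREH, TITREP, LOC, CARDINAL, FT) from its companion model.
level2 = pipeline(
    "token-classification",
    model="nlpso/m1_ind_layers_ocr_cmbert_io_level_2",
    aggregation_strategy="simple",
)

print(level1(entry))
print(level2(entry))
```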
|
nlpso/m1_ind_layers_ocr_cmbert_io_level_2 | nlpso | camembert | 13 | 0 | transformers | 0 | token-classification | true | false | false | null | ['fr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,437 |
# m1_ind_layers_ocr_cmbert_io_level_2
## Introduction
This model was fine-tuned from [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset: noisy (Pero OCR)
* Tagging format: IO
* Recognised entities: level 2
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-2 entities of the dataset. It has to be used together with [m1_ind_layers_ocr_cmbert_io_level_1](https://huggingface.co/nlpso/m1_ind_layers_ocr_cmbert_io_level_1) to recognise the level-1 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ocr_cmbert_io_level_2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ocr_cmbert_io_level_2")
```
|
nlpso/m1_ind_layers_ocr_cmbert_iob2_level_1 | nlpso | camembert | 13 | 1 | transformers | 0 | token-classification | true | false | false | null | ['fr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,449 |
# m1_ind_layers_ocr_cmbert_iob2_level_1
## Introduction
This model was fine-tuned from [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset: noisy (Pero OCR)
* Tagging format: IOB2
* Recognised entities: level 1
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-1 entities of the dataset. It has to be used together with [m1_ind_layers_ocr_cmbert_iob2_level_2](https://huggingface.co/nlpso/m1_ind_layers_ocr_cmbert_iob2_level_2) to recognise the level-2 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ocr_cmbert_iob2_level_1")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ocr_cmbert_iob2_level_1")
```
|
nlpso/m1_ind_layers_ocr_cmbert_iob2_level_2 | nlpso | camembert | 13 | 2 | transformers | 0 | token-classification | true | false | false | null | ['fr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,449 |
# m1_ind_layers_ocr_cmbert_iob2_level_2
## Introduction
This model was fine-tuned from [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset: noisy (Pero OCR)
* Tagging format: IOB2
* Recognised entities: level 2
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-2 entities of the dataset. It has to be used together with [m1_ind_layers_ocr_cmbert_iob2_level_1](https://huggingface.co/nlpso/m1_ind_layers_ocr_cmbert_iob2_level_1) to recognise the level-1 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ocr_cmbert_iob2_level_2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ocr_cmbert_iob2_level_2")
```
|
BeardedJohn/bert-finetuned-ner-per-v6 | BeardedJohn | bert | 8 | 15 | transformers | 0 | token-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,507 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-per-v6
This model is a fine-tuned version of [BeardedJohn/bert-ner-wikiann](https://huggingface.co/BeardedJohn/bert-ner-wikiann) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0155
- Validation Loss: 0.0025
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 313, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0155 | 0.0025 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.11.0
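The card does not show how to load the model; since only TensorFlow weights are flagged for this repository, a minimal sketch using the TF classes might be (the example sentence is hypothetical):
```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("BeardedJohn/bert-finetuned-ner-per-v6")
model = TFAutoModelForTokenClassification.from_pretrained("BeardedJohn/bert-finetuned-ner-per-v6")

# Hypothetical sentence, for illustration only.
inputs = tokenizer("John Smith lives in Berlin.", return_tensors="tf")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, num_labels)
```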
|
ZTamas/xlm-roberta-large-squad2_impossible_long_answer | ZTamas | xlm-roberta | 9 | 3 | transformers | 0 | question-answering | true | false | false | null | ['hu'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 873 |
This model is a fine-tuned version of deepset/xlm-roberta-large-squad2 on the milqa dataset.
Packages to install for the large RoBERTa model:
```py
sentencepiece==0.1.97
protobuf==3.20.0
```
How to use:
```py
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="ZTamas/xlm-roberta-large-squad2_impossible_long_answer",
    tokenizer="ZTamas/xlm-roberta-large-squad2_impossible_long_answer",
    device=0,                    # GPU index; use -1 to run on CPU
    handle_impossible_answer=True,
    max_answer_len=1000          # deliberately large so the answer can be as long as the model wants
)

# `context` and `question` hold your own (Hungarian) input text.
predictions = qa_pipeline({
    'context': context,
    'question': question
})

print(predictions)
```
|
nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_1 | nlpso | camembert | 13 | 2 | transformers | 0 | token-classification | true | false | false | null | ['fr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,462 |
# m1_ind_layers_ocr_ptrn_cmbert_io_level_1
## Introduction
This model was fine-tuned from [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset: noisy (Pero OCR)
* Tagging format: IO
* Recognised entities: level 1
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-1 entities of the dataset. It has to be used together with [m1_ind_layers_ocr_ptrn_cmbert_io_level_2](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_2) to recognise the level-2 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_1")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_1")
```
|
nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_2 | nlpso | camembert | 13 | 0 | transformers | 0 | token-classification | true | false | false | null | ['fr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,462 |
# m1_ind_layers_ocr_ptrn_cmbert_io_level_2
## Introduction
This model was fine-tuned from [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset: noisy (Pero OCR)
* Tagging format: IO
* Recognised entities: level 2
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-2 entities of the dataset. It has to be used together with [m1_ind_layers_ocr_ptrn_cmbert_io_level_1](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_1) to recognise the level-1 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ocr_ptrn_cmbert_io_level_2")
```
|
nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_1 | nlpso | camembert | 13 | 1 | transformers | 0 | token-classification | true | false | false | null | ['fr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,474 |
# m1_ind_layers_ocr_ptrn_cmbert_iob2_level_1
## Introduction
This model was fine-tuned from [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset: noisy (Pero OCR)
* Tagging format: IOB2
* Recognised entities: level 1
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-1 entities of the dataset. It has to be used together with [m1_ind_layers_ocr_ptrn_cmbert_iob2_level_2](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_2) to recognise the level-2 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_1")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_1")
```
|
harish03/my_awesome_qa_model | harish03 | distilbert | 12 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,260 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7 | 5.5290 |
| No log | 2.0 | 14 | 5.1385 |
| No log | 3.0 | 21 | 4.9699 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
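No usage example is included in the card; a minimal extractive-QA sketch with this checkpoint might look like the following (the question/context pair is hypothetical):
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("harish03/my_awesome_qa_model")
model = AutoModelForQuestionAnswering.from_pretrained("harish03/my_awesome_qa_model")

# Hypothetical question/context pair, for illustration only.
question = "What does the model predict?"
context = "The model predicts the start and end positions of the answer span."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end token positions and decode the span.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```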
|
nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_2 | nlpso | camembert | 13 | 1 | transformers | 0 | token-classification | true | false | false | null | ['fr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,474 |
# m1_ind_layers_ocr_ptrn_cmbert_iob2_level_2
## Introduction
This model was fine-tuned from [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset: noisy (Pero OCR)
* Tagging format: IOB2
* Recognised entities: level 2
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-2 entities of the dataset. It has to be used together with [m1_ind_layers_ocr_ptrn_cmbert_iob2_level_1](https://huggingface.co/nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_1) to recognise the level-1 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ocr_ptrn_cmbert_iob2_level_2")
```
|
cfalholt/A2C-PandaReachDense-v2 | cfalholt | null | 13 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is assumed from the usual SB3 naming convention; check the repository's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is an assumption; verify it against the files in the repository.
checkpoint = load_from_hub("cfalholt/A2C-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
nhiro3303/dqn-SpaceInvadersNoFrameskip-v4 | nhiro3303 | null | 15 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 2,221 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nhiro3303 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nhiro3303 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga nhiro3303
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
pfunk/Pong-v4-DQPN_p100_e0.25-seed1 | pfunk | null | 11 | 0 | cleanrl | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 2,000 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p100_e0.25.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p100_e0.25]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p100_e0.25 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_e0.25-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_e0.25-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_e0.25-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p100_e0.25 --start-policy-f 100000 --end-policy-f 1000 --evaluation-fraction 0.25 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.25,
'exp_name': 'DQPN_p100_e0.25',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 100000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
Mustafa21/segformer-b0-scene-parse-150 | Mustafa21 | segformer | 6 | 0 | transformers | 0 | null | true | false | false | other | null | ['scene_parse_150'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 30,876 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9348
- Mean Iou: 0.0598
- Mean Accuracy: 0.1188
- Overall Accuracy: 0.3515
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 3.579 | 4.0 | 100 | 4.1386 | 0.0333 | 0.0725 | 0.3088 | [0.06580055095662675, 0.34785089422717685, 0.2744065522396124, 0.760051925327976, 0.26374943358340286, 0.04979291813326523, 0.16430853978046717, 0.0, 0.0, 0.013600273200717152, 0.0, 0.030700190985055137, 0.0, 0.08349348457860692, 0.0, 0.0, 0.04156576343954592, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.005223982518437586, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] | [0.11262633498765716, 0.6281159732772887, 0.46351564104319254, 0.9583615289740532, 0.49121370266291986, 0.8362829382914048, 0.26300703285753774, 0.0, 0.0, 0.015955209230584324, 0.0, 0.2710168930109308, 0.0, 0.09049112869426064, nan, 0.0, 0.061528197125442434, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.011145104895104896, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] |
| 3.0671 | 8.0 | 200 | 3.4485 | 0.0411 | 0.0799 | 0.3471 | [0.04880581365501219, 0.28611369694329464, 0.4226913441260881, 0.7676016467854346, 0.293271350086148, 0.08166525444848727, 0.04463570322098179, 0.0, 0.0, 0.06244885866651618, 0.21411810715682442, 0.09441377915209664, 0.0, 0.1498418266670843, 0.0, 0.0, 0.0029784675920865667, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] | [0.07904583173812046, 0.8000120032141941, 0.8110031107156167, 0.9936390634535979, 0.4023961818100718, 0.6110997146746732, 0.04562314666630268, 0.0, 0.0, 0.08867009875603453, 0.29456599713055953, 0.32464392182842, 0.0, 0.18227770933614437, nan, 0.0, 0.0036977982834074026, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] |
| 3.3916 | 12.0 | 300 | 3.3288 | 0.0567 | 0.1042 | 0.3551 | [0.06142270314067807, 0.33725784509341555, 0.3419961970758755, 0.8648116255248258, 0.2942643567133751, 0.09493237607073618, 0.1930087092766343, 0.0, 0.0, 0.10225271339643971, 0.534092106649805, 0.10713266043027728, 0.0, 0.34205202312138727, 0.0, 0.0, 0.015607055567007273, 0.0, 0.047514241325737956, 0.06294641401128469, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] | [0.12250060481077056, 0.8265780058882434, 0.5577821130049209, 0.9809401724679463, 0.4473348499040992, 0.6154821625048953, 0.21522045500301992, 0.0, 0.0, 0.21108351194887923, 0.9088952654232425, 0.21129513083802584, 0.0, 0.4509315823564525, nan, 0.0, 0.017980120093632323, 0.0, 0.052316464718460444, 0.4256022934009593, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] |
| 2.8877 | 16.0 | 400 | 3.2753 | 0.0614 | 0.1245 | 0.3634 | [0.06383115554059329, 0.41857445552298167, 0.29548886933025376, 0.798382492455495, 0.3195998486222483, 0.12722501061600686, 0.3433012182067022, 0.006844884816865569, 0.0, 0.1241607569071209, 0.6107698267751397, 0.0843566938481599, 0.0, 0.4338272183016154, 0.0, 0.0, 0.004662969701591729, 0.00055213944102833, 0.05997392438070404, 0.05208837134317993, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] | [0.13404838016408574, 0.7280438428512523, 0.5011142294323895, 0.9498453814902109, 0.5590133369017352, 0.6816478003841635, 0.4322528689349708, 0.006846782794169925, 0.0, 0.2896877065763907, 0.9895982783357246, 0.2871480622722756, 0.0, 0.6589532240242834, nan, 0.0, 0.004839931698160147, 0.00055213944102833, 0.07213114754098361, 0.9262362864546005, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] |
| 1.5665 | 20.0 | 500 | 3.0559 | 0.0577 | 0.1128 | 0.3519 | [0.05365286508064105, 0.3282487583877674, 0.28965296783166583, 0.774891012004414, 0.33807146552086864, 0.11450745780497278, 0.35091408318934125, 0.08079401504763664, 0.0, 0.09660168158269318, 0.4879985958137698, 0.050819046580719014, 0.0, 0.3829426906923181, 0.0, 0.0, 0.0, 0.0009134813967527722, 0.10720249359902037, 0.060271926930490195, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] | [0.09641153438730907, 0.7782728486183522, 0.4910272206630616, 0.9431805704158995, 0.4874080021410411, 0.626783283292617, 0.39293233647155973, 0.08085318165659439, 0.0, 0.16789527453376335, 0.9972202295552367, 0.12351772109970188, 0.0, 0.6461638111688278, nan, 0.0, 0.0, 0.0009136120247231358, 0.13727726300784035, 0.5740669276145323, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] |
| 2.6115 | 24.0 | 600 | 3.1396 | 0.0569 | 0.1188 | 0.3444 | [0.06880446535550425, 0.34934771397202286, 0.23130817748965554, 0.7908774779692913, 0.3357069651180227, 0.1563921844491642, 0.3686594560714634, 0.002374486357367306, 0.006782773841450053, 0.1394629716611868, 0.4630545968647345, 0.033332909573994735, 0.0, 0.43281617301363423, 0.0, 0.0, 0.0, 0.008420522465875243, 0.08884727424053267, 0.05277220184121271, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] | [0.13825034703219943, 0.7536062434496349, 0.33853940873586297, 0.9100556933063336, 0.4833203978768009, 0.6467747048840983, 0.47768492388748796, 0.0023746889441877, 0.006862548433755292, 0.2854910758999219, 0.9985652797704447, 0.17369990062934745, 0.0, 0.7015253311657798, nan, 0.0, 0.0, 0.008500564056119611, 0.12173913043478261, 0.8406196592976459, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] |
| 3.0416 | 28.0 | 700 | 3.0044 | 0.0591 | 0.1174 | 0.3423 | [0.0646087810352083, 0.347443903913795, 0.21304793478434952, 0.8562226535366417, 0.29428007284969515, 0.23279462934047732, 0.32205680610814, 0.018408918129693707, 0.01120207927225471, 0.11918005261401983, 0.5194932685115932, 0.04446506819543484, 0.0, 0.4104059925322419, 0.0, 0.0, 3.3032735440821854e-05, 0.021106140868731454, 0.13277731442869056, 0.055629952456418386, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] | [0.13522981634891593, 0.7995663283168049, 0.3035229424783809, 0.9789203006059616, 0.41110843480975956, 0.5477313839210787, 0.4333891821615885, 0.018421613935300393, 0.011375993749802043, 0.2182548426513892, 0.9965028694404591, 0.25074527989400464, 0.0, 0.682626973341631, nan, 0.0, 3.3924754893645894e-05, 0.021160048937826716, 0.22694226657163222, 0.7740779535806825, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] |
| 2.3098 | 32.0 | 800 | 2.9614 | 0.0573 | 0.1178 | 0.3504 | [0.0679411632575971, 0.34119350511810986, 0.28802277057062503, 0.8158180668489442, 0.3105140134290669, 0.16321848801683037, 0.2716467848597101, 0.05297463999606822, 0.0350085910652921, 0.13735145169340862, 0.3878622427551449, 0.026667516627355985, 0.0, 0.3766319671802547, 0.0, 0.0, 0.0, 0.04991220974964983, 0.09784060039596666, 0.07225085437949734, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] | [0.1261024584677031, 0.7683968707176033, 0.4829405009088766, 0.9356526535396276, 0.4427512377893751, 0.5758349961770136, 0.31804827113184303, 0.06130821187344472, 0.036139235828837483, 0.24529757016085416, 0.9937230989956959, 0.1247101689301093, 0.0, 0.7159150081918285, nan, 0.0, 0.0, 0.05024866135977247, 0.3029223093371347, 0.6515243398202768, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] |
| 2.1938 | 36.0 | 900 | 2.9488 | 0.0594 | 0.1249 | 0.3459 | [0.06585800494107165, 0.3508310644792025, 0.18600196463654223, 0.834197988122546, 0.3318225547307336, 0.20775329584216834, 0.32338765142526765, 0.05470014525835235, 0.06240662110889198, 0.14454642937953152, 0.4382133018496655, 0.05177645159191592, 0.0, 0.40240752473317415, 0.0, 0.0, 0.002041653950179434, 0.0824539269670208, 0.14149933065595716, 0.06395764773215053, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] | [0.13367727297282697, 0.7606070069874267, 0.2664244263429697, 0.9760041265650177, 0.4793951558945537, 0.5883482833858605, 0.40272578680306975, 0.056224671169569855, 0.06416482785561198, 0.29432503355301376, 0.9984756097560976, 0.28996356409407087, 0.0, 0.7340767364771328, nan, 0.0, 0.0021938008164557677, 0.08333333333333333, 0.3013542409123307, 0.8132201334141904, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] |
| 2.2363 | 40.0 | 1000 | 2.9244 | 0.0611 | 0.1217 | 0.3553 | [0.056973858474943365, 0.3386149490256662, 0.20168694160869144, 0.8556313053906686, 0.35219903676596065, 0.22844034727341697, 0.3070734990873526, 0.05227858746342515, 0.06883014105342651, 0.13722677117596285, 0.49003931269048984, 0.03825335328755053, 0.0, 0.38120286874439696, 0.0, 0.0, 0.002859111618483437, 0.1457338115402583, 0.14061389337641356, 0.054187077643821864, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] | [0.10210145318999514, 0.8009733717584375, 0.2924814231714088, 0.9765225007596863, 0.5302538025781703, 0.5952110102008467, 0.38060692093239107, 0.053480270174191255, 0.07024610154460129, 0.2356723623324853, 0.9947991391678622, 0.18118582312023848, 0.0, 0.7344704522651359, nan, 0.0, 0.0031436939534778526, 0.1536576258798481, 0.3101924447612259, 0.642758696730801, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] |
| 1.0629 | 44.0 | 1100 | 2.9252 | 0.0603 | 0.1203 | 0.3483 | [0.06053392685715683, 0.33724479251243744, 0.18490830494434662, 0.864882666468844, 0.3382726264831941, 0.22457793773885942, 0.28838753598606137, 0.07841892213667752, 0.052191408149121266, 0.14250617058232015, 0.488736360436466, 0.042854768796133445, 0.0, 0.3778828405815874, 0.0, 0.0, 0.006438019431113192, 0.11459616279388017, 0.13186599142968447, 0.06180231683466225, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] | [0.11626342033996237, 0.8124708949841013, 0.2563621359740191, 0.9824518844051081, 0.48286542664704046, 0.5643660369617515, 0.34114306968364083, 0.08429434767152506, 0.05326921249617281, 0.24981470723743515, 0.9960545193687231, 0.20500165617754224, 0.0, 0.7410366155682843, nan, 0.0, 0.0074634460766020965, 0.11642197753308864, 0.2895224518888097, 0.679420034180495, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] |
| 1.2086 | 48.0 | 1200 | 2.9348 | 0.0598 | 0.1188 | 0.3515 | [0.05626072938568238, 0.33166386618888843, 0.1985468815712577, 0.8753024278922635, 0.3388658681383159, 0.20375021503526577, 0.2883115388488484, 0.06578657425028546, 0.09109874468817603, 0.13745639776313603, 0.4838794004879749, 0.035545947262651485, 0.0, 0.37344809598330725, 0.0, 0.0, 0.006731664298662486, 0.14014690467322732, 0.08247982943856862, 0.06087707649004871, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] | [0.101030409650666, 0.8290208822584715, 0.2764791110502658, 0.9764918579501493, 0.4801837726928052, 0.5521884266079854, 0.3419415600591019, 0.07004621400639886, 0.09358932398619044, 0.2486528715370285, 0.9958751793400287, 0.19334216628022524, 0.0, 0.7273835680810801, nan, 0.0, 0.00776876887064491, 0.1449862560973672, 0.2812544547398432, 0.571145046584707, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0] |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
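The card does not include an inference snippet; a minimal sketch of running semantic segmentation with this checkpoint might look like the following (the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("Mustafa21/segformer-b0-scene-parse-150")
model = SegformerForSemanticSegmentation.from_pretrained("Mustafa21/segformer-b0-scene-parse-150")

image = Image.open("scene.jpg")  # placeholder path, for illustration only
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits   # shape: (1, num_labels, height/4, width/4)

pred = logits.argmax(dim=1)[0]        # per-pixel class indices
print(pred.shape)
```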
|
rahul-t-p/ppo-SnowballTarget | rahul-t-p | null | 20 | 2 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget'] | false | true | true | 856 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: rahul-t-p/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
kurohige/PyraMIDz | kurohige | null | 16 | 1 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids'] | false | true | true | 827 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: kurohige/PyraMIDz
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
coreml/coreml-waifu-diffusion-v1-4 | coreml | null | 4 | 0 | null | 1 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['coreml', 'stable-diffusion', 'text-to-image'] | false | true | true | 2,855 |
# Core ML Converted Model
This model was converted to Core ML for use on Apple Silicon devices by following Apple's instructions [here](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).<br>
Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.<br>
`split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
wd-1-4-anime_e2_split-einsum.zip contains TextEncoder of wd-1-4-anime_e1_split-einsum.zip.<br>

<sub>masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, watercolor, night, turtleneck</sub>
# Waifu Diffusion v1.4
Waifu Diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning.
- [Waifu Diffusion 1.4 Anime Epoch 1](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/wd-1-4-anime_e1.ckpt): A test model made to properly ensure that the training setup works.
- [Waifu Diffusion 1.4 Anime Inference Config](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/wd-1-4-anime_e1.yaml): A file included to allow for inference with Automatic's WebUI and with the original Stable Diffusion codebase.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Downstream Uses
This model can be used for entertainment purposes and as a generative art assistant.
## Team Members and Acknowledgements
This project would not have been possible without the incredible work by Stability AI and NovelAI.
- [Haru](https://github.com/harubaru)
- [Salt](https://github.com/sALTaccount/)
- [Cafe](https://twitter.com/cafeai_labs)
In order to reach us, you can join our [Discord server](https://discord.gg/touhouai).
[](https://discord.gg/touhouai)
|
nlpso/m2_joint_label_ref_cmbert_io
|
nlpso
|
camembert
| 13 | 7 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,118 |
# m2_joint_label_ref_cmbert_io
## Introduction
This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset : ground-truth
* Tagging format : IO
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m2_joint_label_ref_cmbert_io")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m2_joint_label_ref_cmbert_io")
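# Minimal inference sketch: the pipeline below is the standard transformers
# token-classification pipeline; the aggregation strategy and the sample
# directory entry are illustrative assumptions, not values from the authors.
from transformers import pipeline

nested_ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # assumption: merge sub-word pieces into spans
)
print(nested_ner("Dufour, tailleur, rue de la Paix, 12."))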
|
nlpso/m2_joint_label_ref_cmbert_iob2
|
nlpso
|
camembert
| 13 | 2 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,126 |
# m2_joint_label_ref_cmbert_iob2
## Introduction
This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset : ground-truth
* Tagging format : IOB2
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m2_joint_label_ref_cmbert_iob2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m2_joint_label_ref_cmbert_iob2")
|
nlpso/m2_joint_label_ref_ptrn_cmbert_io
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,133 |
# m2_joint_label_ref_ptrn_cmbert_io
## Introduction
This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset : ground-truth
* Tagging format : IO
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m2_joint_label_ref_ptrn_cmbert_io")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m2_joint_label_ref_ptrn_cmbert_io")
|
nlpso/m2_joint_label_ref_ptrn_cmbert_iob2
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,141 |
# m2_joint_label_ref_ptrn_cmbert_iob2
## Introduction
This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset : ground-truth
* Tagging format : IOB2
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m2_joint_label_ref_ptrn_cmbert_iob2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m2_joint_label_ref_ptrn_cmbert_iob2")
|
nlpso/m2_joint_label_ocr_cmbert_io
|
nlpso
|
camembert
| 13 | 2 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,170 |
# m2_joint_label_ocr_cmbert_io
## Introduction
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset : noisy (Pero OCR)
* Tagging format : IO
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m2_joint_label_ocr_cmbert_io")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m2_joint_label_ocr_cmbert_io")
|
rahul-t-p/ppo-Pyramids
|
rahul-t-p
| null | 14 | 4 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
| false | true | true | 832 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: rahul-t-p/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Il8/dqn-SpaceInvadersNoFrameskip-v4
|
Il8
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,202 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Il8 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Il8 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Il8
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
nlpso/m2_joint_label_ocr_cmbert_iob2
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,178 |
# m2_joint_label_ocr_cmbert_iob2
## Introduction
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset : noisy (Pero OCR)
* Tagging format : IOB2
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m2_joint_label_ocr_cmbert_iob2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m2_joint_label_ocr_cmbert_iob2")
|
nlpso/m2_joint_label_ocr_ptrn_cmbert_io
|
nlpso
|
camembert
| 13 | 2 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,185 |
# m2_joint_label_ocr_ptrn_cmbert_io
## Introduction
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset : noisy (Pero OCR)
* Tagging format : IO
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m2_joint_label_ocr_ptrn_cmbert_io")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m2_joint_label_ocr_ptrn_cmbert_io")
|
spatial/Reinforce-Pixelcopter-PLE-v0
|
spatial
| null | 6 | 0 | null | 1 |
reinforcement-learning
| false | false | false | null | null | null | null | 1 | 0 | 0 | 1 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nlpso/m2_joint_label_ocr_ptrn_cmbert_iob2
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,193 |
# m2_joint_label_ocr_ptrn_cmbert_iob2
## Introduction
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset : noisy (Pero OCR)
* Tagging format : IOB2
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m2_joint_label_ocr_ptrn_cmbert_iob2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m2_joint_label_ocr_ptrn_cmbert_iob2")
|
ilyaster-rl/dqn-SpaceInvadersNoFrameskip-v4-il
|
ilyaster-rl
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,225 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ilyaster-rl -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ilyaster-rl -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ilyaster-rl
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.2),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 50000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
plpkpjph/color_extraction_2023_02_09-finetuned-ner
|
plpkpjph
|
bert
| 10 | 7 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 963 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# color_extraction_2023_02_09-finetuned-ner
This model is a fine-tuned version of [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased) on the None dataset.
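A minimal inference sketch, assuming the standard `transformers` token-classification pipeline; the German example sentence and the aggregation strategy are illustrative only:
```python
from transformers import pipeline

# Hedged sketch: tag colour mentions in a German sentence with this checkpoint.
ner = pipeline(
    "token-classification",
    model="plpkpjph/color_extraction_2023_02_09-finetuned-ner",
    aggregation_strategy="simple",  # assumption: merge sub-word pieces into entities
)
print(ner("Das Kleid ist dunkelblau mit roten Streifen."))
```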
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
vishnun/kg_model
|
vishnun
|
distilbert
| 12 | 8 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,878 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kg_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a custom-built dataset derived from a publicly available sentences dataset on Kaggle.
It achieves the following results on the evaluation set:
- Loss: 0.2587
- Precision: 0.8356
- Recall: 0.8057
- F1: 0.8204
- Accuracy: 0.9170
## Model description
Fine-tuned model for knowledge graph creation in NLP. The dataset (~20k sentences) was created by building knowledge-graph annotations with the spaCy library. The original dataset is available on [Kaggle](https://www.kaggle.com/datasets/mfekadu/sentences).
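A minimal usage sketch with the generic `transformers` token-classification pipeline; the sample sentence and the aggregation setting are illustrative assumptions:
```python
from transformers import pipeline

# Hedged sketch: tag a sentence so the predicted spans can feed a knowledge graph.
tagger = pipeline(
    "token-classification",
    model="vishnun/kg_model",
    aggregation_strategy="simple",
)
print(tagger("Marie Curie discovered radium in Paris."))
```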
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4931 | 1.0 | 957 | 0.3031 | 0.7872 | 0.7592 | 0.7729 | 0.8935 |
| 0.2693 | 2.0 | 1914 | 0.2645 | 0.8345 | 0.7868 | 0.8100 | 0.9110 |
| 0.2142 | 3.0 | 2871 | 0.2602 | 0.8330 | 0.7980 | 0.8152 | 0.9152 |
| 0.1894 | 4.0 | 3828 | 0.2587 | 0.8356 | 0.8057 | 0.8204 | 0.9170 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Rowehn/poca-SoccerTwos-2L
|
Rowehn
| null | 21 | 185 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 843 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: Rowehn/poca-SoccerTwos-2L
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Subitha/roberta-squad
|
Subitha
|
roberta
| 17 | 11 |
transformers
| 0 |
question-answering
| true | false | false |
cc-by-4.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 930 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-squad
This model is a fine-tuned version of [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2) on the squad dataset.
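A minimal question-answering sketch; the question/context pair is illustrative and the pipeline task is the standard `transformers` one:
```python
from transformers import pipeline

# Hedged sketch: extractive QA with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="Subitha/roberta-squad")
answer = qa(
    question="Where was the Eiffel Tower built?",
    context="The Eiffel Tower was built in Paris for the 1889 World's Fair.",
)
print(answer["answer"], answer["score"])
```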
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Rubywong123/ppo-Huggy
|
Rubywong123
| null | 32 | 4 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 1,073 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: Rubywong123/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
You can also use the visualization method from the official notebook to **play directly in your browser**:
1. Go to https://singularite.itch.io/huggy
2. Click on Play with my Huggy model
3. Select model repo and .nn/.onnx file.
4. Click **Play with Huggy**.
|
nlpso/m3_hierarchical_ner_ref_cmbert_io
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,133 |
# m3_hierarchical_ner_ref_cmbert_io
## Introduction
This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset : ground-truth
* Tagging format : IO
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m3_hierarchical_ner_ref_cmbert_io")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m3_hierarchical_ner_ref_cmbert_io")
|
nlpso/m3_hierarchical_ner_ref_cmbert_iob2
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,141 |
# m3_hierarchical_ner_ref_cmbert_iob2
## Introduction
This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset : ground-truth
* Tagging format : IOB2
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m3_hierarchical_ner_ref_cmbert_iob2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m3_hierarchical_ner_ref_cmbert_iob2")
|
NikiBase/train.log
|
NikiBase
|
distilbert
| 10 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 948 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train.log
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_io
|
nlpso
|
camembert
| 13 | 1 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,148 |
# m3_hierarchical_ner_ref_ptrn_cmbert_io
## Introduction
This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset : ground-truth
* Tagging format : IO
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_io")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_io")
|
davanstrien/dataset_mentions2
|
davanstrien
|
albert
| 13 | 10 |
sentence-transformers
| 1 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,424 |
# davanstrien/dataset_mentions2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("davanstrien/dataset_mentions2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_iob2
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,156 |
# m3_hierarchical_ner_ref_ptrn_cmbert_iob2
## Introduction
This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset : ground-truth
* Tagging format : IOB2
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_iob2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m3_hierarchical_ner_ref_ptrn_cmbert_iob2")
|
nlpso/m3_hierarchical_ner_ocr_cmbert_io
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,185 |
# m3_hierarchical_ner_ocr_cmbert_io
## Introduction
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset : noisy (Pero OCR)
* Tagging format : IO
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m3_hierarchical_ner_ocr_cmbert_io")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m3_hierarchical_ner_ocr_cmbert_io")
|
nikogarro/PPO-Huggy
|
nikogarro
| null | 28 | 1 |
ml-agents
| 1 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 820 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: nikogarro/PPO-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
nlpso/m3_hierarchical_ner_ocr_cmbert_iob2
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,193 |
# m3_hierarchical_ner_ocr_cmbert_iob2
## Introduction
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset : noisy (Pero OCR)
* Tagging format : IOB2
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m3_hierarchical_ner_ocr_cmbert_iob2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m3_hierarchical_ner_ocr_cmbert_iob2")
|
nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_io
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 1 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,200 |
# m3_hierarchical_ner_ocr_ptrn_cmbert_io
## Introduction
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset : noisy (Pero OCR)
* Tagging format : IO
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_io")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_io")
|
nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_iob2
|
nlpso
|
camembert
| 13 | 2 |
transformers
| 1 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,208 |
# m3_hierarchical_ner_ocr_ptrn_cmbert_iob2
## Introduction
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset : noisy (Pero OCR)
* Tagging format : IOB2
* Recognised entities : 'All'
## Load model from the Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_iob2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m3_hierarchical_ner_ocr_ptrn_cmbert_iob2")
|
hectorjelly/Keano2
|
hectorjelly
| null | 22 | 179 |
ml-agents
| 1 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 836 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: hectorjelly/Keano2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
schreon/gpt2-lhm-large-05
|
schreon
|
gpt2
| 11 | 0 |
transformers
| 0 |
text-generation
| true | false | false | null | null |
['training_corpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 965 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-lhm-large-05
This model was trained from scratch on the training_corpus dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Galiess/a2c-PandaReachDense-v2
|
Galiess
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
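Until the snippet above is filled in, here is a rough sketch of how such a checkpoint is usually loaded; the archive `filename`, the `panda_gym` registration step and the older Gym step API are assumptions:
```python
import gym
import panda_gym  # noqa: F401  (assumed to register the PandaReachDense environments)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hedged sketch: download the checkpoint from the Hub and run one greedy episode.
checkpoint = load_from_hub(
    repo_id="Galiess/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed archive name
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```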
|
seungwoos/ppo-lunar-lander-v2
|
seungwoos
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
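A rough sketch of how this checkpoint could be loaded until the author adds their own code; the archive `filename` and the older Gym step API are assumptions:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hedged sketch: fetch the checkpoint from the Hub and evaluate one episode.
checkpoint = load_from_hub(
    repo_id="seungwoos/ppo-lunar-lander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed archive name
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```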
|
juliensimon/xlm-v-base-language-id
|
juliensimon
|
xlm-roberta
| 10 | 84 |
transformers
| 1 |
text-classification
| true | false | false |
mit
| null |
['fleurs']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'language-identification', 'openvino']
| true | true | true | 2,951 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-v-base-language-id
This model is a fine-tuned version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) on the [google/fleurs](https://huggingface.co/datasets/google/fleurs) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0241
- Accuracy: 0.9930
# Usage
The simplest way to use the model is with a text classification pipeline:
```
from transformers import pipeline
model_id = "juliensimon/xlm-v-base-language-id"
p = pipeline("text-classification", model=model_id)
p("Hello world")
# [{'label': 'English', 'score': 0.9802148342132568}]
```
The model is also compatible with [Optimum Intel](https://github.com/huggingface/optimum-intel).
For example, you can optimize it with Intel OpenVINO and enjoy a 2x inference speedup (or more).
```
from optimum.intel.openvino import OVModelForSequenceClassification
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
pipeline)
model_id = "juliensimon/xlm-v-base-language-id"
ov_model = OVModelForSequenceClassification.from_pretrained(
model_id, from_transformers=True
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
p = pipeline("text-classification", model=ov_model, tokenizer=tokenizer)
p("Hello world")
# [{'label': 'English', 'score': 0.9802149534225464}]
```
## Intended uses & limitations
The model can accurately detect 102 languages. You can find the list on the [dataset](https://huggingface.co/datasets/google/fleurs) page.
## Training and evaluation data
The model has been trained and evaluated on the complete google/fleurs training and validation sets.
## Training procedure
The training script is included in the repository. The model has been trained on a p3dn.24xlarge instance on AWS (8 NVIDIA V100 GPUs).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6368 | 1.0 | 531 | 0.4593 | 0.9689 |
| 0.059 | 2.0 | 1062 | 0.0412 | 0.9899 |
| 0.0311 | 3.0 | 1593 | 0.0275 | 0.9918 |
| 0.0255 | 4.0 | 2124 | 0.0243 | 0.9928 |
| 0.017 | 5.0 | 2655 | 0.0241 | 0.9930 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
mshibatatt/dqn-SpaceInvadersNoFrameskip-v4
|
mshibatatt
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,219 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mshibatatt -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mshibatatt -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mshibatatt
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 10000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 500),
('train_freq', 4),
('normalize', False)])
```
|
nlpso/m0_flat_ner_ref_cmbert_io
|
nlpso
|
camembert
| 13 | 2 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 986 |
# m0_flat_ner_ref_cmbert_io
## Introduction
This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Description
-|-
O |Outside of a named entity
PER |Person or company name
ACT |Person or company professional activity
TITRE |Distinction
LOC |Street name
CARDINAL |Street number
FT |Geographical feature
## Experiment parameter
* Pretrained-model : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset : ground-truth
* Tagging format : IO
* Recognised entities : All (flat entities)
## Load model from the HuggingFace
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m0_flat_ner_ref_cmbert_io")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m0_flat_ner_ref_cmbert_io")
|
gubartz/flan-t5-base2
|
gubartz
|
t5
| 15 | 29 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 2,053 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base2
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0442
- Rouge1: 18.1369
- Rouge2: 17.0674
- Rougel: 18.1514
- Rougelsum: 18.16
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 208 | 0.0576 | 18.1295 | 17.0545 | 18.1408 | 18.1542 |
| No log | 2.0 | 416 | 0.0506 | 18.1328 | 17.0598 | 18.1455 | 18.1561 |
| No log | 3.0 | 624 | 0.0486 | 18.1369 | 17.0674 | 18.1514 | 18.16 |
| No log | 4.0 | 832 | 0.0470 | 18.1369 | 17.0674 | 18.1514 | 18.16 |
| No log | 5.0 | 1040 | 0.0442 | 18.1369 | 17.0674 | 18.1514 | 18.16 |
| No log | 6.0 | 1248 | 0.0456 | 18.1369 | 17.0674 | 18.1514 | 18.16 |
| No log | 7.0 | 1456 | 0.0450 | 18.1369 | 17.0674 | 18.1514 | 18.16 |
| No log | 8.0 | 1664 | 0.0442 | 18.1369 | 17.0674 | 18.1514 | 18.16 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
GCAd/my_awesome_model
|
GCAd
|
distilbert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,380 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2345
- Accuracy: 0.9319
This model follows the [text classification tutorial](https://huggingface.co/docs/transformers/tasks/sequence_classification).
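A minimal inference sketch using the generic `text-classification` pipeline; the review text is illustrative and the label names depend on the model's config:
```python
from transformers import pipeline

# Hedged sketch: classify an IMDb-style review with the fine-tuned checkpoint.
clf = pipeline("text-classification", model="GCAd/my_awesome_model")
print(clf("This movie was surprisingly moving and beautifully shot."))
```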
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2342 | 1.0 | 1563 | 0.1912 | 0.9270 |
| 0.1511 | 2.0 | 3126 | 0.2345 | 0.9319 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
fathyshalab/massive_social-roberta-large-v1-2-7
|
fathyshalab
|
roberta
| 14 | 2 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,460 |
# fathyshalab/massive_social-roberta-large-v1-2-7
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_social-roberta-large-v1-2-7")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
yujiepan/bert-base-uncased-sst2-int8-unstructured80-17epoch
|
yujiepan
|
bert
| 21 | 26 |
transformers
| 0 | null | true | false | false | null |
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,435 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Joint magnitude pruning, quantization and distillation on BERT-base/SST-2
This model applies unstructured magnitude pruning, quantization and distillation jointly to BERT-base while fine-tuning on the GLUE SST-2 dataset.
It achieves the following results on the evaluation set:
- Torch loss: 0.3858
- Torch accuracy: 0.9128
- OpenVINO IR accuracy: 0.9128
- Sparsity in transformer block linear layers: 0.80
## Setup
```
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
git clone https://github.com/yujiepan-work/optimum-intel.git
cd optimum-intel
git checkout -b "magnitude-pruning" 01927af543eaea8678671bf8f4eb78fdb29f8930
pip install -e .[openvino,nncf]
cd examples/openvino/text-classification/
pip install -r requirements.txt
pip install wandb # optional
```
## NNCF config
See `nncf_config.json` in this repo.
## Run
We use one card for training.
```
NNCFCFG=/path/to/nncf/config
python run_glue.py \
--lr_scheduler_type cosine_with_restarts \
--cosine_cycle_ratios 11,6 \
--cosine_cycle_decays 1,1 \
--save_best_model_after_epoch -1 \
--save_best_model_after_sparsity 0.7999 \
--model_name_or_path textattack/bert-base-uncased-SST-2 \
--teacher_model_or_path yoshitomo-matsubara/bert-large-uncased-sst2 \
--distillation_temperature 2 \
--task_name sst2 \
--nncf_compression_config $NNCFCFG \
--distillation_weight 0.95 \
--output_dir /tmp/bert-base-uncased-sst2-int8-unstructured80-17epoch \
--run_name bert-base-uncased-sst2-int8-unstructured80-17epoch \
--overwrite_output_dir \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--learning_rate 5e-05 \
--optim adamw_torch \
--num_train_epochs 17 \
--logging_steps 1 \
--evaluation_strategy steps \
--eval_steps 250 \
--save_strategy steps \
--save_steps 250 \
--save_total_limit 1 \
--fp16 \
--seed 1
```
The best model checkpoint is stored in the `best_model` folder. Here we only upload that checkpoint folder together with some config files.
## Inference
https://gist.github.com/yujiepan-work/c38dc4e56c7a9d803c42988f7b7d260a
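For a quick local check, a minimal sketch with the `transformers` pipeline could look like the following. This makes an assumption about the repository layout (the card notes that only the `best_model` checkpoint folder and some config files were uploaded), so adjust the path or use the gist above for the full OpenVINO evaluation.

```python
from transformers import pipeline

# Assumes the checkpoint is loadable from the repo root; see the gist above otherwise
classifier = pipeline(
    "text-classification",
    model="yujiepan/bert-base-uncased-sst2-int8-unstructured80-17epoch",
)
print(classifier("a charming and often affecting journey"))
```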
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
For a full description of the environment, please refer to `pip-requirements.txt` and `conda-requirements.txt`.
|
guydegnol/ppo-CartPole-v1
|
guydegnol
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 344 |
# **PPO** Agent playing **CartPole-v1**
This is a trained model of a **PPO** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
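As a hedged illustration (the checkpoint filename below is an assumption; check the repository's file list for the actual name), loading and evaluating the agent could look like this:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename is an assumption)
checkpoint = load_from_hub(repo_id="guydegnol/ppo-CartPole-v1", filename="ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)

# Evaluate the agent on a fresh environment
env = gym.make("CartPole-v1")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```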
|
NTCAL/SavedAfterTrainingTest40
|
NTCAL
|
bert
| 10 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,051 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SavedAfterTrainingTest40
This model is a fine-tuned version of [ltgoslo/norbert2](https://huggingface.co/ltgoslo/norbert2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
fathyshalab/massive_transport-roberta-large-v1-2-3
|
fathyshalab
|
roberta
| 14 | 2 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,466 |
# fathyshalab/massive_transport-roberta-large-v1-2-3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_transport-roberta-large-v1-2-3")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
nlpso/m0_flat_ner_ref_ptrn_cmbert_io
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,001 |
# m0_flat_ner_ref_ptrn_cmbert_io
## Introduction
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Description
-|-
O |Outside of a named entity
PER |Person or company name
ACT |Person or company professional activity
TITRE |Distinction
LOC |Street name
CARDINAL |Street number
FT |Geographical feature
## Experiment parameter
* Pretrained-model : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset : ground-truth
* Tagging format : IO
* Recognised entities : All (flat entities)
## Load model from the HuggingFace
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m0_flat_ner_ref_ptrn_cmbert_io")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m0_flat_ner_ref_ptrn_cmbert_io")
```
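Inference can then be run with a token-classification pipeline; the directory entry below is purely illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nlpso/m0_flat_ner_ref_ptrn_cmbert_io",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
# Illustrative Paris trade directory entry (not taken from the dataset)
print(ner("Dupont, boulanger, rue de la Paix, 12"))
```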
|
logoyazilim/crnn_vgg16_bn_20230209-143722
|
logoyazilim
| null | 4 | 2 |
transformers
| 0 | null | true | false | false | null |
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,621 |
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
### Run Configuration
```json
{
"arch": "crnn_vgg16_bn",
"train_path": "doctr-train-aspect-ratio",
"val_path": null,
"train_samples": 1000,
"val_samples": 20,
"font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
"min_chars": 1,
"max_chars": 12,
"name": null,
"epochs": 10,
"batch_size": 64,
"device": 0,
"input_size": 32,
"lr": 0.001,
"weight_decay": 0,
"workers": 4,
"resume": null,
"vocab": "turkish",
"test_only": false,
"show_samples": false,
"wb": false,
"push_to_hub": true,
"pretrained": true,
"sched": "cosine",
"amp": false,
"find_lr": false
}
```
|
nlpso/m0_flat_ner_ocr_cmbert_io
|
nlpso
|
camembert
| 13 | 2 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,038 |
# m0_flat_ner_ocr_cmbert_io
## Introduction
This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Description
-|-
O |Outside of a named entity
PER |Person or company name
ACT |Person or company professional activity
TITRE |Distinction
LOC |Street name
CARDINAL |Street number
FT |Geographical feature
## Experiment parameter
* Pretrained-model : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset : noisy (Pero OCR)
* Tagging format : IO
* Recognised entities : All (flat entities)
## Load model from the HuggingFace
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m0_flat_ner_ocr_cmbert_io")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m0_flat_ner_ocr_cmbert_io")
```
|
nlpso/m0_flat_ner_ocr_ptrn_cmbert_io
|
nlpso
|
camembert
| 13 | 5 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,053 |
# m0_flat_ner_ocr_ptrn_cmbert_io
## Introduction
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** on a nested NER Paris trade directories dataset.
## Dataset
Abbreviation|Description
-|-
O |Outside of a named entity
PER |Person or company name
ACT |Person or company professional activity
TITRE |Distinction
LOC |Street name
CARDINAL |Street number
FT |Geographical feature
## Experiment parameter
* Pretrained-model : [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset : noisy (Pero OCR)
* Tagging format : IO
* Recognised entities : All (flat entities)
## Load model from the HuggingFace
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m0_flat_ner_ocr_ptrn_cmbert_io")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m0_flat_ner_ocr_ptrn_cmbert_io")
```
|
guydegnol/ppo-CarRacing-v0
|
guydegnol
| null | 12 | 1 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CarRacing-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 346 |
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
fathyshalab/massive_calendar-roberta-large-v1-2-93
|
fathyshalab
|
roberta
| 14 | 2 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,466 |
# fathyshalab/massive_calendar-roberta-large-v1-2-93
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_calendar-roberta-large-v1-2-93")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
fathyshalab/massive_play-roberta-large-v1-2-71
|
fathyshalab
|
roberta
| 14 | 2 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,458 |
# fathyshalab/massive_play-roberta-large-v1-2-71
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_play-roberta-large-v1-2-71")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
adsjklfsd/distillbert-bert-clinc-bert-optuna
|
adsjklfsd
|
distilbert
| 10 | 8 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['clinc_oos']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,787 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-bert-clinc-bert-optuna
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0999
- Accuracy: 0.9403
## Model description
More information needed
## Intended uses & limitations
More information needed
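As a minimal usage sketch in the meantime (the example sentence is illustrative; the model predicts one of the `clinc_oos` intent labels):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="adsjklfsd/distillbert-bert-clinc-bert-optuna",
)
# Returns the predicted clinc_oos intent label with its score
print(classifier("Can you transfer 100 dollars from savings to checking?"))
```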
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 0.5777 | 0.7348 |
| 0.7588 | 2.0 | 636 | 0.2863 | 0.8848 |
| 0.7588 | 3.0 | 954 | 0.1794 | 0.9216 |
| 0.2787 | 4.0 | 1272 | 0.1386 | 0.93 |
| 0.1598 | 5.0 | 1590 | 0.1208 | 0.9355 |
| 0.1598 | 6.0 | 1908 | 0.1111 | 0.9403 |
| 0.1245 | 7.0 | 2226 | 0.1057 | 0.9397 |
| 0.1096 | 8.0 | 2544 | 0.1023 | 0.9410 |
| 0.1096 | 9.0 | 2862 | 0.1005 | 0.9410 |
| 0.1034 | 10.0 | 3180 | 0.0999 | 0.9403 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
plpkpjph/color_extraction_2023_02_09_v2-finetuned-ner
|
plpkpjph
|
distilbert
| 10 | 33 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 966 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# color_extraction_2023_02_09_v2-finetuned-ner
This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Ojimi/waifumake-full
|
Ojimi
| null | 25 | 21 |
diffusers
| 1 |
text-to-image
| false | false | false |
agpl-3.0
|
['en', 'vi']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['art']
| false | true | true | 4,400 |
# Waifumake (●'◡'●) AI Art model.

A single student training an AI model that generates art.
## **New model available** : [waifumake-full-v2](waifumake-full-v2.safetensors)!
## What's new in v2:
- Fix color loss.
- Increase image quality.
## Introduction:
- It's an AI art model for converting text to images, images to images, inpainting, and outpainting using Stable Diffusion.
- The AI art model is developed with a focus on the ability to draw anime characters relatively well through fine-tuning using Dreambooth.
- It can be used as a tool for upscaling or rendering anime-style images from 3D modeling software (Blender).
- Create an image from a sketch made in a basic drawing program (MS Paint).
- The model is aimed at everyone and has limitless usage potential.
## Used:
- For 🧨 Diffusers Library:
```python
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("Ojimi/waifumake-full")
pipe = pipe.to("cuda")
prompt = "1girl, animal ears, long hair, solo, cat ears, choker, bare shoulders, red eyes, fang, looking at viewer, animal ear fluff, upper body, black hair, blush, closed mouth, off shoulder, bangs, bow, collarbone"
image = pipe(prompt, negative_prompt="lowres, bad anatomy").images[0]
```
- For Web UI by Automatic1111:
```bash
#Install Web UI.
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd /content/stable-diffusion-webui/
pip install -qq -r requirements.txt
pip install -U xformers #Install `xformers` for better performance.
```
```bash
#Download model.
wget https://huggingface.co/Ojimi/waifumake-full/resolve/main/waifumake-full-v2.safetensors -O /content/stable-diffusion-webui/models/Stable-diffusion/waifumake-full-v2.safetensors
```
```bash
#Run and enjoy ☕.
cd /content/stable-diffusion-webui
python launch.py --xformers
```
- Try it in Google Colab [](https://colab.research.google.com/drive/1D6LNtXrpD2QfUx-d_yztWZVgTiDAyyAT?usp=sharing)
## Tips:
- The `masterpiece` and `best quality` tags are not necessary, as they sometimes lead to contradictory results, but if the output is distorted or discolored, add them.
- The CFG scale should be 7.5 and the step count 28 for the best quality and performance (a short Diffusers sketch applying these settings follows this list).
- Use a sample photo for your idea. `Interrogate DeepBooru` and change the prompts to suit what you want.
- You should use it as a supportive tool for creating works of art, and not rely on it completely.
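A minimal 🧨 Diffusers sketch applying the settings above (the prompt is only an illustration):

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Ojimi/waifumake-full")
pipe = pipe.to("cuda")

prompt = "1girl, solo, long hair, looking at viewer"
image = pipe(
    prompt,
    negative_prompt="lowres, bad anatomy",
    guidance_scale=7.5,        # CFG scale recommended above
    num_inference_steps=28,    # step count recommended above
).images[0]
image.save("waifu.png")
```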
## Preview: v2 model




- Enhance and upscale using Stable Diffusion - Waifumake model v2 (⚠️Performance warning): I recommend against making the image too big, as it can lead to unexpected problems.

## Training:
- **Data**: The model is trained based on a database of various sources from the Internet provided by my friend and images created by another AI.
- **Scheduler**: Euler Ancestral Discrete.
- **Optimizer**: AdamW.
- **Precision**: BF16.
- **Hardware**: Google Colaboratory Pro - NVIDIA A100 40GB VRAM.
## **Limitations:**
- Loss of detail, errors, bad human-like (six-fingered hand) details, deformation, blurring, and unclear images are inevitable.
- Complex tasks cannot be handled.
- ⚠️Content may not be appropriate for all ages: As it is trained on data that includes adult content, the generated images may contain content not suitable for children (depending on your country, specific regulations may apply). If you do not want adult content to appear, make sure you have additional safety measures in place, such as adding "nsfw" to the negative prompt.
- The results generated by the model can be impressive, but it currently only supports English prompts; for other languages, consider using third-party translation tools.
- The model is trained on the `Danbooru` and `Nai` tagging systems, so long free-form text may give poor results.
- My amount of money: 0 USD =((.

## **Desires:**
As this version was made only by myself and a few associates, the model will not be perfect and may differ from what people expect. Contributions from everyone are welcome.
Want to support me? Thank you, please help me make it better. ❤️
## Special Thanks:
This wouldn't have happened if they hadn't made a breakthrough.
- [Runwayml](https://huggingface.co/runwayml/): Base model.
- [d8ahazard](https://github.com/d8ahazard/sd_dreambooth_extension) : Dreambooth.
- [Automatic1111](https://github.com/AUTOMATIC1111/) : Web UI.
- [Mikubill](https://github.com/Mikubill/): Where my ideas started.
- Chat-GPT: Help me do crazy things that I thought I would never do.
- Novel AI: Dataset images. An AI made me thousands of pictures without worrying about copyright or dispute.
- Danbooru: Help me write the correct tag.
- My friend and others.
- And You 🫵❤️
## Copyright:
This license allows anyone to copy, modify, publish, and commercialize the model, but please follow the terms of the GNU General Public License. You can learn more about the GNU General Public License [here](LICENSE.txt).
If any part of the model does not comply with the terms of the GNU General Public License, the copyright and other rights of the model will still be valid.
All AI-generated images are yours, you can do whatever you want, but please obey the laws of your country. We will not be responsible for any problems you cause.
Don't forget me.
# Have fun with your waifu! (●'◡'●)

Like it?
|
MariaK/layoutlmv2-base-uncased_finetuned_docvqa_v2
|
MariaK
|
layoutlmv2
| 13 | 21 |
transformers
| 0 |
document-question-answering
| true | false | false |
cc-by-nc-sa-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 973 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased_finetuned_docvqa_v2
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
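As a hedged usage sketch (the `document-question-answering` pipeline needs `pytesseract` for OCR and `detectron2` for the LayoutLMv2 visual backbone; the image path and question are illustrative):

```python
from transformers import pipeline

qa = pipeline(
    "document-question-answering",
    model="MariaK/layoutlmv2-base-uncased_finetuned_docvqa_v2",
)
# Point this at a scanned document image of your own
print(qa(image="invoice.png", question="What is the invoice number?"))
```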
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pierreguillou/lilt-xlm-roberta-base-finetuned-with-DocLayNet-base-at-linelevel-ml384
|
pierreguillou
|
lilt
| 32 | 40 |
transformers
| 1 |
token-classification
| true | false | false |
mit
|
['multilingual', 'en', 'de', 'fr', 'ja']
|
['pierreguillou/DocLayNet-base']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['object-detection', 'vision', 'generated_from_trainer', 'DocLayNet', 'COCO', 'PDF', 'IBM', 'Financial-Reports', 'Finance', 'Manuals', 'Scientific-Articles', 'Science', 'Laws', 'Law', 'Regulations', 'Patents', 'Government-Tenders', 'object-detection', 'image-segmentation', 'token-classification']
| false | true | true | 6,045 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Document Understanding model (at line level)
This model is a fine-tuned version of [nielsr/lilt-xlm-roberta-base](https://huggingface.co/nielsr/lilt-xlm-roberta-base) with the [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0003
- Precision: 0.8584
- Recall: 0.8584
- F1: 0.8584
- Accuracy: 0.8584
## References
### Other model
- [Document Understanding model (at paragraph level)](https://huggingface.co/pierreguillou/lilt-xlm-roberta-base-finetuned-with-DocLayNet-base-at-paragraphlevel-ml512)
### Blog posts
- (02/16/2023) [Document AI | Inference APP and fine-tuning notebook for Document Understanding at paragraph level](https://medium.com/@pierre_guillou/document-ai-inference-app-and-fine-tuning-notebook-for-document-understanding-at-paragraph-level-c18d16e53cf8)
- (02/14/2023) [Document AI | Inference APP for Document Understanding at line level](https://medium.com/@pierre_guillou/document-ai-inference-app-for-document-understanding-at-line-level-a35bbfa98893)
- (02/10/2023) [Document AI | Document Understanding model at line level with LiLT, Tesseract and DocLayNet dataset](https://medium.com/@pierre_guillou/document-ai-document-understanding-model-at-line-level-with-lilt-tesseract-and-doclaynet-dataset-347107a643b8)
- (01/31/2023) [Document AI | DocLayNet image viewer APP](https://medium.com/@pierre_guillou/document-ai-doclaynet-image-viewer-app-3ac54c19956)
- (01/27/2023) [Document AI | Processing of DocLayNet dataset to be used by layout models of the Hugging Face hub (finetuning, inference)](https://medium.com/@pierre_guillou/document-ai-processing-of-doclaynet-dataset-to-be-used-by-layout-models-of-the-hugging-face-hub-308d8bd81cdb)
### Notebooks (paragraph level)
- [Document AI | Inference APP at paragraph level with a Document Understanding model (LiLT fine-tuned on DocLayNet dataset)](https://github.com/piegu/language-models/blob/master/Gradio_inference_on_LiLT_model_finetuned_on_DocLayNet_base_in_any_language_at_levelparagraphs_ml512.ipynb)
- [Document AI | Inference at paragraph level with a Document Understanding model (LiLT fine-tuned on DocLayNet dataset)](https://github.com/piegu/language-models/blob/master/inference_on_LiLT_model_finetuned_on_DocLayNet_base_in_any_language_at_levelparagraphs_ml512.ipynb)
- [Document AI | Fine-tune LiLT on DocLayNet base in any language at paragraph level (chunk of 512 tokens with overlap)](https://github.com/piegu/language-models/blob/master/Fine_tune_LiLT_on_DocLayNet_base_in_any_language_at_paragraphlevel_ml_512.ipynb)
### Notebooks (line level)
- [Document AI | Inference at line level with a Document Understanding model (LiLT fine-tuned on DocLayNet dataset)](https://github.com/piegu/language-models/blob/master/inference_on_LiLT_model_finetuned_on_DocLayNet_base_in_any_language_at_levellines_ml384.ipynb)
- [Document AI | Inference APP at line level with a Document Understanding model (LiLT fine-tuned on DocLayNet dataset)](https://github.com/piegu/language-models/blob/master/Gradio_inference_on_LiLT_model_finetuned_on_DocLayNet_base_in_any_language_at_levellines_ml384.ipynb)
- [Document AI | Fine-tune LiLT on DocLayNet base in any language at line level (chunk of 384 tokens with overlap)](https://github.com/piegu/language-models/blob/master/Fine_tune_LiLT_on_DocLayNet_base_in_any_language_at_linelevel_ml_384.ipynb)
- [DocLayNet image viewer APP](https://github.com/piegu/language-models/blob/master/DocLayNet_image_viewer_APP.ipynb)
- [Processing of DocLayNet dataset to be used by layout models of the Hugging Face hub (finetuning, inference)](processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb)
### APP
You can test this model with this APP in Hugging Face Spaces: [Inference APP for Document Understanding at line level (v1)](https://huggingface.co/spaces/pierreguillou/Inference-APP-Document-Understanding-at-linelevel-v1).

### DocLayNet dataset
[DocLayNet dataset](https://github.com/DS4SD/DocLayNet) (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories.
To date, the dataset can be downloaded through direct links or via the Hugging Face datasets library:
- direct links: [doclaynet_core.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_core.zip) (28 GiB), [doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip) (7.5 GiB)
- Hugging Face dataset library: [dataset DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet)
Paper: [DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis](https://arxiv.org/abs/2206.01062) (06/02/2022)
## Model description
The model was finetuned at **line level on chunks of 384 tokens with an overlap of 128 tokens**. Thus, the model was trained with all layout and text data of all pages of the dataset.
At inference time, the label of each line bounding box is obtained from the highest-probability prediction.
## Inference
See notebook: [Document AI | Inference at line level with a Document Understanding model (LiLT fine-tuned on DocLayNet dataset)](https://github.com/piegu/language-models/blob/master/inference_on_LiLT_model_finetuned_on_DocLayNet_base_in_any_language_at_levellines_ml384.ipynb)
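For a rough idea of the inputs involved (the notebook above handles the full OCR, 384-token chunking and label aggregation; the words and bounding boxes below are dummy values on the 0-1000 scale expected by LiLT):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "pierreguillou/lilt-xlm-roberta-base-finetuned-with-DocLayNet-base-at-linelevel-ml384"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

words = ["Quarterly", "report", "2022"]
boxes = [[60, 40, 200, 60], [210, 40, 320, 60], [60, 70, 150, 90]]  # dummy boxes, 0-1000 scale

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# LiLT expects one bounding box per token: repeat each word box for its sub-tokens
bbox = [[0, 0, 0, 0] if i is None else boxes[i] for i in encoding.word_ids(0)]
encoding["bbox"] = torch.tensor([bbox])

with torch.no_grad():
    logits = model(**encoding).logits
predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```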
## Training and evaluation data
See notebook: [Document AI | Fine-tune LiLT on DocLayNet base in any language at line level (chunk of 384 tokens with overlap)](https://github.com/piegu/language-models/blob/master/Fine_tune_LiLT_on_DocLayNet_base_in_any_language_at_linelevel_ml_384.ipynb)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7223 | 0.21 | 500 | 0.7765 | 0.7741 | 0.7741 | 0.7741 | 0.7741 |
| 0.4469 | 0.42 | 1000 | 0.5914 | 0.8312 | 0.8312 | 0.8312 | 0.8312 |
| 0.3819 | 0.62 | 1500 | 0.8745 | 0.8102 | 0.8102 | 0.8102 | 0.8102 |
| 0.3361 | 0.83 | 2000 | 0.6991 | 0.8337 | 0.8337 | 0.8337 | 0.8337 |
| 0.2784 | 1.04 | 2500 | 0.7513 | 0.8119 | 0.8119 | 0.8119 | 0.8119 |
| 0.2377 | 1.25 | 3000 | 0.9048 | 0.8166 | 0.8166 | 0.8166 | 0.8166 |
| 0.2401 | 1.45 | 3500 | 1.2411 | 0.7939 | 0.7939 | 0.7939 | 0.7939 |
| 0.2054 | 1.66 | 4000 | 1.1594 | 0.8080 | 0.8080 | 0.8080 | 0.8080 |
| 0.1909 | 1.87 | 4500 | 0.7545 | 0.8425 | 0.8425 | 0.8425 | 0.8425 |
| 0.1704 | 2.08 | 5000 | 0.8567 | 0.8318 | 0.8318 | 0.8318 | 0.8318 |
| 0.1294 | 2.29 | 5500 | 0.8486 | 0.8489 | 0.8489 | 0.8489 | 0.8489 |
| 0.134 | 2.49 | 6000 | 0.7682 | 0.8573 | 0.8573 | 0.8573 | 0.8573 |
| 0.1354 | 2.7 | 6500 | 0.9871 | 0.8256 | 0.8256 | 0.8256 | 0.8256 |
| 0.1239 | 2.91 | 7000 | 1.1430 | 0.8189 | 0.8189 | 0.8189 | 0.8189 |
| 0.1012 | 3.12 | 7500 | 0.8272 | 0.8386 | 0.8386 | 0.8386 | 0.8386 |
| 0.0788 | 3.32 | 8000 | 1.0288 | 0.8365 | 0.8365 | 0.8365 | 0.8365 |
| 0.0802 | 3.53 | 8500 | 0.7197 | 0.8849 | 0.8849 | 0.8849 | 0.8849 |
| 0.0861 | 3.74 | 9000 | 1.1420 | 0.8320 | 0.8320 | 0.8320 | 0.8320 |
| 0.0639 | 3.95 | 9500 | 0.9563 | 0.8585 | 0.8585 | 0.8585 | 0.8585 |
| 0.0464 | 4.15 | 10000 | 1.0768 | 0.8511 | 0.8511 | 0.8511 | 0.8511 |
| 0.0412 | 4.36 | 10500 | 1.1184 | 0.8439 | 0.8439 | 0.8439 | 0.8439 |
| 0.039 | 4.57 | 11000 | 0.9634 | 0.8636 | 0.8636 | 0.8636 | 0.8636 |
| 0.0469 | 4.78 | 11500 | 0.9585 | 0.8634 | 0.8634 | 0.8634 | 0.8634 |
| 0.0395 | 4.99 | 12000 | 1.0003 | 0.8584 | 0.8584 | 0.8584 | 0.8584 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
newbie4000/poca-SoccerTwos
|
newbie4000
| null | 25 | 175 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 844 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: newbie4000/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
thonyyy/thonyyy-BERT
|
thonyyy
|
bert
| 8 | 10 |
transformers
| 0 | null | false | true | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,097 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# thonyyy-BERT
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.4325
- Validation Loss: 8.5317
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.4325 | 8.5317 | 0 |
### Framework versions
- Transformers 4.27.0.dev0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
manuelanayantarajeyaraj/taro_model_query_res_multilingual_agent_msg
|
manuelanayantarajeyaraj
|
distilbert
| 10 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,310 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# taro_model_query_res_multilingual_agent_msg
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5844
- Accuracy: 0.7337
## Model description
More information needed
## Intended uses & limitations
More information needed
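A minimal inference sketch (the label meanings are not documented here, so the raw `id2label` names are printed; the example message is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "manuelanayantarajeyaraj/taro_model_query_res_multilingual_agent_msg"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Merci pour votre réponse rapide !", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})
```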
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 79 | 0.6398 | 0.6145 |
| No log | 2.0 | 158 | 0.5844 | 0.7337 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BeardedJohn/bert-finetuned-ner-per-v7
|
BeardedJohn
|
bert
| 8 | 12 |
transformers
| 0 |
token-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,252 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-per-v7
This model is a fine-tuned version of [BeardedJohn/bert-finetuned-ner-ubb-conll-endava-only-misc-v2](https://huggingface.co/BeardedJohn/bert-finetuned-ner-ubb-conll-endava-only-misc-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 313, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.11.0
|
mrigendraagrawal/dqn-SpaceInvadersNoFrameskip-v4
|
mrigendraagrawal
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,241 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrigendraagrawal -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrigendraagrawal -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mrigendraagrawal
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
bonadio/a2c-AntBulletEnv-v0
|
bonadio
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
HasinMDG/SetFit_Labse_IPTC_Classifier_V2
|
HasinMDG
|
bert
| 15 | 20 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,446 |
# HasinMDG/SetFit_Labse_IPTC_Classifier_V2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/SetFit_Labse_IPTC_Classifier_V2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
iubeda/dqn-SpaceInvadersNoFrameskip-v4
|
iubeda
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,211 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga iubeda -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga iubeda -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga iubeda
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
MoritzLaurer/xlm-v-base-mnli-xnli
|
MoritzLaurer
|
xlm-roberta
| 8 | 53 |
transformers
| 13 |
zero-shot-classification
| true | false | false |
mit
|
['multilingual', 'en', 'ar', 'bg', 'de', 'el', 'es', 'fr', 'hi', 'ru', 'sw', 'th', 'tr', 'ur', 'vi', 'zh']
|
['multi_nli', 'xnli']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
| false | true | true | 6,703 |
# Multilingual XLM-V-base-mnli-xnli
## Model description
This multilingual model can perform natural language inference (NLI) on 116 languages and is therefore also
suitable for multilingual zero-shot classification. The underlying XLM-V-base model was created
by Meta AI and pretrained on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100).
It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages,
as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
XLM-V-base was published on 23.01.2023 in [this paper](https://arxiv.org/pdf/2301.10472.pdf).
Its main innovation is a larger and better vocabulary: previous multilingual models had a vocabulary of 250 000 tokens,
while XLM-V 'knows' 1 million tokens. The improved vocabulary allows for better representations of more languages.
### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/xlm-v-base-mnli-xnli")
sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/xlm-v-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on the XNLI development dataset and the MNLI train dataset.
The XNLI development set consists of 2490 professionally translated texts from English
to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)).
Note that the XNLI contains a training set of 15 machine translated versions of the MNLI dataset for 15 languages,
but due to quality issues with these machine translations, this model was only trained on the professional translations
from the XNLI development set and the original English MNLI training set (392 702 texts).
Not using machine-translated texts avoids overfitting the model to the 15 languages,
avoids catastrophic forgetting of the roughly 101 other languages XLM-V was pre-trained on,
and significantly reduces training costs.
### Training procedure
xlm-v-base-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=120, # batch size for evaluation
    warmup_ratio=0.06,                  # fraction of training steps used for learning rate warmup
weight_decay=0.01, # strength of weight decay
)
```
### Eval results
The model was evaluated on the XNLI test set on 15 languages (5010 texts per language, 75150 in total).
Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training data
in the specific language (cross-lingual transfer). This means that the model is also able to do NLI on
the roughly 101 other languages XLM-V was trained on, but performance is most likely lower than for the languages available in XNLI.
Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English,
the authors have most likely made a mistake during testing, since none of the latest papers (of mostly larger models) shows a multilingual average performance
of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).
The average XNLI performance of XLM-V reported in the paper is 0.76 ([see table 2](https://arxiv.org/pdf/2301.10472.pdf)).
This reimplementation has an average performance of 0.78.
This increase in performance is probably thanks to the addition of MNLI in the training data.
Note that [mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) has an average
performance of 0.808 and is smaller (3GB for XLM-V vs. 560MB for mDeBERTa) and is faster (thanks to mDeBERTa's smaller vocabulary).
This difference comes probably from mDeBERTa-v3's improved pre-training objective.
Depending on the task, it is probably better to use [mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli),
but XLM-V could be better on some languages based on its improved vocabulary.
|Datasets|average|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.780|0.757|0.808|0.796|0.79|0.856|0.814|0.806|0.751|0.782|0.725|0.757|0.766|0.729|0.784|0.782|
|Speed GPU A100 (text/sec)|na|3501.0|3324.0|3438.0|3174.0|3713.0|3500.0|3129.0|3042.0|3419.0|3468.0|3782.0|3772.0|3099.0|3117.0|4217.0|
|Datasets|mnli_m (en)|mnli_mm (en)|
| :---: | :---: | :---: |
|Accuracy|0.852|0.854|
|Speed GPU A100 (text/sec)|2098.0|2170.0|
## Limitations and bias
Please consult the original XLM-V paper and literature on different NLI datasets for potential biases.
## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022.
‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’.
Preprint, June. Open Science Framework. https://osf.io/74b8k.
## Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
|
figfig/restaurant_local_test_model
|
figfig
|
whisper
| 14 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 4,907 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
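Based only on the repository metadata (a Whisper checkpoint for automatic speech recognition), a hedged starting point could be:

```python
from transformers import pipeline

# Assumes this repo holds a Whisper ASR checkpoint, as the repository metadata suggests
asr = pipeline(
    "automatic-speech-recognition",
    model="figfig/restaurant_local_test_model",
)
print(asr("sample.wav"))  # path to a local audio file (illustrative)
```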
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
yujiepan/bert-base-uncased-sst2-int8-unstructured80-30epoch
|
yujiepan
|
bert
| 20 | 14 |
transformers
| 0 | null | true | false | false | null |
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,437 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Joint magnitude pruning, quantization and distillation on BERT-base/SST-2
This model applies unstructured magnitude pruning, quantization, and distillation jointly while fine-tuning on the GLUE SST-2 dataset.
It achieves the following results on the evaluation set:
- Torch loss: 0.4116
- Torch accuracy: 0.9140
- OpenVINO IR accuracy: 0.9106
- Sparsity in transformer block linear layers: 0.80
## Setup
```
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
git clone https://github.com/yujiepan-work/optimum-intel.git
cd optimum-intel
git checkout -b "magnitude-pruning" 01927af543eaea8678671bf8f4eb78fdb29f8930
pip install -e .[openvino,nncf]
cd examples/openvino/text-classification/
pip install -r requirements.txt
pip install wandb # optional
```
## NNCF config
See `nncf_config.json` in this repo.
## Run
We use a single GPU for training.
```
NNCFCFG=/path/to/nncf/config
python run_glue.py \
--lr_scheduler_type cosine_with_restarts \
--cosine_cycle_ratios 8,6,4,4,4,4 \
--cosine_cycle_decays 1,1,1,1,1,1 \
--save_best_model_after_epoch -1 \
--save_best_model_after_sparsity 0.7999 \
--model_name_or_path textattack/bert-base-uncased-SST-2 \
--teacher_model_or_path yoshitomo-matsubara/bert-large-uncased-sst2 \
--distillation_temperature 2 \
--task_name sst2 \
--nncf_compression_config $NNCFCFG \
--distillation_weight 0.95 \
--output_dir /tmp/bert-base-uncased-sst2-int8-unstructured80-30epoch \
--run_name bert-base-uncased-sst2-int8-unstructured80-30epoch \
--overwrite_output_dir \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--learning_rate 5e-05 \
--optim adamw_torch \
--num_train_epochs 30 \
--logging_steps 1 \
--evaluation_strategy steps \
--eval_steps 250 \
--save_strategy steps \
--save_steps 250 \
--save_total_limit 1 \
--fp16 \
--seed 1
```
The best model checkpoint is stored in the `best_model` folder. Here we only upload that checkpoint folder together with some config files.
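As a hedged usage sketch, the uploaded PyTorch checkpoint can be exercised with plain `transformers`; this does not reproduce the OpenVINO IR numbers reported above:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "yujiepan/bert-base-uncased-sst2-int8-unstructured80-30epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Example SST-2-style sentence; the label mapping comes from the model config.
inputs = tokenizer("a charming and often affecting journey", return_tensors="pt")
with torch.no_grad():
    predicted = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```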
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
For a full description of the environment, please refer to `pip-requirements.txt` and `conda-requirements.txt`.
|
bonadio/a2c-PandaReachDense-v2
|
bonadio
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed to follow the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is an assumption; adjust it to the actual file in this repo.
checkpoint = load_from_hub("bonadio/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
enankobh1/whisper-tiny-ASR
|
enankobh1
|
whisper
| 18 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,451 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-ASR
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8363
- Wer: 60.1936
## Model description
More information needed
## Intended uses & limitations
More information needed
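While the card is being completed, here is a minimal transcription sketch, assuming the repo ships the Whisper processor files alongside the model; the waveform below is a silent placeholder:

```python
import numpy as np
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("enankobh1/whisper-tiny-ASR")
model = WhisperForConditionalGeneration.from_pretrained("enankobh1/whisper-tiny-ASR")

# Placeholder input: one second of silence at 16 kHz; replace with a real waveform.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```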
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1494 | 6.8 | 1000 | 0.4732 | 50.4342 |
| 0.0113 | 13.61 | 2000 | 0.6944 | 62.9461 |
| 0.0024 | 20.41 | 3000 | 0.8042 | 59.8032 |
| 0.0014 | 27.21 | 4000 | 0.8363 | 60.1936 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DataIntelligenceTeam/Bol-2.0
|
DataIntelligenceTeam
|
layoutlmv3
| 12 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
cc-by-nc-sa-4.0
| null |
['sroie']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,479 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bol-2.0
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0505
- Precision: 0.4410
- Recall: 0.6611
- F1: 0.5291
- Accuracy: 0.8057
## Model description
More information needed
## Intended uses & limitations
More information needed
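A minimal inference sketch, assuming the base LayoutLMv3 processor with built-in OCR (which needs `pytesseract` and Tesseract installed) and a placeholder document image; this is illustrative rather than the exact pipeline used for the results above:

```python
from PIL import Image
from transformers import AutoModelForTokenClassification, AutoProcessor

# The processor is loaded from the base model in case this repo does not ship one.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("DataIntelligenceTeam/Bol-2.0")

image = Image.open("document.png").convert("RGB")  # placeholder scanned document
encoding = processor(image, return_tensors="pt")
predictions = model(**encoding).logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```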
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.56 | 100 | 0.8280 | 0.2917 | 0.3095 | 0.3003 | 0.7728 |
| No log | 3.12 | 200 | 0.7993 | 0.3471 | 0.4947 | 0.4080 | 0.7345 |
| No log | 4.69 | 300 | 0.7697 | 0.3079 | 0.4168 | 0.3542 | 0.7439 |
| No log | 6.25 | 400 | 0.5986 | 0.4172 | 0.5937 | 0.4900 | 0.8229 |
| 0.4632 | 7.81 | 500 | 0.7033 | 0.4102 | 0.5768 | 0.4794 | 0.8186 |
| 0.4632 | 9.38 | 600 | 0.7623 | 0.4448 | 0.6442 | 0.5262 | 0.8186 |
| 0.4632 | 10.94 | 700 | 0.7781 | 0.4052 | 0.5937 | 0.4816 | 0.8100 |
| 0.4632 | 12.5 | 800 | 0.8595 | 0.3816 | 0.6105 | 0.4696 | 0.7970 |
| 0.4632 | 14.06 | 900 | 1.0559 | 0.3531 | 0.5768 | 0.4380 | 0.7528 |
| 0.0077 | 15.62 | 1000 | 0.8237 | 0.4245 | 0.5684 | 0.4860 | 0.8143 |
| 0.0077 | 17.19 | 1100 | 0.9775 | 0.4064 | 0.5853 | 0.4797 | 0.7906 |
| 0.0077 | 18.75 | 1200 | 1.0163 | 0.4088 | 0.5853 | 0.4814 | 0.7852 |
| 0.0077 | 20.31 | 1300 | 1.0211 | 0.4029 | 0.5768 | 0.4745 | 0.7841 |
| 0.0077 | 21.88 | 1400 | 0.9575 | 0.4330 | 0.6526 | 0.5206 | 0.8067 |
| 0.0028 | 23.44 | 1500 | 0.9099 | 0.4212 | 0.6526 | 0.5120 | 0.8089 |
| 0.0028 | 25.0 | 1600 | 0.7388 | 0.5439 | 0.6779 | 0.6036 | 0.8693 |
| 0.0028 | 26.56 | 1700 | 0.9506 | 0.4423 | 0.6779 | 0.5353 | 0.8111 |
| 0.0028 | 28.12 | 1800 | 0.9932 | 0.4312 | 0.6863 | 0.5297 | 0.7927 |
| 0.0028 | 29.69 | 1900 | 1.0093 | 0.4368 | 0.6695 | 0.5287 | 0.7981 |
| 0.0034 | 31.25 | 2000 | 0.8265 | 0.4595 | 0.6695 | 0.5450 | 0.8164 |
| 0.0034 | 32.81 | 2100 | 0.7445 | 0.4618 | 0.6611 | 0.5437 | 0.8466 |
| 0.0034 | 34.38 | 2200 | 0.9809 | 0.4417 | 0.6695 | 0.5322 | 0.7830 |
| 0.0034 | 35.94 | 2300 | 1.0108 | 0.4503 | 0.6863 | 0.5438 | 0.8057 |
| 0.0034 | 37.5 | 2400 | 1.0101 | 0.4503 | 0.6863 | 0.5438 | 0.8035 |
| 0.0013 | 39.06 | 2500 | 0.8796 | 0.4649 | 0.6695 | 0.5487 | 0.8164 |
| 0.0013 | 40.62 | 2600 | 0.9220 | 0.4738 | 0.6863 | 0.5606 | 0.8208 |
| 0.0013 | 42.19 | 2700 | 1.2670 | 0.4128 | 0.6779 | 0.5131 | 0.7841 |
| 0.0013 | 43.75 | 2800 | 1.1287 | 0.4351 | 0.6779 | 0.5300 | 0.8003 |
| 0.0013 | 45.31 | 2900 | 1.1269 | 0.4375 | 0.6779 | 0.5318 | 0.8013 |
| 0.001 | 46.88 | 3000 | 1.1108 | 0.4375 | 0.6779 | 0.5318 | 0.8024 |
| 0.001 | 48.44 | 3100 | 1.1301 | 0.4321 | 0.6695 | 0.5252 | 0.7981 |
| 0.001 | 50.0 | 3200 | 1.0891 | 0.4522 | 0.6779 | 0.5425 | 0.8035 |
| 0.001 | 51.56 | 3300 | 1.0414 | 0.4410 | 0.6611 | 0.5291 | 0.8067 |
| 0.001 | 53.12 | 3400 | 1.0495 | 0.4410 | 0.6611 | 0.5291 | 0.8057 |
| 0.0009 | 54.69 | 3500 | 1.0505 | 0.4410 | 0.6611 | 0.5291 | 0.8057 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.2.2
- Tokenizers 0.13.2
|
yujiepan/bert-base-uncased-sst2-PTQ
|
yujiepan
|
bert
| 19 | 11 |
transformers
| 0 | null | true | false | false | null |
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,096 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2-PTQ
This model applies simple post-training quantization to [textattack/bert-base-uncased-SST-2](https://huggingface.co/textattack/bert-base-uncased-SST-2) using the GLUE SST-2 dataset.
It achieves the following results on the evaluation set:
- torch loss: 0.2140
- torch accuracy: 0.9243
- OpenVINO IR accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
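A minimal sketch for trying the PyTorch weights with a `transformers` pipeline; evaluating the quantized OpenVINO IR additionally requires `optimum-intel` and is not shown here:

```python
from transformers import pipeline

# Assumes the repo's PyTorch weights; the input sentence is illustrative.
classifier = pipeline("text-classification", model="yujiepan/bert-base-uncased-sst2-PTQ")
print(classifier("a gorgeous, witty, seductive movie"))
```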
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
ThatGuyVanquish/mt5-small-finetuned-rabbi-kook-nave-4
|
ThatGuyVanquish
|
mt5
| 11 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,196 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-rabbi-kook-nave-4
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0 | 1.0 | 1784 | nan |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.11.0
|
ThatGuyVanquish/mt5-base-finetuned-rabbi-kook-nave-4
|
ThatGuyVanquish
|
mt5
| 11 | 5 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,397 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-rabbi-kook-nave-4
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0 | 1.0 | 1784 | nan |
| 0.0 | 2.0 | 3568 | nan |
| 0.0 | 3.0 | 5352 | nan |
| 0.0 | 4.0 | 7136 | nan |
| 0.0 | 5.0 | 8920 | nan |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.11.0
|
verderis/ppo-LunarLander-v2-exp0
|
verderis
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed to follow the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is an assumption; adjust it to the actual file in this repo.
checkpoint = load_from_hub("verderis/ppo-LunarLander-v2-exp0", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
rohitp1/distil_noisy_teacher
|
rohitp1
|
wav2vec2
| 10 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,464 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil_noisy_teacher
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4992
- Wer: 0.0478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 32.3439 | 22.22 | 1000 | 8.8146 | 0.1150 |
| 16.0955 | 44.44 | 2000 | 7.5152 | 0.0890 |
| 8.5751 | 66.67 | 3000 | 6.0369 | 0.0602 |
| 4.3989 | 88.89 | 4000 | 5.4992 | 0.0478 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.11.0
|
LarryAIDraw/lenaeightysix-21000
|
LarryAIDraw
| null | 3 | 0 | null | 0 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 268 |
My second hypernetwork. I think it's so-so, but it has some effect.
masterpiece,best quality,art by lenaeightysix,1girl,ahoge,very long hair,silver hair, long sleeves,hair between eyes, bangs,medium breasts, buttons,belt,thighhighs,military uniform,pantyhose,looking at viewer
|
mrm8488/santacoder-finetuned-the-stack-rust
|
mrm8488
|
gpt2
| 19 | 3 |
transformers
| 0 |
text-generation
| true | false | false |
openrail
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,260 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# santacoder-finetuned-the-stack-rust
This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7999
## Model description
More information needed
## Intended uses & limitations
More information needed
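A hedged generation sketch; the prompt is illustrative, and SantaCoder-derived checkpoints typically need `trust_remote_code=True` because they rely on custom modeling code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrm8488/santacoder-finetuned-the-stack-rust"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "fn fibonacci(n: u32) -> u32 {"  # illustrative Rust prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```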
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2075 | 0.05 | 500 | 1.0610 |
| 1.79 | 0.1 | 1000 | 1.0754 |
| 1.2441 | 0.15 | 1500 | 1.0339 |
| 1.1709 | 0.2 | 2000 | 0.9829 |
| 0.7645 | 0.25 | 2500 | 0.9738 |
| 1.0381 | 0.3 | 3000 | 0.9536 |
| 1.0625 | 0.35 | 3500 | 0.9268 |
| 0.78 | 0.4 | 4000 | 0.9130 |
| 0.9294 | 0.45 | 4500 | 0.9001 |
| 0.9767 | 0.5 | 5000 | 0.8857 |
| 5.7027 | 0.55 | 5500 | 0.8728 |
| 0.9476 | 0.6 | 6000 | 0.8556 |
| 0.6185 | 0.65 | 6500 | 0.8404 |
| 0.5057 | 0.7 | 7000 | 0.8328 |
| 0.6451 | 0.75 | 7500 | 0.8199 |
| 0.8298 | 0.8 | 8000 | 0.8111 |
| 0.2447 | 0.85 | 8500 | 0.8069 |
| 0.8177 | 0.9 | 9000 | 0.8020 |
| 0.7184 | 0.95 | 9500 | 0.8003 |
| 0.9166 | 1.0 | 10000 | 0.7999 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
96harsh56/berta-finetuned-subjqa
|
96harsh56
|
bert
| 12 | 8 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 920 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# berta-finetuned-subjqa
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
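A minimal question-answering sketch; the question/context pair is illustrative, in the spirit of SubjQA-style subjective questions:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="96harsh56/berta-finetuned-subjqa")
result = qa(
    question="How is the battery life?",
    context="The battery life of this laptop is excellent, easily lasting a full workday.",
)
print(result["answer"], result["score"])
```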
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
fermaat/a2c-AntBulletEnv-v0
|
fermaat
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed to follow the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is an assumption; adjust it to the actual file in this repo.
checkpoint = load_from_hub("fermaat/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
mjaydenkim/autotrain-ma-detection-test-3372892714
|
mjaydenkim
|
bert
| 8 | 10 |
transformers
| 0 |
text-classification
| true | false | false | null |
['unk']
|
['mjaydenkim/autotrain-data-ma-detection-test']
|
{'emissions': 1.2555854454965398}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['autotrain', 'text-classification']
| false | true | true | 964 |
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 3372892714
- CO2 Emissions (in grams): 1.2556
## Validation Metrics
- Loss: 0.153
- Accuracy: 0.941
- Precision: 0.892
- Recall: 0.966
- AUC: 0.988
- F1: 0.928
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/mjaydenkim/autotrain-ma-detection-test-3372892714
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mjaydenkim/autotrain-ma-detection-test-3372892714", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mjaydenkim/autotrain-ma-detection-test-3372892714", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
jamesdolezal/CTransPath
|
jamesdolezal
| null | 3 | 0 | null | 0 | null | false | false | false |
gpl-3.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 764 |
[UNOFFICIAL]
This is the pretrained CTransPath model that accompanies the manuscript *Transformer-based Unsupervised Contrastive Learning for Histopathological Image Classification*, published by Xiyue Wang *et al.* in Medical Image Analysis (October 2022, DOI: https://doi.org/10.1016/j.media.2022.102559).
This model has been uploaded to Hugging Face for easier sharing; it has not been verified by the original authors and is in no way affiliated with them.
The official pretrained model is available on the official GitHub repository (https://github.com/Xiyue-Wang/TransPath) and Google Drive (https://drive.google.com/file/d/1DoDx_70_TLj98gTf6YTXnu4tFhsFocDX/view?usp=sharing). The license as included in the original repository is GPL-3.0.
|
albertqueralto/a2c-AntBulletEnv-v0
|
albertqueralto
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed to follow the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is an assumption; adjust it to the actual file in this repo.
checkpoint = load_from_hub("albertqueralto/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
sgoodfriend/dqn-QbertNoFrameskip-v4
|
sgoodfriend
| null | 65 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['QbertNoFrameskip-v4', 'dqn', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 5,027 |
# **DQN** Agent playing **QbertNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **QbertNoFrameskip-v4** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/099h4lvj.
## Training Results
This model was trained from 3 trainings of **DQN** agents using different initial seeds. These agents were trained by checking out [1d4094f](https://github.com/sgoodfriend/rl-algo-impls/tree/1d4094fbcc9082de7f53f4348dd4c7c354152907). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:--------------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| dqn | QbertNoFrameskip-v4 | 1 | 16065.6 | 205.752 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/tgo242om) |
| dqn | QbertNoFrameskip-v4 | 2 | 15668.8 | 179.3 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/i26qs27v) |
| dqn | QbertNoFrameskip-v4 | 3 | 15200 | 0 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/kpeu6yxt) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assume you have a Weights & Biases project to upload runs to.
By default, training goes to an rl-algo-impls project, while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be sufficiently different to not be able to reproduce similar
results. You might need to check out the commit the agent was trained on:
[1d4094f](https://github.com/sgoodfriend/rl-algo-impls/tree/1d4094fbcc9082de7f53f4348dd4c7c354152907).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/tgo242om
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to check out the
commit the agent was trained on: [1d4094f](https://github.com/sgoodfriend/rl-algo-impls/tree/1d4094fbcc9082de7f53f4348dd4c7c354152907). While
training is deterministic, different hardware will give different results.
```
python train.py --algo dqn --env QbertNoFrameskip-v4 --seed 1
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/099h4lvj were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/dqn.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: dqn
algo_hyperparams:
batch_size: 32
buffer_size: 100000
exploration_final_eps: 0.01
exploration_fraction: 0.1
gradient_steps: 2
learning_rate: 0.0001
learning_starts: 100000
target_update_interval: 1000
train_freq: 8
env: QbertNoFrameskip-v4
env_hyperparams:
frame_stack: 4
n_envs: 8
no_reward_timeout_steps: 1000
vec_env_class: subproc
eval_params:
deterministic: false
n_timesteps: 10000000
seed: 1
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_1d4094f
- host_192-9-147-166
```
|