modelId (stringlengths 4–81) | tags (list) | pipeline_tag (stringclasses, 17 values) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (stringlengths 51–438k)
---|---|---|---|---|---|---|
Culmenus/opus-mt-de-is-finetuned-de-to-is | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | This is `microsoft/layoutxlm-base` fine-tuned on the French split of XFUND for 1000 steps.
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc_1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- adapter-transformers
- roberta
datasets:
- glue
language:
- en
---
# Adapter `SALT-NLP/pfadapter-roberta-base-qqp-combined-value` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [glue](https://huggingface.co/datasets/glue/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("SALT-NLP/pfadapter-roberta-base-qqp-combined-value", source="hf", set_active=True)
```
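As a follow-up, here is a minimal sketch (not part of the original card) of how the loaded adapter could be used to score a question pair; it reuses the `model` object from the snippet above, and the label order (0 = not duplicate, 1 = duplicate) is an assumption about this checkpoint:
```python
from transformers import AutoTokenizer

# continuing from the loading snippet above, where `model` has the QQP adapter and head active
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer(
    "How can I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="pt",
)
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()  # assumed: 1 means the questions are duplicates
print(predicted_class)
```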
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
Culmenus/opus-mt-de-is-finetuned-de-to-is_nr2-finetuned-de-to-is_nr2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-09-19T09:45:22Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620356147237869
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1369
- F1: 0.8620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
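For reference, here is a minimal sketch (not from the original card) of how the hyperparameters above could be expressed as Hugging Face `TrainingArguments`; the model, datasets, and `Trainer` setup are assumed to be defined elsewhere, and the Adam betas/epsilon match the library defaults:
```python
from transformers import TrainingArguments

# hyperparameters copied from the list above; output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de",
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```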
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.26 | 1.0 | 525 | 0.1680 | 0.8168 |
| 0.126 | 2.0 | 1050 | 0.1389 | 0.8464 |
| 0.0801 | 3.0 | 1575 | 0.1369 | 0.8620 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
CuongLD/wav2vec2-large-xlsr-vietnamese | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"vi",
"dataset:common_voice, infore_25h",
"arxiv:2006.11477",
"arxiv:2006.13979",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: mit
---
### rilakkuma on Stable Diffusion
This is the `<rilakkuma>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:



|
CyberMuffin/DialoGPT-small-ChandlerBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- invoices
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-invoice
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: invoices
type: invoices
config: sroie
split: train
args: sroie
metrics:
- name: Precision
type: precision
value: 0.975
- name: Recall
type: recall
value: 0.975
- name: F1
type: f1
value: 0.975
- name: Accuracy
type: accuracy
value: 0.975
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-invoice
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the invoices dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2299
- Precision: 0.975
- Recall: 0.975
- F1: 0.975
- Accuracy: 0.975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:-----:|:--------:|
| No log | 14.29 | 100 | 0.1616 | 0.975 | 0.975 | 0.975 | 0.975 |
| No log | 28.57 | 200 | 0.1909 | 0.975 | 0.975 | 0.975 | 0.975 |
| No log | 42.86 | 300 | 0.2046 | 0.975 | 0.975 | 0.975 | 0.975 |
| No log | 57.14 | 400 | 0.2134 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.1239 | 71.43 | 500 | 0.2299 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.1239 | 85.71 | 600 | 0.2309 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.1239 | 100.0 | 700 | 0.2342 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.1239 | 114.29 | 800 | 0.2407 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.1239 | 128.57 | 900 | 0.2428 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.0007 | 142.86 | 1000 | 0.2449 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.0007 | 157.14 | 1100 | 0.2465 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.0007 | 171.43 | 1200 | 0.2488 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.0007 | 185.71 | 1300 | 0.2515 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.0007 | 200.0 | 1400 | 0.2525 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.0004 | 214.29 | 1500 | 0.2540 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.0004 | 228.57 | 1600 | 0.2557 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.0004 | 242.86 | 1700 | 0.2564 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.0004 | 257.14 | 1800 | 0.2570 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.0004 | 271.43 | 1900 | 0.2573 | 0.975 | 0.975 | 0.975 | 0.975 |
| 0.0003 | 285.71 | 2000 | 0.2574 | 0.975 | 0.975 | 0.975 | 0.975 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Cyrell/Cyrell | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
datasets:
- bc2gm_corpus
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electramed-small-BC2GM-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: bc2gm_corpus
type: bc2gm_corpus
config: bc2gm_corpus
split: train
args: bc2gm_corpus
metrics:
- name: Precision
type: precision
value: 0.7652071701439906
- name: Recall
type: recall
value: 0.823399209486166
- name: F1
type: f1
value: 0.7932373771989948
- name: Accuracy
type: accuracy
value: 0.9756735092182762
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electramed-small-BC2GM-ner
This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the bc2gm_corpus dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0720
- Precision: 0.7652
- Recall: 0.8234
- F1: 0.7932
- Accuracy: 0.9757
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.085 | 1.0 | 782 | 0.1112 | 0.6147 | 0.7777 | 0.6867 | 0.9634 |
| 0.0901 | 2.0 | 1564 | 0.0825 | 0.7141 | 0.8028 | 0.7559 | 0.9720 |
| 0.0303 | 3.0 | 2346 | 0.0759 | 0.7310 | 0.8049 | 0.7662 | 0.9724 |
| 0.0037 | 4.0 | 3128 | 0.0735 | 0.7430 | 0.8168 | 0.7781 | 0.9735 |
| 0.0325 | 5.0 | 3910 | 0.0723 | 0.7571 | 0.8142 | 0.7846 | 0.9748 |
| 0.0582 | 6.0 | 4692 | 0.0701 | 0.7664 | 0.8144 | 0.7897 | 0.9759 |
| 0.0073 | 7.0 | 5474 | 0.0701 | 0.7711 | 0.8212 | 0.7953 | 0.9761 |
| 0.1031 | 8.0 | 6256 | 0.0712 | 0.7602 | 0.8258 | 0.7916 | 0.9756 |
| 0.0248 | 9.0 | 7038 | 0.0722 | 0.7691 | 0.8231 | 0.7952 | 0.9759 |
| 0.0136 | 10.0 | 7820 | 0.0720 | 0.7652 | 0.8234 | 0.7932 | 0.9757 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
D3vil/DialoGPT-smaall-harrypotter | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### indiana on Stable Diffusion
This is the `<indiana>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
D3vil/DialoGPT-smaall-harrypottery | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-09-19T10:39:30Z | ---
tags:
- generated_from_trainer
datasets:
- ade_drug_effect_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electramed-small-ADE-DRUG-EFFECT-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: ade_drug_effect_ner
type: ade_drug_effect_ner
config: ade
split: train
args: ade
metrics:
- name: Precision
type: precision
value: 0.7745054945054946
- name: Recall
type: recall
value: 0.6555059523809523
- name: F1
type: f1
value: 0.7100544025790851
- name: Accuracy
type: accuracy
value: 0.9310355073540336
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electramed-small-ADE-DRUG-EFFECT-ner
This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the ade_drug_effect_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
- Precision: 0.7745
- Recall: 0.6555
- F1: 0.7101
- Accuracy: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4498 | 1.0 | 336 | 0.3042 | 0.5423 | 0.6295 | 0.5826 | 0.9114 |
| 0.2572 | 2.0 | 672 | 0.2146 | 0.7596 | 0.6194 | 0.6824 | 0.9276 |
| 0.1542 | 3.0 | 1008 | 0.1894 | 0.7806 | 0.6168 | 0.6891 | 0.9299 |
| 0.1525 | 4.0 | 1344 | 0.1771 | 0.7832 | 0.625 | 0.6952 | 0.9309 |
| 0.1871 | 5.0 | 1680 | 0.1723 | 0.7271 | 0.6920 | 0.7091 | 0.9304 |
| 0.1425 | 6.0 | 2016 | 0.1683 | 0.7300 | 0.6979 | 0.7136 | 0.9297 |
| 0.1638 | 7.0 | 2352 | 0.1654 | 0.7432 | 0.6771 | 0.7086 | 0.9306 |
| 0.1592 | 8.0 | 2688 | 0.1635 | 0.7613 | 0.6585 | 0.7062 | 0.9305 |
| 0.1882 | 9.0 | 3024 | 0.1625 | 0.7858 | 0.6373 | 0.7038 | 0.9309 |
| 0.1339 | 10.0 | 3360 | 0.1630 | 0.7745 | 0.6555 | 0.7101 | 0.9310 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
D3xter1922/distilbert-base-uncased-finetuned-cola | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: roberta-fine-sentiment-hineng-concat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-fine-sentiment-hineng-concat
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1126
- Accuracy: 0.8669
- Precision: 0.8667
- Recall: 0.8669
- F1: 0.8668
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5814 | 1.0 | 4293 | 0.6920 | 0.8249 | 0.8304 | 0.8249 | 0.8257 |
| 0.5169 | 2.0 | 8586 | 0.5919 | 0.8459 | 0.8499 | 0.8459 | 0.8465 |
| 0.4274 | 3.0 | 12879 | 0.7775 | 0.8512 | 0.8513 | 0.8512 | 0.8504 |
| 0.3246 | 4.0 | 17172 | 0.7757 | 0.8522 | 0.8593 | 0.8522 | 0.8528 |
| 0.22 | 5.0 | 21465 | 0.9306 | 0.8574 | 0.8574 | 0.8574 | 0.8574 |
| 0.1226 | 6.0 | 25758 | 0.9663 | 0.8627 | 0.8632 | 0.8627 | 0.8628 |
| 0.085 | 7.0 | 30051 | 1.0266 | 0.8653 | 0.8651 | 0.8653 | 0.8651 |
| 0.0713 | 8.0 | 34344 | 1.1126 | 0.8669 | 0.8667 | 0.8669 | 0.8668 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DARKVIP3R/DialoGPT-medium-Anakin | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
tags:
- generated_from_trainer
datasets:
- ade_drug_dosage_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electramed-small-ADE-DRUG-DOSAGE-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: ade_drug_dosage_ner
type: ade_drug_dosage_ner
config: ade
split: train
args: ade
metrics:
- name: Precision
type: precision
value: 0.0
- name: Recall
type: recall
value: 0.0
- name: F1
type: f1
value: 0.0
- name: Accuracy
type: accuracy
value: 0.8697318007662835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electramed-small-ADE-DRUG-DOSAGE-ner
This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the ade_drug_dosage_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6064
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.4165 | 1.0 | 14 | 1.3965 | 0.0255 | 0.0636 | 0.0365 | 0.7471 |
| 1.2063 | 2.0 | 28 | 1.1702 | 0.0 | 0.0 | 0.0 | 0.8697 |
| 0.9527 | 3.0 | 42 | 0.9342 | 0.0 | 0.0 | 0.0 | 0.8697 |
| 0.8238 | 4.0 | 56 | 0.7775 | 0.0 | 0.0 | 0.0 | 0.8697 |
| 0.7452 | 5.0 | 70 | 0.6945 | 0.0 | 0.0 | 0.0 | 0.8697 |
| 0.6386 | 6.0 | 84 | 0.6519 | 0.0 | 0.0 | 0.0 | 0.8697 |
| 0.6742 | 7.0 | 98 | 0.6294 | 0.0 | 0.0 | 0.0 | 0.8697 |
| 0.6669 | 8.0 | 112 | 0.6162 | 0.0 | 0.0 | 0.0 | 0.8697 |
| 0.6595 | 9.0 | 126 | 0.6090 | 0.0 | 0.0 | 0.0 | 0.8697 |
| 0.6122 | 10.0 | 140 | 0.6064 | 0.0 | 0.0 | 0.0 | 0.8697 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DCU-NLP/bert-base-irish-cased-v1 | [
"pytorch",
"tf",
"bert",
"fill-mask",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,244 | null | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- nielsr/XFUN
model-index:
- name: layoutxlm-finetuned-xfund-fr
results: []
inference: false
---
# layoutxlm-finetuned-xfund-fr
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the [XFUND](https://github.com/doc-analysis/XFUND) dataset (French split).
## Model usage
Note that this model requires Tesseract with the French language pack in order to perform inference. You can install it using `!sudo apt-get install tesseract-ocr-fra`.
Here's how to use this model:
```python
from transformers import AutoProcessor, AutoModelForTokenClassification
import torch
from PIL import Image
processor = AutoProcessor.from_pretrained("nielsr/layoutxlm-finetuned-xfund-fr")
model = AutoModelForTokenClassification.from_pretrained("nielsr/layoutxlm-finetuned-xfund-fr")
# assuming you have a French document, turned into an image
image = Image.open("...").convert("RGB")
# prepare for the model
encoding = processor(image, padding="max_length", max_length=512, truncation=True, return_tensors="pt")
with torch.no_grad():
outputs = model(**encoding)
logits = outputs.logits
predictions = logits.argmax(-1)
```
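A hedged follow-up (not part of the original card): the predicted class ids can be mapped back to label names, assuming the fine-tuned config carries an `id2label` mapping:
```python
# map each token's predicted id to its label name
predicted_labels = [model.config.id2label[idx.item()] for idx in predictions[0]]
print(predicted_labels[:20])
```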
## Intended uses & limitations
This model can be used for NER on French scanned documents. It can recognize 4 categories: "question", "answer", "header" and "other".
## Training and evaluation data
This checkpoint used the French portion of the multilingual [XFUND](https://github.com/doc-analysis/XFUND) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.22.1
- Pytorch 1.10.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DCU-NLP/electra-base-irish-cased-discriminator-v1 | [
"pytorch",
"electra",
"pretraining",
"ga",
"transformers",
"irish",
"license:apache-2.0"
] | null | {
"architectures": [
"ElectraForPreTraining"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- nielsr/XFUN
metrics:
- precision
- recall
- f1
model-index:
- name: layoutxlm-finetuned-xfund-fr-re
results: []
inference: false
---
# layoutxlm-finetuned-xfund-fr-re
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the [XFUND](https://github.com/doc-analysis/XFUND) dataset (French split).
It achieves the following results on the evaluation set:
- Precision: 0.4533
- Recall: 0.7475
- F1: 0.5644
- Loss: 0.1609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
This checkpoint used the French portion of the multilingual [XFUND](https://github.com/doc-analysis/XFUND) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 5000
### Training results
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DJSammy/bert-base-danish-uncased_BotXO-ai | [
"pytorch",
"jax",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"transformers",
"bert",
"masked-lm",
"license:cc-by-4.0",
"fill-mask"
] | fill-mask | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5215162259225145
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8243
- Matthews Correlation: 0.5215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5249 | 1.0 | 535 | 0.5275 | 0.4268 |
| 0.3462 | 2.0 | 1070 | 0.4858 | 0.5032 |
| 0.2344 | 3.0 | 1605 | 0.5823 | 0.5268 |
| 0.1803 | 4.0 | 2140 | 0.7752 | 0.5189 |
| 0.1346 | 5.0 | 2675 | 0.8243 | 0.5215 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DSI/TweetBasedSA | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: bert-uncased-massive-intent-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8853910477127398
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-massive-intent-classification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8396
- Accuracy: 0.8854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4984 | 1.0 | 720 | 0.6402 | 0.8495 |
| 0.4376 | 2.0 | 1440 | 0.5394 | 0.8731 |
| 0.2318 | 3.0 | 2160 | 0.5903 | 0.8760 |
| 0.1414 | 4.0 | 2880 | 0.6221 | 0.8805 |
| 0.087 | 5.0 | 3600 | 0.7072 | 0.8819 |
| 0.0622 | 6.0 | 4320 | 0.7121 | 0.8819 |
| 0.036 | 7.0 | 5040 | 0.7750 | 0.8805 |
| 0.0234 | 8.0 | 5760 | 0.7767 | 0.8834 |
| 0.0157 | 9.0 | 6480 | 0.8243 | 0.8805 |
| 0.0122 | 10.0 | 7200 | 0.8198 | 0.8839 |
| 0.0092 | 11.0 | 7920 | 0.8105 | 0.8849 |
| 0.0047 | 12.0 | 8640 | 0.8561 | 0.8844 |
| 0.0038 | 13.0 | 9360 | 0.8367 | 0.8815 |
| 0.0029 | 14.0 | 10080 | 0.8396 | 0.8854 |
| 0.0014 | 15.0 | 10800 | 0.8410 | 0.8849 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DSI/human-directed-sentiment | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
tags:
- conversational
---
# Model for a Stan Pines Discord chatbot
# There are unknown errors and I am researching why. |
DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support | [
"pytorch",
"jax",
"bert",
"text-classification",
"multilingual",
"nl",
"fr",
"en",
"arxiv:2104.09947",
"transformers",
"Tweets",
"Sentiment analysis"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | 2022-09-19T11:52:10Z | ---
tags:
- generated_from_trainer
language:
- sv
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERT_swedish-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
config: sv
split: train
args: sv
metrics:
- name: Precision
type: precision
value: 0.9340386115444618
- name: Recall
type: recall
value: 0.9418907624993855
- name: F1
type: f1
value: 0.9379482534942355
- name: Accuracy
type: accuracy
value: 0.979997105690534
widget:
- "Jag heter Peter Petersson och jag jobbar på Skatteverket. Jag bor i Uppsala."
inference:
parameters:
aggregation_strategy: "first"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_swedish-ner
This model is a fine-tuned version of [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1316
- Precision: 0.9340
- Recall: 0.9419
- F1: 0.9379
- Accuracy: 0.9800
## Model description
Fine-tuned from [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) for the Swedish NER task. The model can classify three categories (see the usage sketch after the list):
- PER (person names)
- LOC (Location)
- ORG (Organization)
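A minimal inference sketch (not part of the original card); the model id is a placeholder for wherever this checkpoint is hosted, and the example sentence and aggregation strategy are taken from the widget/inference settings above:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="<this-checkpoint-repo-id>",  # hypothetical placeholder
    aggregation_strategy="first",
)
print(ner("Jag heter Peter Petersson och jag jobbar på Skatteverket. Jag bor i Uppsala."))
```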
## Intended uses & limitations
NER, token classification
## Training and evaluation data
wikiann-SV dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DTAI-KULeuven/robbertje-1-gb-shuffled | [
"pytorch",
"roberta",
"fill-mask",
"nl",
"dataset:oscar",
"dataset:oscar (NL)",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"license:mit",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- text-classification
- emotion
- pytorch
datasets:
- emotion
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-base-cased-emotion
results:
- task:
type: text-classification
name: text-classification
dataset:
name: emotion
type: emotion
config: default
split: validation
metrics:
- type: accuracy
value: 0.9235
name: accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTI0NGY1MTgwMzViYzY2NzdmOTk2NGM0YjRjMDExYjRkNDY3YmQzZmY4Y2JlMGQ3ZThlNjNjMjQ1OTI4MjdlOSIsInZlcnNpb24iOjF9.jJ8RC2HTtWrq3_b592HIFBlKuGavgSMrp--gwLFLZ66Uuz44gnouotoYpsqMtgUt-S3lkcCUs2ZFOPo6xR2EAA
- type: accuracy
value: 0.938
name: Accuracy
verified: true
- type: precision
value: 0.9281100797474869
name: Precision Macro
verified: true
- type: precision
value: 0.938
name: Precision Micro
verified: true
- type: precision
value: 0.9376891512759605
name: Precision Weighted
verified: true
- type: recall
value: 0.9029821552608664
name: Recall Macro
verified: true
- type: recall
value: 0.938
name: Recall Micro
verified: true
- type: recall
value: 0.938
name: Recall Weighted
verified: true
- type: f1
value: 0.9147207975135915
name: F1 Macro
verified: true
- type: f1
value: 0.938
name: F1 Micro
verified: true
- type: f1
value: 0.9373403463117288
name: F1 Weighted
verified: true
- type: loss
value: 0.23682540655136108
name: loss
verified: true
- type: accuracy
value: 0.938
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTUwM2MxN2Q0MGY2MjM5N2IyNjM1Y2Q5ZTM3YzRjYzU1NWQ3MGM1M2E2OWY5YTYyOGVjNTRmODc2ZDkxYzRlMyIsInZlcnNpb24iOjF9.vUgzbI8tmXYBL4OmW3y9GJx_J_j_o6hs1fmxgc-SgaF7B6Qr0qxVwTahvxKnJjad8KnA-aEfQWRYhwVpAr44Dw
- type: precision
value: 0.9281100797474869
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjQ1Mzg5NmI2MjU0MjA1N2EzNmQ1MGVhYjgyYTg5YmVjMGY5ZDhkMmJmNmVhZDI0ZDNmZDJhZjg0NDUzNzEzMiIsInZlcnNpb24iOjF9.kK_Lg1Vy1dXCSzEik-0vfP15icUf1RNYNs_dcXc46PAXU1TWF1t4CKUXmu6FFyRyAWKMp90dH_Ss7VIMj1-OAw
- type: precision
value: 0.938
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTdiNWE1ZmFmYmE3NDc0YmUyYjRiNGZmMDk5NTg5YWUyOTQyYmNkOWM3M2Y4ZjI4ZWU3Mzk2NTU1YjQxM2QzMiIsInZlcnNpb24iOjF9.9UVnzkkjthoZML84FS8cWbr88JPgG2RQtC7k_I4a3rCC0T6LMsJSoMWCOIz6a6tnpO26q-_x-wM70GzzSanmCg
- type: precision
value: 0.9376891512759605
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTU2ZDVjYjk5YTMwYWY2YmZjNDdkMTFkMWI5NjI3YjQ3Njk2NWQ0OGM2ZjYyZWRhZjE2YzM5YjA3NjU1YmFlNiIsInZlcnNpb24iOjF9.fixCk6zcIOYsk3LgfCHeSQ5YNoJ-ZWxvvW76g1-s2yjrI5Qmf_VQcNlyDr_pmHtZ7mc3yykCPaD6k2oMKZ5QBw
- type: recall
value: 0.9029821552608664
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjA0NTc4MGExMmExNWZjNjc1MDA3MDI4MmMwMTNjYWQ0YmVkZjExNzdkZTJkNmMwZWIyNjA4MTNlMDRmODkwNiIsInZlcnNpb24iOjF9.rAq4wFgCo6Q3mxElGdnbXlzVyUvSYOFW-m1KHzDFTydt-4Re-SUVjeh2y8HFf5H0Te5Jj5SiXfszJUmwFXh1Ag
- type: recall
value: 0.938
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDk4NTE4YjZmZDdlNmI1NDkxYmU4ZjAzODhiNGFhYTAxNGEyNjY2MGE3MWQwNGFkMDhkZjNkYjkzNWQxZDQ1YiIsInZlcnNpb24iOjF9.l2JP6XY0XWRFB6G5A82CQy9-Isn48vAGickkwvhMhcW7cZsPjDTLHg2wyBjD8etcYzU99RdfR9nusSnWxBvxAQ
- type: recall
value: 0.938
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTEwNGQ0NDc0NWE2OGUzMjkwMWM2MGRhODViNmIwMDU1NDk3OTdlOWI1N2VlN2JmY2MzYWI4YjJhNDFhYWVhNCIsInZlcnNpb24iOjF9.IV35Ctc1lhxXGP5J3PQxWi6IpWH1ZyZ95yrrca6-EzlF0w3AYL1Bk8q_glolpmrBqUBrJH8AJ07MFN3g77UzCA
- type: f1
value: 0.9147207975135915
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDFlNDk2OWUxOWQ5MTY1MTg3MDU2MzA5OGVmYTk2ODgyNDRiNWU5ZDZmYWYzYmMyMTcyZGExMWE1MGQ2Mzk2MyIsInZlcnNpb24iOjF9.GdxhM7wmt4DQucMB21z0ZZx-iOSlf7wYIs01U6dERhXHX4gsDaFIuIFCVNPFZpwerUYUz6Xtsl6jK1v9kkjqBA
- type: f1
value: 0.938
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTFlM2JlYWFhZjQzNGVjNjhhMTQ3MWZlNGFlMDI5YjEyYWQzMmY3MzQ4MmEzYzg0ZjIyZTZlODNmMTRlZWI5OSIsInZlcnNpb24iOjF9.EKJExY6JPzTwnnPzBPTCoDTowYYly6jKw4Qypsj3GKEpzwmqG-hoo5yhyZaoWsRL2hb1W6eHbsZtPdy1HqGLDQ
- type: f1
value: 0.9373403463117288
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzU1MjI3NTc4N2MxYTI5MjI1OTc3YjFhYjdkZDZmY2MxOTkyYzMzMWIxY2JmNGRkNDg2MTk0YTRiOWJhMTBhNCIsInZlcnNpb24iOjF9.bqFLqztywdiWTm-r7oeQ_R3_VKOTBkyxbL_sZktEE3hsJHYhwLOEVeKD7sqt56gQu_JNYMCz6WGWKRECFKH3Cw
- type: loss
value: 0.23682540655136108
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWZlZjkwNmJjYjAzYzk4ZjU4NzNiYzg1NTU1MGQ3OTFjYmE4MjE5NzNkYzZiZDI0ZmNkM2Y2NGU4ZjJlOThlOSIsInZlcnNpb24iOjF9.mYcz4sdJOmXXMBGW01OSJiKCouKh28AuExj1E7JiPCz-9ri3oRXPkT2fBHgrf0I2ifVEGVSFYckoC8Ymg-cMAw
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- type: accuracy
value: 0.9235
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDdiOWE5ZjI4NmVjN2Y0OWFmNGI2NWU4MTYwMTA5MTFmMjEzOTkyODhhOTEzMzg0OTJmOTM2NGM3YzMxMDAyNCIsInZlcnNpb24iOjF9.SzWJe5bMitRlUb4d0gIX49k_mTC1ADWKDOLdS6TMx3ZGmkMdD6F9-IlmmTunbfVEDQSjweVpv7H7TUcHsBeIBQ
- type: precision
value: 0.89608475565062
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjZlMjk2ZTZkNzJiZTgwNDczNzRjMDkzYTcwNmM2YjdjODE1NzI2NTFmNjRkYTBjMDg2NzMzNmE0MGY1NWRiYyIsInZlcnNpb24iOjF9.qlIfR61-QTw9M71mMGOjN2UPs3Pjz-DVtq094BFtVkFQVllN5vPZLkkqVTkQlX1WdnY70JHpYv-qM_Kpv1h3Cw
- type: precision
value: 0.9235
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjUwMDIwMGEwMmE2NmEzMjQxMjM5MWI5MDRlYjhmODE1NjliMjkwMTU5YTg3ZWJiMjI2MGQxMWU4MmJiYTljNiIsInZlcnNpb24iOjF9.rFJQ1Q6aPlncwReXR2jciz3UEIArJS_nTwHAuTNXZKKQSvTM1zBN7x7EdSDeEIf8JDFfEimvv282wgnCPVmkAw
- type: precision
value: 0.9224273416855945
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDNmYWE4MTQxY2FkNTcxMzMyMmQyNmViN2FhMGQxOWYzNGU0MmIyMjhiZmE5ZmYyOWQwMzExZWRiMjRkNzliMyIsInZlcnNpb24iOjF9.6kD-hmPbjJrpAHZY-tFULnufxw118PfMCwj8PBjDKTtSQ_MCTP7iU3KtNe5essy5vDpdBFuA9E9SbrtjW3vnBA
- type: recall
value: 0.8581097243584549
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjZjMGJlZmI2NGVmYWIxZjdlYzg1ZjAyNmYwOWQ0NGE2OWI5OGIzMzE5Zjg5Yzg2NTlhZjRkMzE0ZTZjOGUyMCIsInZlcnNpb24iOjF9._r5lQK7hSPCyM9Bz9H5AVactjXzpF3hKW-iB5dco_kmVMhdy_-N00aFQg8XMKa4sL9lmJs0svINUwL5q55vmCw
- type: recall
value: 0.9235
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTQyMmZiNDQ1ZDMyZGY5ZWVlOGM0OWMzODhiNWI1NGUzNDJmOWEyM2VjYjhkNGQ4Y2ZmYzMwYjc1NTU4YjY0ZCIsInZlcnNpb24iOjF9.N7SJCOKlRa_3sRFYqiuPZpXmikQ9xuA4uDsKldGVNG6Tg6dffT2fLE9kkTCOQmG3BQzoP-xxjbhjG1gDPe_KCw
- type: recall
value: 0.9235
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGZkZmU4OTRhNDc3MzdiMDQwYzg4NmI5MTNlOWNjODk1ZGNmODZmZjk2MmY2NmU1Y2QyNGExNTkyNDUyZWJhYyIsInZlcnNpb24iOjF9.SvIJpc0dCvtMeaRZ0yQinV3FKftMiHLbhvalrVDT0uLnQ-wGL3HOkj9i9lO53kLrunxbswPjhKmR4bY1P6LUCQ
- type: f1
value: 0.8746813002250796
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDcxMTE4YzIyOWM5YmU0YTI4NzliMjM5YzBkYTJjODY3NjdkOWMwNmJhOTJiY2Y3MDM4NDNkYjljOGMxN2I2YyIsInZlcnNpb24iOjF9.41XMqnkXPjeoKiUmOuMhitX-624vXLmz6QMeNsOx_yFDhkugYP5ox5xq9PhC3cQIxn0SD71AusaN7YOnfKOaDA
- type: f1
value: 0.9235
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjE0MDU4YjY4MzlkMjQyZmI3ODUwODUyYzIyMDFhNjIyZjcwZjNkZTI0NmFiYTA4YWJlMzdmNDMxZGY0MTVhZCIsInZlcnNpb24iOjF9.a1Nh5T-g74Ol-ElZhu2O7yg4tJrBRGCJYeVYUsZd7AkXPR_e0jyOwVx16J2t7gOVTp512HEK5u2wIpns8qWrAA
- type: f1
value: 0.9217456925724525
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2YyODE0NjIzOGI4OTExYTlhMTUxNzBkZWE5ZGIzZTNkOThjZTRiYmM5NDEzYjZlYTY2MzY1ZmVkY2IyZDA4MyIsInZlcnNpb24iOjF9.XEhuUaY3E5FwXunSiRNMW_Rnu9B176mu7wr8jXvISdPovno1ANhliK276GeLgS_suAjjpPQPTwMK-ntJ9PHuCA
- type: loss
value: 0.32714536786079407
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDdhOTRiNjI5Y2FmYmQ0MGNmNjM5NWQzN2Y0ZWJhNmYyZDgwYzRhOWE0YmNiMjIyNjYyMWE2NTczMTA3NDM1YiIsInZlcnNpb24iOjF9.tmWAsH0fU7QUztbh3PfbBL1aSigZge0iQgYQIHsULZMCD9M98l8DXeLpZl-dehWruVY9Lkr4PJKTn9ba35vKCg
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-emotion
**Training:** The model has been trained using the script provided in the following repository: https://github.com/MorenoLaQuatra/transformers-tasks-templates
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the [emotion](https://huggingface.co/datasets/emotion) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3272
- Accuracy: 0.9235
- F1: 0.9217
- Precision: 0.9224
- Recall: 0.9235
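A hedged usage sketch (not part of the original card); the model id is a placeholder for wherever this checkpoint is hosted, and the printed label only illustrates the emotion dataset's label set:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="<this-checkpoint-repo-id>")  # hypothetical placeholder
print(classifier("I can't believe how well this turned out, I'm thrilled!"))
# e.g. [{'label': 'joy', 'score': ...}]
```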
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2776 | 1.0 | 500 | 0.2954 | 0.9 | 0.8957 | 0.9031 | 0.9 |
| 0.1887 | 2.0 | 1000 | 0.1716 | 0.934 | 0.9344 | 0.9370 | 0.934 |
| 0.119 | 3.0 | 1500 | 0.1614 | 0.9345 | 0.9342 | 0.9377 | 0.9345 |
| 0.1001 | 4.0 | 2000 | 0.2018 | 0.936 | 0.9353 | 0.9359 | 0.936 |
| 0.0704 | 5.0 | 2500 | 0.1925 | 0.935 | 0.9349 | 0.9354 | 0.935 |
| 0.0471 | 6.0 | 3000 | 0.2369 | 0.938 | 0.9373 | 0.9377 | 0.938 |
| 0.0322 | 7.0 | 3500 | 0.2693 | 0.938 | 0.9382 | 0.9392 | 0.938 |
| 0.0137 | 8.0 | 4000 | 0.2926 | 0.937 | 0.9371 | 0.9372 | 0.937 |
| 0.0099 | 9.0 | 4500 | 0.2964 | 0.9365 | 0.9362 | 0.9362 | 0.9365 |
| 0.0114 | 10.0 | 5000 | 0.3044 | 0.935 | 0.9349 | 0.9350 | 0.935 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
alexandrainst/da-hatespeech-classification-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 866 | null | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: roberta-base-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.9092158650855444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-stsb
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4221
- Pearson: 0.9116
- Spearmanr: 0.9092
- Combined Score: 0.9104
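A hedged sketch (not part of the original card) of scoring a sentence pair with this regression head; STS-B targets range from 0 to 5, and the model id is a placeholder for wherever this checkpoint is hosted:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "<this-checkpoint-repo-id>"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression output
print(f"similarity score: {score:.2f}")
```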
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 1.6552 | 1.39 | 500 | 0.5265 | 0.8925 | 0.8925 | 0.8925 |
| 0.3579 | 2.78 | 1000 | 0.4626 | 0.9022 | 0.8991 | 0.9007 |
| 0.2198 | 4.17 | 1500 | 0.4396 | 0.9054 | 0.9042 | 0.9048 |
| 0.1585 | 5.56 | 2000 | 0.4537 | 0.9069 | 0.9052 | 0.9060 |
| 0.1139 | 6.94 | 2500 | 0.4975 | 0.9091 | 0.9065 | 0.9078 |
| 0.0868 | 8.33 | 3000 | 0.4221 | 0.9116 | 0.9092 | 0.9104 |
| 0.073 | 9.72 | 3500 | 0.4311 | 0.9096 | 0.9077 | 0.9086 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.7.1
- Datasets 1.18.3
- Tokenizers 0.11.6
|
alexandrainst/da-hatespeech-detection-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,719 | 2022-09-20T03:14:21Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: roberta-base-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8657445077298617
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-mnli
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3617
- Accuracy: 0.8657
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.0993 | 0.02 | 500 | 1.0983 | 0.3321 |
| 1.099 | 0.04 | 1000 | 1.0932 | 0.4276 |
| 1.011 | 0.06 | 1500 | 0.8352 | 0.6732 |
| 0.7551 | 0.08 | 2000 | 0.6018 | 0.7615 |
| 0.6343 | 0.1 | 2500 | 0.5726 | 0.7813 |
| 0.5884 | 0.12 | 3000 | 0.5349 | 0.7926 |
| 0.5548 | 0.14 | 3500 | 0.4925 | 0.8078 |
| 0.5244 | 0.16 | 4000 | 0.4806 | 0.8161 |
| 0.5198 | 0.18 | 4500 | 0.4614 | 0.8257 |
| 0.5168 | 0.2 | 5000 | 0.4713 | 0.8177 |
| 0.5194 | 0.22 | 5500 | 0.4344 | 0.8323 |
| 0.485 | 0.24 | 6000 | 0.4527 | 0.8316 |
| 0.4909 | 0.26 | 6500 | 0.4377 | 0.8376 |
| 0.49 | 0.29 | 7000 | 0.4649 | 0.8266 |
| 0.4897 | 0.31 | 7500 | 0.4162 | 0.8413 |
| 0.4672 | 0.33 | 8000 | 0.4163 | 0.8425 |
| 0.4699 | 0.35 | 8500 | 0.4060 | 0.8451 |
| 0.4729 | 0.37 | 9000 | 0.4412 | 0.8387 |
| 0.4733 | 0.39 | 9500 | 0.4353 | 0.8401 |
| 0.4699 | 0.41 | 10000 | 0.4060 | 0.8476 |
| 0.4759 | 0.43 | 10500 | 0.4226 | 0.8358 |
| 0.461 | 0.45 | 11000 | 0.4220 | 0.8423 |
| 0.4608 | 0.47 | 11500 | 0.4404 | 0.8319 |
| 0.462 | 0.49 | 12000 | 0.4280 | 0.8455 |
| 0.4533 | 0.51 | 12500 | 0.4128 | 0.8468 |
| 0.4691 | 0.53 | 13000 | 0.4155 | 0.8437 |
| 0.4552 | 0.55 | 13500 | 0.4385 | 0.8348 |
| 0.4573 | 0.57 | 14000 | 0.4498 | 0.8424 |
| 0.4562 | 0.59 | 14500 | 0.4162 | 0.8442 |
| 0.4665 | 0.61 | 15000 | 0.4417 | 0.8432 |
| 0.4569 | 0.63 | 15500 | 0.4113 | 0.8492 |
| 0.4705 | 0.65 | 16000 | 0.4454 | 0.8399 |
| 0.4685 | 0.67 | 16500 | 0.4055 | 0.8451 |
| 0.4475 | 0.69 | 17000 | 0.4426 | 0.8383 |
| 0.4641 | 0.71 | 17500 | 0.4256 | 0.8471 |
| 0.4299 | 0.73 | 18000 | 0.4260 | 0.8478 |
| 0.4439 | 0.75 | 18500 | 0.4218 | 0.8454 |
| 0.4628 | 0.77 | 19000 | 0.4087 | 0.8479 |
| 0.4502 | 0.79 | 19500 | 0.4238 | 0.8450 |
| 0.4299 | 0.81 | 20000 | 0.4091 | 0.8485 |
| 0.4496 | 0.84 | 20500 | 0.4160 | 0.8439 |
| 0.4492 | 0.86 | 21000 | 0.4109 | 0.8469 |
| 0.432 | 0.88 | 21500 | 0.4499 | 0.8493 |
| 0.4343 | 0.9 | 22000 | 0.4136 | 0.8465 |
| 0.4445 | 0.92 | 22500 | 0.4095 | 0.8433 |
| 0.4378 | 0.94 | 23000 | 0.3999 | 0.8483 |
| 0.4367 | 0.96 | 23500 | 0.3962 | 0.8509 |
| 0.4428 | 0.98 | 24000 | 0.3958 | 0.8504 |
| 0.4356 | 1.0 | 24500 | 0.3998 | 0.8558 |
| 0.3715 | 1.02 | 25000 | 0.4016 | 0.8589 |
| 0.3649 | 1.04 | 25500 | 0.4368 | 0.8582 |
| 0.3565 | 1.06 | 26000 | 0.4084 | 0.8519 |
| 0.3626 | 1.08 | 26500 | 0.4302 | 0.8438 |
| 0.3535 | 1.1 | 27000 | 0.4206 | 0.8557 |
| 0.3684 | 1.12 | 27500 | 0.4117 | 0.8561 |
| 0.3649 | 1.14 | 28000 | 0.4300 | 0.8527 |
| 0.3791 | 1.16 | 28500 | 0.3916 | 0.8585 |
| 0.366 | 1.18 | 29000 | 0.4101 | 0.8592 |
| 0.3777 | 1.2 | 29500 | 0.3946 | 0.8561 |
| 0.3672 | 1.22 | 30000 | 0.4417 | 0.8530 |
| 0.3688 | 1.24 | 30500 | 0.4066 | 0.8523 |
| 0.3525 | 1.26 | 31000 | 0.4299 | 0.8581 |
| 0.3688 | 1.28 | 31500 | 0.3870 | 0.8553 |
| 0.3699 | 1.3 | 32000 | 0.3781 | 0.8627 |
| 0.3547 | 1.32 | 32500 | 0.4311 | 0.8526 |
| 0.3653 | 1.34 | 33000 | 0.4034 | 0.8603 |
| 0.3738 | 1.36 | 33500 | 0.4103 | 0.8554 |
| 0.3824 | 1.39 | 34000 | 0.3719 | 0.8618 |
| 0.3591 | 1.41 | 34500 | 0.4244 | 0.8615 |
| 0.3697 | 1.43 | 35000 | 0.4689 | 0.8451 |
| 0.3598 | 1.45 | 35500 | 0.4149 | 0.8532 |
| 0.3586 | 1.47 | 36000 | 0.4070 | 0.8591 |
| 0.3519 | 1.49 | 36500 | 0.4133 | 0.8545 |
| 0.3681 | 1.51 | 37000 | 0.3889 | 0.8601 |
| 0.3611 | 1.53 | 37500 | 0.3934 | 0.8591 |
| 0.3696 | 1.55 | 38000 | 0.4313 | 0.8552 |
| 0.3798 | 1.57 | 38500 | 0.3784 | 0.8602 |
| 0.3601 | 1.59 | 39000 | 0.3994 | 0.8600 |
| 0.3696 | 1.61 | 39500 | 0.4206 | 0.8577 |
| 0.368 | 1.63 | 40000 | 0.3903 | 0.8627 |
| 0.3473 | 1.65 | 40500 | 0.3813 | 0.8655 |
| 0.3604 | 1.67 | 41000 | 0.3930 | 0.8551 |
| 0.3741 | 1.69 | 41500 | 0.3644 | 0.8618 |
| 0.3551 | 1.71 | 42000 | 0.3936 | 0.8583 |
| 0.378 | 1.73 | 42500 | 0.3826 | 0.8607 |
| 0.3609 | 1.75 | 43000 | 0.3815 | 0.8618 |
| 0.3678 | 1.77 | 43500 | 0.3961 | 0.8578 |
| 0.3633 | 1.79 | 44000 | 0.4011 | 0.8603 |
| 0.3792 | 1.81 | 44500 | 0.4061 | 0.8592 |
| 0.3675 | 1.83 | 45000 | 0.4155 | 0.8631 |
| 0.3576 | 1.85 | 45500 | 0.4061 | 0.8589 |
| 0.3546 | 1.87 | 46000 | 0.3862 | 0.8623 |
| 0.3564 | 1.89 | 46500 | 0.3937 | 0.8607 |
| 0.3602 | 1.91 | 47000 | 0.3851 | 0.8646 |
| 0.3494 | 1.94 | 47500 | 0.4015 | 0.8541 |
| 0.3499 | 1.96 | 48000 | 0.4266 | 0.8545 |
| 0.3672 | 1.98 | 48500 | 0.3761 | 0.8588 |
| 0.3661 | 2.0 | 49000 | 0.4121 | 0.8567 |
| 0.2759 | 2.02 | 49500 | 0.4653 | 0.8645 |
| 0.2927 | 2.04 | 50000 | 0.4652 | 0.8597 |
| 0.2736 | 2.06 | 50500 | 0.4547 | 0.8597 |
| 0.2749 | 2.08 | 51000 | 0.4896 | 0.8565 |
| 0.2757 | 2.1 | 51500 | 0.4814 | 0.8639 |
| 0.2833 | 2.12 | 52000 | 0.4110 | 0.8656 |
| 0.2797 | 2.14 | 52500 | 0.4316 | 0.8636 |
| 0.2643 | 2.16 | 53000 | 0.4317 | 0.8599 |
| 0.2791 | 2.18 | 53500 | 0.4557 | 0.8617 |
| 0.2737 | 2.2 | 54000 | 0.4102 | 0.8624 |
| 0.2748 | 2.22 | 54500 | 0.4187 | 0.8585 |
| 0.2619 | 2.24 | 55000 | 0.4412 | 0.8590 |
| 0.2718 | 2.26 | 55500 | 0.4707 | 0.8618 |
| 0.2662 | 2.28 | 56000 | 0.4754 | 0.8594 |
| 0.282 | 2.3 | 56500 | 0.4376 | 0.8617 |
| 0.284 | 2.32 | 57000 | 0.4393 | 0.8599 |
| 0.2733 | 2.34 | 57500 | 0.4531 | 0.8581 |
| 0.2878 | 2.36 | 58000 | 0.4727 | 0.8549 |
| 0.2812 | 2.38 | 58500 | 0.4221 | 0.8625 |
| 0.2657 | 2.4 | 59000 | 0.4456 | 0.8583 |
| 0.2716 | 2.42 | 59500 | 0.4455 | 0.8668 |
| 0.2766 | 2.44 | 60000 | 0.4940 | 0.8580 |
| 0.2871 | 2.46 | 60500 | 0.4460 | 0.8501 |
| 0.2731 | 2.49 | 61000 | 0.4600 | 0.8631 |
| 0.2885 | 2.51 | 61500 | 0.4229 | 0.8645 |
| 0.2764 | 2.53 | 62000 | 0.4107 | 0.8638 |
| 0.2866 | 2.55 | 62500 | 0.4250 | 0.8638 |
| 0.2754 | 2.57 | 63000 | 0.4846 | 0.8580 |
| 0.3028 | 2.59 | 63500 | 0.4339 | 0.8627 |
| 0.2828 | 2.61 | 64000 | 0.4697 | 0.8613 |
| 0.2875 | 2.63 | 64500 | 0.4167 | 0.8638 |
| 0.2836 | 2.65 | 65000 | 0.5050 | 0.8600 |
| 0.2978 | 2.67 | 65500 | 0.4139 | 0.8628 |
| 0.2946 | 2.69 | 66000 | 0.4449 | 0.8644 |
| 0.2822 | 2.71 | 66500 | 0.4302 | 0.8612 |
| 0.3006 | 2.73 | 67000 | 0.4256 | 0.8631 |
| 0.2896 | 2.75 | 67500 | 0.4993 | 0.8603 |
| 0.2787 | 2.77 | 68000 | 0.4467 | 0.8636 |
| 0.3 | 2.79 | 68500 | 0.4196 | 0.8592 |
| 0.2939 | 2.81 | 69000 | 0.4234 | 0.8614 |
| 0.2841 | 2.83 | 69500 | 0.4173 | 0.8660 |
| 0.2935 | 2.85 | 70000 | 0.4054 | 0.8658 |
| 0.2977 | 2.87 | 70500 | 0.4400 | 0.8623 |
| 0.2853 | 2.89 | 71000 | 0.4322 | 0.8668 |
| 0.2779 | 2.91 | 71500 | 0.4460 | 0.8595 |
| 0.2923 | 2.93 | 72000 | 0.4279 | 0.8619 |
| 0.2915 | 2.95 | 72500 | 0.4324 | 0.8625 |
| 0.2927 | 2.97 | 73000 | 0.4108 | 0.8672 |
| 0.29 | 2.99 | 73500 | 0.4299 | 0.8579 |
| 0.2255 | 3.01 | 74000 | 0.5337 | 0.8637 |
| 0.2113 | 3.04 | 74500 | 0.5046 | 0.8624 |
| 0.207 | 3.06 | 75000 | 0.6011 | 0.8551 |
| 0.2226 | 3.08 | 75500 | 0.5426 | 0.8579 |
| 0.2129 | 3.1 | 76000 | 0.5036 | 0.8640 |
| 0.2201 | 3.12 | 76500 | 0.5629 | 0.8604 |
| 0.2185 | 3.14 | 77000 | 0.5416 | 0.8607 |
| 0.21 | 3.16 | 77500 | 0.5457 | 0.8605 |
| 0.2372 | 3.18 | 78000 | 0.5337 | 0.8594 |
| 0.2237 | 3.2 | 78500 | 0.5060 | 0.8679 |
| 0.2277 | 3.22 | 79000 | 0.5647 | 0.8651 |
| 0.2301 | 3.24 | 79500 | 0.4906 | 0.8602 |
| 0.2238 | 3.26 | 80000 | 0.5231 | 0.8647 |
| 0.2365 | 3.28 | 80500 | 0.5628 | 0.8621 |
| 0.2189 | 3.3 | 81000 | 0.5496 | 0.8630 |
| 0.2233 | 3.32 | 81500 | 0.5418 | 0.8639 |
| 0.2216 | 3.34 | 82000 | 0.5032 | 0.8689 |
| 0.2314 | 3.36 | 82500 | 0.5437 | 0.8634 |
| 0.2351 | 3.38 | 83000 | 0.4863 | 0.8653 |
| 0.2378 | 3.4 | 83500 | 0.5158 | 0.8635 |
| 0.2357 | 3.42 | 84000 | 0.5142 | 0.8629 |
| 0.2484 | 3.44 | 84500 | 0.4536 | 0.8657 |
| 0.2261 | 3.46 | 85000 | 0.5619 | 0.8649 |
| 0.2323 | 3.48 | 85500 | 0.5371 | 0.8587 |
| 0.2336 | 3.5 | 86000 | 0.5562 | 0.8621 |
| 0.2259 | 3.52 | 86500 | 0.5339 | 0.8589 |
| 0.2371 | 3.54 | 87000 | 0.4711 | 0.8665 |
| 0.227 | 3.57 | 87500 | 0.5350 | 0.8644 |
| 0.2417 | 3.59 | 88000 | 0.4692 | 0.8665 |
| 0.2176 | 3.61 | 88500 | 0.5195 | 0.8655 |
| 0.2393 | 3.63 | 89000 | 0.5468 | 0.8588 |
| 0.2219 | 3.65 | 89500 | 0.5498 | 0.8646 |
| 0.23 | 3.67 | 90000 | 0.5367 | 0.8703 |
| 0.2317 | 3.69 | 90500 | 0.4761 | 0.8639 |
| 0.2241 | 3.71 | 91000 | 0.4992 | 0.8654 |
| 0.2327 | 3.73 | 91500 | 0.5040 | 0.8678 |
| 0.2312 | 3.75 | 92000 | 0.4943 | 0.8639 |
| 0.2369 | 3.77 | 92500 | 0.4824 | 0.8721 |
| 0.2235 | 3.79 | 93000 | 0.5090 | 0.8661 |
| 0.2256 | 3.81 | 93500 | 0.5258 | 0.8644 |
| 0.236 | 3.83 | 94000 | 0.5490 | 0.8542 |
| 0.2313 | 3.85 | 94500 | 0.4672 | 0.8677 |
| 0.228 | 3.87 | 95000 | 0.5037 | 0.8623 |
| 0.2297 | 3.89 | 95500 | 0.5207 | 0.8545 |
| 0.2332 | 3.91 | 96000 | 0.5139 | 0.8698 |
| 0.2331 | 3.93 | 96500 | 0.5182 | 0.8615 |
| 0.2354 | 3.95 | 97000 | 0.5090 | 0.8657 |
| 0.2273 | 3.97 | 97500 | 0.5523 | 0.8637 |
| 0.2433 | 3.99 | 98000 | 0.5148 | 0.8691 |
| 0.191 | 4.01 | 98500 | 0.6007 | 0.8654 |
| 0.1683 | 4.03 | 99000 | 0.6770 | 0.8636 |
| 0.1778 | 4.05 | 99500 | 0.6595 | 0.8635 |
| 0.1832 | 4.07 | 100000 | 0.6129 | 0.8608 |
| 0.1842 | 4.09 | 100500 | 0.6612 | 0.8611 |
| 0.1865 | 4.12 | 101000 | 0.6551 | 0.8658 |
| 0.1833 | 4.14 | 101500 | 0.6294 | 0.8643 |
| 0.1869 | 4.16 | 102000 | 0.6234 | 0.8614 |
| 0.1806 | 4.18 | 102500 | 0.6417 | 0.8655 |
| 0.1911 | 4.2 | 103000 | 0.6426 | 0.8607 |
| 0.1981 | 4.22 | 103500 | 0.6247 | 0.8589 |
| 0.1731 | 4.24 | 104000 | 0.6613 | 0.8626 |
| 0.1977 | 4.26 | 104500 | 0.5441 | 0.8661 |
| 0.1771 | 4.28 | 105000 | 0.6608 | 0.8644 |
| 0.1903 | 4.3 | 105500 | 0.6174 | 0.8603 |
| 0.1797 | 4.32 | 106000 | 0.6609 | 0.8607 |
| 0.188 | 4.34 | 106500 | 0.6059 | 0.8643 |
| 0.1863 | 4.36 | 107000 | 0.5723 | 0.8663 |
| 0.19 | 4.38 | 107500 | 0.5959 | 0.8652 |
| 0.1869 | 4.4 | 108000 | 0.5898 | 0.8698 |
| 0.1909 | 4.42 | 108500 | 0.6052 | 0.8659 |
| 0.1908 | 4.44 | 109000 | 0.5854 | 0.8690 |
| 0.203 | 4.46 | 109500 | 0.5727 | 0.8694 |
| 0.1993 | 4.48 | 110000 | 0.5877 | 0.8653 |
| 0.1796 | 4.5 | 110500 | 0.6231 | 0.8679 |
| 0.1837 | 4.52 | 111000 | 0.5749 | 0.8694 |
| 0.1885 | 4.54 | 111500 | 0.6174 | 0.8618 |
| 0.1902 | 4.56 | 112000 | 0.5625 | 0.8682 |
| 0.2031 | 4.58 | 112500 | 0.6252 | 0.8577 |
| 0.1986 | 4.6 | 113000 | 0.6147 | 0.8548 |
| 0.1769 | 4.62 | 113500 | 0.6351 | 0.8648 |
| 0.1974 | 4.64 | 114000 | 0.6396 | 0.8630 |
| 0.1952 | 4.67 | 114500 | 0.6174 | 0.8661 |
| 0.1904 | 4.69 | 115000 | 0.6188 | 0.8663 |
| 0.191 | 4.71 | 115500 | 0.5860 | 0.8646 |
| 0.1869 | 4.73 | 116000 | 0.5978 | 0.8586 |
| 0.2056 | 4.75 | 116500 | 0.5985 | 0.8648 |
| 0.1837 | 4.77 | 117000 | 0.5742 | 0.8636 |
| 0.2038 | 4.79 | 117500 | 0.5726 | 0.8662 |
| 0.1939 | 4.81 | 118000 | 0.6097 | 0.8623 |
| 0.1869 | 4.83 | 118500 | 0.5820 | 0.8651 |
| 0.1897 | 4.85 | 119000 | 0.5766 | 0.8666 |
| 0.1792 | 4.87 | 119500 | 0.6093 | 0.8683 |
| 0.2056 | 4.89 | 120000 | 0.5890 | 0.8633 |
| 0.1989 | 4.91 | 120500 | 0.5825 | 0.8674 |
| 0.1916 | 4.93 | 121000 | 0.6250 | 0.8641 |
| 0.197 | 4.95 | 121500 | 0.5848 | 0.8645 |
| 0.1923 | 4.97 | 122000 | 0.5666 | 0.8667 |
| 0.1916 | 4.99 | 122500 | 0.6189 | 0.8638 |
| 0.1642 | 5.01 | 123000 | 0.7094 | 0.8610 |
| 0.1357 | 5.03 | 123500 | 0.6972 | 0.8658 |
| 0.1476 | 5.05 | 124000 | 0.6965 | 0.8664 |
| 0.1476 | 5.07 | 124500 | 0.7177 | 0.8638 |
| 0.1486 | 5.09 | 125000 | 0.6945 | 0.8620 |
| 0.1309 | 5.11 | 125500 | 0.7326 | 0.8626 |
| 0.1575 | 5.13 | 126000 | 0.6473 | 0.8632 |
| 0.1411 | 5.15 | 126500 | 0.6955 | 0.8651 |
| 0.1473 | 5.17 | 127000 | 0.6926 | 0.8648 |
| 0.153 | 5.19 | 127500 | 0.7010 | 0.8638 |
| 0.1488 | 5.22 | 128000 | 0.6643 | 0.8689 |
| 0.144 | 5.24 | 128500 | 0.6868 | 0.8668 |
| 0.156 | 5.26 | 129000 | 0.6682 | 0.8645 |
| 0.1537 | 5.28 | 129500 | 0.6740 | 0.8610 |
| 0.1424 | 5.3 | 130000 | 0.7509 | 0.8603 |
| 0.1531 | 5.32 | 130500 | 0.6966 | 0.8670 |
| 0.1457 | 5.34 | 131000 | 0.7227 | 0.8632 |
| 0.1494 | 5.36 | 131500 | 0.6911 | 0.8626 |
| 0.1476 | 5.38 | 132000 | 0.6903 | 0.8630 |
| 0.1531 | 5.4 | 132500 | 0.6839 | 0.8675 |
| 0.1613 | 5.42 | 133000 | 0.6559 | 0.8601 |
| 0.1456 | 5.44 | 133500 | 0.7161 | 0.8619 |
| 0.1539 | 5.46 | 134000 | 0.7108 | 0.8638 |
| 0.1685 | 5.48 | 134500 | 0.6703 | 0.8628 |
| 0.1482 | 5.5 | 135000 | 0.6692 | 0.8651 |
| 0.1587 | 5.52 | 135500 | 0.6936 | 0.8658 |
| 0.152 | 5.54 | 136000 | 0.6844 | 0.8661 |
| 0.1619 | 5.56 | 136500 | 0.6632 | 0.8641 |
| 0.154 | 5.58 | 137000 | 0.6451 | 0.8666 |
| 0.1525 | 5.6 | 137500 | 0.6529 | 0.8686 |
| 0.1545 | 5.62 | 138000 | 0.6860 | 0.8603 |
| 0.1487 | 5.64 | 138500 | 0.6842 | 0.8668 |
| 0.1546 | 5.66 | 139000 | 0.6692 | 0.8655 |
| 0.168 | 5.68 | 139500 | 0.6701 | 0.8649 |
| 0.1513 | 5.7 | 140000 | 0.6613 | 0.8680 |
| 0.1704 | 5.72 | 140500 | 0.6804 | 0.8643 |
| 0.1517 | 5.74 | 141000 | 0.6871 | 0.8684 |
| 0.1572 | 5.77 | 141500 | 0.6676 | 0.8670 |
| 0.1551 | 5.79 | 142000 | 0.6919 | 0.8638 |
| 0.1483 | 5.81 | 142500 | 0.6801 | 0.8667 |
| 0.1562 | 5.83 | 143000 | 0.6791 | 0.8628 |
| 0.1594 | 5.85 | 143500 | 0.6422 | 0.8671 |
| 0.1627 | 5.87 | 144000 | 0.6526 | 0.8679 |
| 0.1514 | 5.89 | 144500 | 0.6734 | 0.8698 |
| 0.1546 | 5.91 | 145000 | 0.6377 | 0.8711 |
| 0.146 | 5.93 | 145500 | 0.7214 | 0.8657 |
| 0.1608 | 5.95 | 146000 | 0.6756 | 0.8674 |
| 0.1648 | 5.97 | 146500 | 0.6387 | 0.8687 |
| 0.1547 | 5.99 | 147000 | 0.6871 | 0.8646 |
| 0.1304 | 6.01 | 147500 | 0.7543 | 0.8633 |
| 0.1059 | 6.03 | 148000 | 0.7576 | 0.8638 |
| 0.1089 | 6.05 | 148500 | 0.7530 | 0.8642 |
| 0.112 | 6.07 | 149000 | 0.7951 | 0.8640 |
| 0.1198 | 6.09 | 149500 | 0.7381 | 0.8636 |
| 0.1222 | 6.11 | 150000 | 0.7560 | 0.8623 |
| 0.1024 | 6.13 | 150500 | 0.7965 | 0.8669 |
| 0.125 | 6.15 | 151000 | 0.7613 | 0.8620 |
| 0.1005 | 6.17 | 151500 | 0.7851 | 0.8651 |
| 0.1196 | 6.19 | 152000 | 0.7637 | 0.8652 |
| 0.1133 | 6.21 | 152500 | 0.7810 | 0.8660 |
| 0.1271 | 6.23 | 153000 | 0.7510 | 0.8672 |
| 0.1167 | 6.25 | 153500 | 0.7670 | 0.8638 |
| 0.1198 | 6.27 | 154000 | 0.7770 | 0.8632 |
| 0.1194 | 6.29 | 154500 | 0.7720 | 0.8607 |
| 0.1215 | 6.32 | 155000 | 0.7880 | 0.8609 |
| 0.1134 | 6.34 | 155500 | 0.8026 | 0.8617 |
| 0.1113 | 6.36 | 156000 | 0.7632 | 0.8652 |
| 0.1207 | 6.38 | 156500 | 0.7369 | 0.8686 |
| 0.1188 | 6.4 | 157000 | 0.7466 | 0.8657 |
| 0.1283 | 6.42 | 157500 | 0.7531 | 0.8645 |
| 0.1186 | 6.44 | 158000 | 0.7529 | 0.8673 |
| 0.135 | 6.46 | 158500 | 0.7706 | 0.8589 |
| 0.1116 | 6.48 | 159000 | 0.7754 | 0.8646 |
| 0.1295 | 6.5 | 159500 | 0.7026 | 0.8693 |
| 0.1309 | 6.52 | 160000 | 0.7342 | 0.8656 |
| 0.1172 | 6.54 | 160500 | 0.7828 | 0.8644 |
| 0.125 | 6.56 | 161000 | 0.7456 | 0.8671 |
| 0.1199 | 6.58 | 161500 | 0.7464 | 0.8701 |
| 0.1197 | 6.6 | 162000 | 0.7626 | 0.8639 |
| 0.1126 | 6.62 | 162500 | 0.8115 | 0.8609 |
| 0.1365 | 6.64 | 163000 | 0.7407 | 0.8681 |
| 0.122 | 6.66 | 163500 | 0.7648 | 0.8641 |
| 0.1157 | 6.68 | 164000 | 0.7636 | 0.8669 |
| 0.118 | 6.7 | 164500 | 0.7688 | 0.8686 |
| 0.1173 | 6.72 | 165000 | 0.8051 | 0.8687 |
| 0.1137 | 6.74 | 165500 | 0.8101 | 0.8635 |
| 0.1412 | 6.76 | 166000 | 0.7004 | 0.8689 |
| 0.1131 | 6.78 | 166500 | 0.7589 | 0.8664 |
| 0.1232 | 6.8 | 167000 | 0.7657 | 0.8654 |
| 0.1343 | 6.82 | 167500 | 0.7547 | 0.8652 |
| 0.1208 | 6.84 | 168000 | 0.7407 | 0.8699 |
| 0.1284 | 6.87 | 168500 | 0.7182 | 0.8677 |
| 0.1182 | 6.89 | 169000 | 0.7248 | 0.8681 |
| 0.1166 | 6.91 | 169500 | 0.7385 | 0.8678 |
| 0.1289 | 6.93 | 170000 | 0.7293 | 0.8672 |
| 0.1243 | 6.95 | 170500 | 0.7178 | 0.8696 |
| 0.1256 | 6.97 | 171000 | 0.7291 | 0.8633 |
| 0.1162 | 6.99 | 171500 | 0.7515 | 0.8648 |
| 0.1013 | 7.01 | 172000 | 0.7824 | 0.8655 |
| 0.0811 | 7.03 | 172500 | 0.8297 | 0.8647 |
| 0.0831 | 7.05 | 173000 | 0.8144 | 0.8678 |
| 0.0872 | 7.07 | 173500 | 0.8176 | 0.8679 |
| 0.0868 | 7.09 | 174000 | 0.8405 | 0.8642 |
| 0.0756 | 7.11 | 174500 | 0.8867 | 0.8642 |
| 0.0882 | 7.13 | 175000 | 0.8185 | 0.8659 |
| 0.0879 | 7.15 | 175500 | 0.8653 | 0.8625 |
| 0.0831 | 7.17 | 176000 | 0.8323 | 0.8655 |
| 0.0847 | 7.19 | 176500 | 0.8358 | 0.8650 |
| 0.0938 | 7.21 | 177000 | 0.7967 | 0.8665 |
| 0.0908 | 7.23 | 177500 | 0.8147 | 0.8640 |
| 0.0809 | 7.25 | 178000 | 0.8325 | 0.8679 |
| 0.0993 | 7.27 | 178500 | 0.8131 | 0.8655 |
| 0.087 | 7.29 | 179000 | 0.8249 | 0.8628 |
| 0.0873 | 7.31 | 179500 | 0.8326 | 0.8661 |
| 0.0889 | 7.33 | 180000 | 0.8171 | 0.8685 |
| 0.0739 | 7.35 | 180500 | 0.8686 | 0.8642 |
| 0.0821 | 7.37 | 181000 | 0.8739 | 0.8669 |
| 0.0981 | 7.39 | 181500 | 0.8558 | 0.8639 |
| 0.0858 | 7.42 | 182000 | 0.8276 | 0.8673 |
| 0.083 | 7.44 | 182500 | 0.8148 | 0.8675 |
| 0.0969 | 7.46 | 183000 | 0.8520 | 0.8630 |
| 0.0851 | 7.48 | 183500 | 0.8604 | 0.8671 |
| 0.0881 | 7.5 | 184000 | 0.8665 | 0.8634 |
| 0.1036 | 7.52 | 184500 | 0.8233 | 0.8642 |
| 0.0874 | 7.54 | 185000 | 0.8293 | 0.8660 |
| 0.0935 | 7.56 | 185500 | 0.8006 | 0.8671 |
| 0.0887 | 7.58 | 186000 | 0.8352 | 0.8637 |
| 0.0897 | 7.6 | 186500 | 0.8309 | 0.8655 |
| 0.0788 | 7.62 | 187000 | 0.8505 | 0.8653 |
| 0.0887 | 7.64 | 187500 | 0.8465 | 0.8657 |
| 0.0909 | 7.66 | 188000 | 0.8582 | 0.8637 |
| 0.0895 | 7.68 | 188500 | 0.8487 | 0.8659 |
| 0.0729 | 7.7 | 189000 | 0.8770 | 0.8636 |
| 0.0758 | 7.72 | 189500 | 0.8717 | 0.8653 |
| 0.0901 | 7.74 | 190000 | 0.8513 | 0.8639 |
| 0.0848 | 7.76 | 190500 | 0.8554 | 0.8661 |
| 0.0985 | 7.78 | 191000 | 0.8259 | 0.8640 |
| 0.091 | 7.8 | 191500 | 0.8483 | 0.8644 |
| 0.0868 | 7.82 | 192000 | 0.8776 | 0.8602 |
| 0.0898 | 7.84 | 192500 | 0.8470 | 0.8634 |
| 0.0959 | 7.86 | 193000 | 0.8344 | 0.8645 |
| 0.0939 | 7.88 | 193500 | 0.8419 | 0.8641 |
| 0.0769 | 7.9 | 194000 | 0.8355 | 0.8673 |
| 0.0808 | 7.92 | 194500 | 0.8642 | 0.8646 |
| 0.0797 | 7.94 | 195000 | 0.8401 | 0.8663 |
| 0.0875 | 7.97 | 195500 | 0.8598 | 0.8638 |
| 0.0896 | 7.99 | 196000 | 0.8624 | 0.8648 |
| 0.0762 | 8.01 | 196500 | 0.8645 | 0.8656 |
| 0.0552 | 8.03 | 197000 | 0.8844 | 0.8661 |
| 0.0598 | 8.05 | 197500 | 0.8870 | 0.8663 |
| 0.0528 | 8.07 | 198000 | 0.8866 | 0.8679 |
| 0.0679 | 8.09 | 198500 | 0.8835 | 0.8657 |
| 0.0628 | 8.11 | 199000 | 0.9017 | 0.8635 |
| 0.0644 | 8.13 | 199500 | 0.8979 | 0.8647 |
| 0.0446 | 8.15 | 200000 | 0.9144 | 0.8656 |
| 0.0524 | 8.17 | 200500 | 0.9116 | 0.8651 |
| 0.0561 | 8.19 | 201000 | 0.9281 | 0.8639 |
| 0.0525 | 8.21 | 201500 | 0.9115 | 0.8672 |
| 0.0646 | 8.23 | 202000 | 0.8933 | 0.8663 |
| 0.0691 | 8.25 | 202500 | 0.8591 | 0.8662 |
| 0.0708 | 8.27 | 203000 | 0.8525 | 0.8683 |
| 0.0598 | 8.29 | 203500 | 0.8663 | 0.8689 |
| 0.0513 | 8.31 | 204000 | 0.8671 | 0.8704 |
| 0.0564 | 8.33 | 204500 | 0.8597 | 0.8694 |
| 0.0619 | 8.35 | 205000 | 0.8645 | 0.8683 |
| 0.0563 | 8.37 | 205500 | 0.8848 | 0.8658 |
| 0.0615 | 8.39 | 206000 | 0.8728 | 0.8663 |
| 0.0668 | 8.41 | 206500 | 0.8925 | 0.8657 |
| 0.0592 | 8.43 | 207000 | 0.8644 | 0.8673 |
| 0.0668 | 8.45 | 207500 | 0.8601 | 0.8700 |
| 0.071 | 8.47 | 208000 | 0.8735 | 0.8682 |
| 0.061 | 8.49 | 208500 | 0.8797 | 0.8662 |
| 0.0627 | 8.52 | 209000 | 0.8742 | 0.8663 |
| 0.0505 | 8.54 | 209500 | 0.9063 | 0.8649 |
| 0.0607 | 8.56 | 210000 | 0.8940 | 0.8677 |
| 0.0569 | 8.58 | 210500 | 0.8953 | 0.8673 |
| 0.0671 | 8.6 | 211000 | 0.8784 | 0.8667 |
| 0.0509 | 8.62 | 211500 | 0.8942 | 0.8678 |
| 0.0526 | 8.64 | 212000 | 0.8968 | 0.8686 |
| 0.0541 | 8.66 | 212500 | 0.8950 | 0.8694 |
| 0.0677 | 8.68 | 213000 | 0.8808 | 0.8665 |
| 0.0552 | 8.7 | 213500 | 0.8923 | 0.8662 |
| 0.053 | 8.72 | 214000 | 0.9118 | 0.8673 |
| 0.0608 | 8.74 | 214500 | 0.9023 | 0.8700 |
| 0.0573 | 8.76 | 215000 | 0.9096 | 0.8681 |
| 0.0621 | 8.78 | 215500 | 0.8872 | 0.8684 |
| 0.0559 | 8.8 | 216000 | 0.8837 | 0.8672 |
| 0.0593 | 8.82 | 216500 | 0.8937 | 0.8675 |
| 0.0633 | 8.84 | 217000 | 0.8746 | 0.8685 |
| 0.0548 | 8.86 | 217500 | 0.9049 | 0.8662 |
| 0.0427 | 8.88 | 218000 | 0.9195 | 0.8685 |
| 0.0623 | 8.9 | 218500 | 0.9146 | 0.8669 |
| 0.0594 | 8.92 | 219000 | 0.9096 | 0.8672 |
| 0.0683 | 8.94 | 219500 | 0.8778 | 0.8679 |
| 0.0659 | 8.96 | 220000 | 0.8552 | 0.8699 |
| 0.0603 | 8.98 | 220500 | 0.8901 | 0.8679 |
| 0.0566 | 9.0 | 221000 | 0.8997 | 0.8677 |
| 0.0443 | 9.02 | 221500 | 0.9009 | 0.8683 |
| 0.0358 | 9.04 | 222000 | 0.9193 | 0.8680 |
| 0.0317 | 9.07 | 222500 | 0.9319 | 0.8687 |
| 0.0384 | 9.09 | 223000 | 0.9155 | 0.8699 |
| 0.0432 | 9.11 | 223500 | 0.9243 | 0.8685 |
| 0.0408 | 9.13 | 224000 | 0.9251 | 0.8693 |
| 0.0443 | 9.15 | 224500 | 0.9322 | 0.8677 |
| 0.0438 | 9.17 | 225000 | 0.9371 | 0.8666 |
| 0.0379 | 9.19 | 225500 | 0.9283 | 0.8693 |
| 0.0411 | 9.21 | 226000 | 0.9147 | 0.8703 |
| 0.036 | 9.23 | 226500 | 0.9167 | 0.8703 |
| 0.0394 | 9.25 | 227000 | 0.9254 | 0.8688 |
| 0.0363 | 9.27 | 227500 | 0.9288 | 0.8704 |
| 0.0492 | 9.29 | 228000 | 0.9242 | 0.8693 |
| 0.0411 | 9.31 | 228500 | 0.9325 | 0.8677 |
| 0.0408 | 9.33 | 229000 | 0.9370 | 0.8690 |
| 0.0326 | 9.35 | 229500 | 0.9417 | 0.8705 |
| 0.038 | 9.37 | 230000 | 0.9480 | 0.8700 |
| 0.0412 | 9.39 | 230500 | 0.9398 | 0.8693 |
| 0.0588 | 9.41 | 231000 | 0.9174 | 0.8707 |
| 0.0417 | 9.43 | 231500 | 0.9204 | 0.8715 |
| 0.0362 | 9.45 | 232000 | 0.9319 | 0.8701 |
| 0.0283 | 9.47 | 232500 | 0.9562 | 0.8696 |
| 0.0353 | 9.49 | 233000 | 0.9525 | 0.8690 |
| 0.0384 | 9.51 | 233500 | 0.9561 | 0.8687 |
| 0.0406 | 9.53 | 234000 | 0.9375 | 0.8715 |
| 0.0356 | 9.55 | 234500 | 0.9575 | 0.8690 |
| 0.044 | 9.57 | 235000 | 0.9429 | 0.8708 |
| 0.0444 | 9.6 | 235500 | 0.9413 | 0.8690 |
| 0.0421 | 9.62 | 236000 | 0.9412 | 0.8689 |
| 0.038 | 9.64 | 236500 | 0.9352 | 0.8695 |
| 0.0355 | 9.66 | 237000 | 0.9362 | 0.8689 |
| 0.04 | 9.68 | 237500 | 0.9403 | 0.8691 |
| 0.0356 | 9.7 | 238000 | 0.9402 | 0.8706 |
| 0.0383 | 9.72 | 238500 | 0.9466 | 0.8692 |
| 0.0534 | 9.74 | 239000 | 0.9378 | 0.8700 |
| 0.0383 | 9.76 | 239500 | 0.9390 | 0.8697 |
| 0.0418 | 9.78 | 240000 | 0.9404 | 0.8694 |
| 0.0335 | 9.8 | 240500 | 0.9390 | 0.8705 |
| 0.0398 | 9.82 | 241000 | 0.9430 | 0.8696 |
| 0.0336 | 9.84 | 241500 | 0.9438 | 0.8698 |
| 0.045 | 9.86 | 242000 | 0.9414 | 0.8703 |
| 0.0401 | 9.88 | 242500 | 0.9425 | 0.8696 |
| 0.0454 | 9.9 | 243000 | 0.9405 | 0.8696 |
| 0.0361 | 9.92 | 243500 | 0.9394 | 0.8696 |
| 0.0458 | 9.94 | 244000 | 0.9400 | 0.8690 |
| 0.0329 | 9.96 | 244500 | 0.9402 | 0.8693 |
| 0.0469 | 9.98 | 245000 | 0.9401 | 0.8691 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.7.1
- Datasets 1.18.3
- Tokenizers 0.11.6
|
alexandrainst/da-sentiment-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"arxiv:1910.09700",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,432 | null | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- mlsum
metrics:
- rouge
model-index:
- name: mt5-small-mlsum_training_sample
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: mlsum
type: mlsum
config: de
split: train
args: de
metrics:
- name: Rouge1
type: rouge
value: 28.2078
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-mlsum_training_sample
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the mlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9727
- Rouge1: 28.2078
- Rouge2: 19.0712
- Rougel: 26.2267
- Rougelsum: 26.9462
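A short usage sketch is given below (the model id is assumed from this card's name, and the input should be a German news article since the model was trained on the `de` split of MLSUM):
```python
from transformers import pipeline

# Assumed id/path for this fine-tuned checkpoint.
summarizer = pipeline("summarization", model="mt5-small-mlsum_training_sample")
article = "Die Bundesregierung hat am Mittwoch neue Maßnahmen vorgestellt, um ..."  # German news text (truncated example)
print(summarizer(article, max_length=64, min_length=10)[0]["summary_text"])
```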
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.3193 | 1.0 | 6875 | 2.1352 | 25.8941 | 17.4672 | 24.2858 | 24.924 |
| 1.2413 | 2.0 | 13750 | 2.0528 | 26.6221 | 18.1166 | 24.8233 | 25.5111 |
| 1.1844 | 3.0 | 20625 | 1.9783 | 27.0518 | 18.3457 | 25.2288 | 25.8919 |
| 1.0403 | 4.0 | 27500 | 1.9487 | 27.8154 | 18.9701 | 25.9435 | 26.6578 |
| 0.9582 | 5.0 | 34375 | 1.9374 | 27.6863 | 18.7723 | 25.7667 | 26.4694 |
| 0.8992 | 6.0 | 41250 | 1.9353 | 27.8959 | 18.919 | 26.0434 | 26.7262 |
| 0.8109 | 7.0 | 48125 | 1.9492 | 28.0644 | 18.8873 | 26.0628 | 26.757 |
| 0.7705 | 8.0 | 55000 | 1.9727 | 28.2078 | 19.0712 | 26.2267 | 26.9462 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
alexandrainst/da-subjectivivity-classification-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"dataset:DDSC/twitter-sent",
"dataset:DDSC/europarl",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 846 | null | ---
tags:
- adapter-transformers
- bert
datasets:
- glue
language:
- en
---
# Adapter `SALT-NLP/pfadapter-bert-base-uncased-qqp-combined-value` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [glue](https://huggingface.co/datasets/glue/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("SALT-NLP/pfadapter-bert-base-uncased-qqp-combined-value", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
Dablio/Dablio | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### black and white design on Stable Diffusion
This is the `<PM_style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
DaisyMak/bert-finetuned-squad-accelerate-10epoch_transformerfrozen | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,907 | null |
---
language:
- pt
thumbnail: "Portuguese BERT for the Legal Domain"
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- transformers
datasets:
- assin
- assin2
- stsb_multi_mt
- rufimelo/PortugueseLegalSentences-v0
widget:
- source_sentence: "O advogado apresentou as provas ao juíz."
sentences:
- "O juíz leu as provas."
- "O juíz leu o recurso."
- "O juíz atirou uma pedra."
example_title: "Example 1"
model-index:
- name: BERTimbau
results:
- task:
name: STS
type: STS
metrics:
- name: Pearson Correlation - assin Dataset
type: Pearson Correlation
value: 0.75481
- name: Pearson Correlation - assin2 Dataset
type: Pearson Correlation
value: 0.80262
- name: Pearson Correlation - stsb_multi_mt pt Dataset
type: Pearson Correlation
value: 0.82178
---
# rufimelo/Legal-BERTimbau-sts-base-ma
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
rufimelo/Legal-BERTimbau-sts-base-ma is based on Legal-BERTimbau-base, which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large.
It is adapted to the Portuguese legal domain and trained for STS on Portuguese datasets.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]
model = SentenceTransformer('rufimelo/Legal-BERTimbau-sts-base-ma-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-sts-base-ma-v2')
model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-sts-base-ma-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results STS
| Model| Assin | Assin2|stsb_multi_mt pt| avg|
| ---------------------------------------- | ---------- | ---------- |---------- |---------- |
| Legal-BERTimbau-sts-base| 0.71457| 0.73545 | 0.72383|0.72462|
| Legal-BERTimbau-sts-base-ma| 0.74874 | 0.79532|0.82254 |0.78886|
| Legal-BERTimbau-sts-base-ma-v2| 0.75481 | 0.80262|0.82178|0.79307|
| Legal-BERTimbau-base-TSDAE-sts|0.78814 |0.81380 |0.75777|0.78657|
| Legal-BERTimbau-sts-large| 0.76629| 0.82357 | 0.79120|0.79369|
| Legal-BERTimbau-sts-large-v2| 0.76299 | 0.81121|0.81726 |0.79715|
| Legal-BERTimbau-sts-large-ma| 0.76195| 0.81622 | 0.82608|0.80142|
| Legal-BERTimbau-sts-large-ma-v2| 0.7836| 0.8462| 0.8261| 0.81863|
| Legal-BERTimbau-sts-large-ma-v3| 0.7749| **0.8470**| 0.8364| **0.81943**|
| Legal-BERTimbau-large-v2-sts| 0.71665| 0.80106| 0.73724| 0.75165|
| Legal-BERTimbau-large-TSDAE-sts| 0.72376| 0.79261| 0.73635| 0.75090|
| Legal-BERTimbau-large-TSDAE-sts-v2| 0.81326| 0.83130| 0.786314| 0.81029|
| Legal-BERTimbau-large-TSDAE-sts-v3|0.80703 |0.82270 |0.77638 |0.80204 |
| ---------------------------------------- | ---------- |---------- |---------- |---------- |
| BERTimbau base Fine-tuned for STS|**0.78455** | 0.80626|0.82841|0.80640|
| BERTimbau large Fine-tuned for STS|0.78193 | 0.81758|0.83784|0.81245|
| ---------------------------------------- | ---------- |---------- |---------- |---------- |
| paraphrase-multilingual-mpnet-base-v2| 0.71457| 0.79831 |0.83999 |0.78429|
| paraphrase-multilingual-mpnet-base-v2 Fine-tuned with assin(s)| 0.77641|0.79831 |**0.84575**|0.80682|
## Training
rufimelo/Legal-BERTimbau-sts-base-ma-v2 is based on Legal-BERTimbau-base, which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) base.
First, due to the lack of Portuguese datasets, it was trained with multilingual knowledge distillation.
For the multilingual knowledge distillation step, the teacher model was 'sentence-transformers/paraphrase-xlm-r-multilingual-v1', the supported source language was English, and the language to be learned was Portuguese.
It was then trained for Semantic Textual Similarity through a fine-tuning stage on the [assin](https://huggingface.co/datasets/assin), [assin2](https://huggingface.co/datasets/assin2) and [stsb_multi_mt pt](https://huggingface.co/datasets/stsb_multi_mt) datasets.
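A minimal sketch of that distillation stage with sentence-transformers is shown below (the student id, parallel-data file name and hyperparameters are illustrative assumptions, not the exact training setup):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.datasets import ParallelSentencesDataset

teacher = SentenceTransformer("sentence-transformers/paraphrase-xlm-r-multilingual-v1")
student = SentenceTransformer("rufimelo/Legal-BERTimbau-base")  # assumed id of the student model

# Tab-separated English<TAB>Portuguese sentence pairs (placeholder file name).
train_data = ParallelSentencesDataset(student_model=student, teacher_model=teacher)
train_data.load_data("parallel-sentences-en-pt.tsv.gz")

train_dataloader = DataLoader(train_data, shuffle=True, batch_size=32)
train_loss = losses.MSELoss(model=student)  # the student learns to mimic the teacher's embeddings

student.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=1000)
```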
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
If you use this work, please cite:
```bibtex
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
@inproceedings{fonseca2016assin,
title={ASSIN: Avaliacao de similaridade semantica e inferencia textual},
author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S},
booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal},
pages={13--15},
year={2016}
}
@inproceedings{real2020assin,
title={The assin 2 shared task: a quick overview},
author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo},
booktitle={International Conference on Computational Processing of the Portuguese Language},
pages={406--412},
year={2020},
organization={Springer}
}
@InProceedings{huggingface:dataset:stsb_multi_mt,
title = {Machine translated multilingual STS benchmark dataset.},
author={Philip May},
year={2021},
url={https://github.com/PhilipMay/stsb-multi-mt}
}
``` |
Daivakai/DialoGPT-small-saitama | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-09-19T13:52:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9357296670531721
- name: Recall
type: recall
value: 0.9506900033658701
- name: F1
type: f1
value: 0.9431505133984472
- name: Accuracy
type: accuracy
value: 0.9864602342968152
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0621
- Precision: 0.9357
- Recall: 0.9507
- F1: 0.9432
- Accuracy: 0.9865
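A quick inference sketch (the path is a placeholder for wherever this checkpoint is stored):
```python
from transformers import pipeline

# Placeholder path; point it at this fine-tuned checkpoint.
ner = pipeline("token-classification", model="path/to/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```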
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0861 | 1.0 | 1756 | 0.0695 | 0.9142 | 0.9293 | 0.9217 | 0.9811 |
| 0.0341 | 2.0 | 3512 | 0.0632 | 0.9256 | 0.9478 | 0.9366 | 0.9856 |
| 0.0178 | 3.0 | 5268 | 0.0621 | 0.9357 | 0.9507 | 0.9432 | 0.9865 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DanBot/TCRsynth | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: fantastic4-finetuned-vi-to-en-PhoMT-demo-T5-NLPHUST-Small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fantastic4-finetuned-vi-to-en-PhoMT-demo-T5-NLPHUST-Small
This model is a fine-tuned version of [NlpHUST/t5-vi-en-small](https://huggingface.co/NlpHUST/t5-vi-en-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Bleu: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:----:|
| 18.9866 | 1.0 | 2268 | nan | 0.0 |
| 0.0 | 2.0 | 4536 | nan | 0.0 |
| 0.0 | 3.0 | 6804 | nan | 0.0 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DanL/scientific-challenges-and-directions | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:DanL/scientific-challenges-and-directions-dataset",
"arxiv:2108.13751",
"transformers",
"generated_from_trainer"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 134 | null | ---
license: mit
---
### Fold Structure on Stable Diffusion
This is the `<fold-geo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
Danbi/distilroberta-base-finetuned-wikitext2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Abdulmateen/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Abdulmateen/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.7622
- Epoch: 7
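For inference with the TensorFlow weights, a minimal sketch (the model id is assumed from this card's name):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Assumed model id; replace with the actual location of the checkpoint.
checkpoint = "Abdulmateen/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tokenizer("I loved this book, the plot kept me hooked until the end.", return_tensors="tf")
summary_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```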
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 5232, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 12.2536 | 0 |
| 7.0375 | 1 |
| 6.0369 | 2 |
| 5.4913 | 3 |
| 5.1730 | 4 |
| 4.9774 | 5 |
| 4.8412 | 6 |
| 4.7622 | 7 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Danih1502/t5-base-finetuned-en-to-de | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4066
- Accuracy: 0.875
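A minimal inference sketch (checkpoint path and image file are placeholders):
```python
from PIL import Image
from transformers import pipeline

# Placeholder path; point it at this fine-tuned checkpoint.
classifier = pipeline("image-classification", model="path/to/swin-tiny-patch4-window7-224-finetuned-eurosat")
image = Image.open("example.jpg").convert("RGB")
print(classifier(image))
```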
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.57 | 1 | 0.7569 | 0.5417 |
| No log | 1.57 | 2 | 0.5000 | 0.8333 |
| No log | 2.57 | 3 | 0.4066 | 0.875 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Danih1502/t5-small-finetuned-en-to-de | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
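A quick inference sketch for German NER (the path is a placeholder):
```python
from transformers import pipeline

# Placeholder path; point it at this fine-tuned checkpoint.
ner = pipeline("token-classification", model="path/to/xlm-roberta-base-finetuned-panx-de", aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```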
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Darein/Def | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: CoreyMorris/testpyramidsrnd
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
DarkWolf/kn-electra-small | [
"pytorch",
"electra",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 512.00 +/- 131.55
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga matemato -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga matemato
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Darkecho789/email-gen | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- asvspoof2019
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-deepfake-0919
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-deepfake-0919
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the asvspoof2019 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3335
- Accuracy: 0.8974
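A minimal inference sketch (the path and audio file are placeholders; wav2vec2-base expects 16 kHz mono audio):
```python
from transformers import pipeline

# Placeholder path; point it at this fine-tuned checkpoint.
detector = pipeline("audio-classification", model="path/to/wav2vec2-base-finetuned-deepfake-0919")
print(detector("utterance.wav"))
```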
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3025 | 1.0 | 1586 | 0.3335 | 0.8974 |
| 0.4214 | 2.0 | 3172 | 0.3331 | 0.8974 |
| 0.4378 | 3.0 | 4758 | 0.3307 | 0.8974 |
| 0.3993 | 4.0 | 6344 | 0.3331 | 0.8974 |
| 0.2839 | 5.0 | 7930 | 0.3315 | 0.8974 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DarkestSky/distilbert-base-uncased-finetuned-ner | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### Brunnya on Stable Diffusion
This is the `<Brunnya>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
Darkrider/covidbert_medmarco | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2010.05987",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 35 | null | ---
license: mit
---
### Jos de Kat on Stable Diffusion
This is the `<kat-jos>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
Darren/darren | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-roberta-base-sentiment-sentiment-memes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-sentiment-sentiment-memes
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9582
- Accuracy: 0.8187
- Precision: 0.8199
- Recall: 0.8187
- F1: 0.8191
## Model description
More information needed
## Intended uses & limitations
More information needed
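A minimal inference sketch with the `transformers` text-classification pipeline; the repository id is a placeholder for wherever this fine-tuned checkpoint is hosted, and the emitted labels follow the base model's convention (negative/neutral/positive) unless they were renamed during fine-tuning.
```python
from transformers import pipeline

# Placeholder repo id: replace with the actual location of this checkpoint
classifier = pipeline("text-classification",
                      model="<namespace>/twitter-roberta-base-sentiment-sentiment-memes")

print(classifier("this meme is actually hilarious"))
```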
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4673 | 1.0 | 2147 | 0.4373 | 0.7647 | 0.8180 | 0.7647 | 0.7657 |
| 0.3987 | 2.0 | 4294 | 0.5528 | 0.7783 | 0.8096 | 0.7783 | 0.7804 |
| 0.3194 | 3.0 | 6441 | 0.6432 | 0.7752 | 0.7767 | 0.7752 | 0.7680 |
| 0.2855 | 4.0 | 8588 | 0.6820 | 0.7814 | 0.8034 | 0.7814 | 0.7837 |
| 0.2575 | 5.0 | 10735 | 0.7427 | 0.7720 | 0.8070 | 0.7720 | 0.7741 |
| 0.2154 | 6.0 | 12882 | 0.8225 | 0.7987 | 0.8062 | 0.7987 | 0.8004 |
| 0.2195 | 7.0 | 15029 | 0.8361 | 0.8071 | 0.8086 | 0.8071 | 0.8077 |
| 0.2322 | 8.0 | 17176 | 0.8842 | 0.8056 | 0.8106 | 0.8056 | 0.8069 |
| 0.2102 | 9.0 | 19323 | 0.9188 | 0.8129 | 0.8144 | 0.8129 | 0.8135 |
| 0.1893 | 10.0 | 21470 | 0.9582 | 0.8187 | 0.8199 | 0.8187 | 0.8191 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 1.15.2.dev0
- Tokenizers 0.10.1
|
DarshanDeshpande/marathi-distilbert | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"mr",
"dataset:Oscar Corpus, News, Stories",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Darya/layoutlmv2-finetuned-funsd-test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
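A minimal inference sketch using the token-classification pipeline; the repository id is a placeholder, and `aggregation_strategy="simple"` merges subword pieces into whole entity spans.
```python
from transformers import pipeline

# Placeholder repo id: replace with the actual location of this checkpoint
ner = pipeline("token-classification",
               model="<namespace>/xlm-roberta-base-finetuned-panx-de-fr",
               aggregation_strategy="simple")

print(ner("Angela Merkel a rencontré Emmanuel Macron à Berlin."))
```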
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
DataikuNLP/distiluse-base-multilingual-cased-v1 | [
"pytorch",
"distilbert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | {
"architectures": [
"DistilBertModel"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: mit
---
### Singsing doll on Stable Diffusion
This is the `<singsing>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
DavidAMcIntosh/small-rick | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-09-19T16:27:36Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.9213082901554404
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1117
- F1: 0.9213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5779 | 1.0 | 191 | 0.2832 | 0.8091 |
| 0.2735 | 2.0 | 382 | 0.1570 | 0.8943 |
| 0.1769 | 3.0 | 573 | 0.1117 | 0.9213 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
DavidSpaceG/MSGIFSR | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-09-19T16:28:48Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1642
- F1: 0.8589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2886 | 1.0 | 715 | 0.1804 | 0.8293 |
| 0.1458 | 2.0 | 1430 | 0.1574 | 0.8494 |
| 0.0931 | 3.0 | 2145 | 0.1642 | 0.8589 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Davlan/bert-base-multilingual-cased-finetuned-amharic | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 109 | 2022-09-19T16:30:28Z | ---
license: mit
---
### Singsing on Stable Diffusion
This is the `<singsing>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
Davlan/bert-base-multilingual-cased-finetuned-igbo | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | 2022-09-19T16:46:40Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8124233755619126
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2630
- F1: 0.8124
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8193 | 1.0 | 70 | 0.3200 | 0.7356 |
| 0.2773 | 2.0 | 140 | 0.2841 | 0.7882 |
| 0.1807 | 3.0 | 210 | 0.2630 | 0.8124 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Davlan/bert-base-multilingual-cased-finetuned-kinyarwanda | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | 2022-09-19T17:00:33Z | ---
language: de
datasets:
- Legal-Entity-Recognition
---
### German BERT for Legal NER
#### Use:
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("harshildarji/gbert-legal-ner", use_auth_token="AUTH_TOKEN")
model = AutoModelForTokenClassification.from_pretrained("harshildarji/gbert-legal-ner", use_auth_token="AUTH_TOKEN")
ner = pipeline("ner", model=model, tokenizer=tokenizer)
example = "1. Das Bundesarbeitsgericht ist gemäß § 9 Abs. 2 Satz 2 ArbGG iVm. § 201 Abs. 1 Satz 2 GVG für die beabsichtigte Klage gegen den Bund zuständig ."
results = ner(example)
print(results)
```
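The snippet above returns one prediction per (sub)token. If whole entity spans are preferred, `aggregation_strategy="simple"` can be passed when building the pipeline; this is a small variation on the example, not part of the original card:
```python
# Variation on the example above: merge subword tokens into entity spans
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
for entity in ner(example):
    print(entity["entity_group"], "->", entity["word"])
```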
#### Classes:
|Abbreviation|Class|
|----|----|
|PER|Person|
|RR|Judge|
|AN|Lawyer|
|LD|Country|
|ST|City|
|STR|Street|
|LDS|Landscape|
|ORG|Organization|
|UN|Company|
|INN|Institution|
|GRT|Court|
|MRK|Brand|
|GS|Law|
|VO|Ordinance|
|EUN|European legal norm|
|VS|Regulation|
|VT|Contract|
|RS|Court decision|
|LIT|Legal literature|
---
Please reference our work when using the model.
```bibtex
@conference{icaart23,
author={Harshil Darji. and Jelena Mitrović. and Michael Granitzer.},
title={German BERT Model for Legal Named Entity Recognition},
booktitle={Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART,},
year={2023},
pages={723-728},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011749400003393},
isbn={978-989-758-623-1},
issn={2184-433X},
}
``` |
Davlan/bert-base-multilingual-cased-finetuned-luo | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6886160714285715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4043
- F1: 0.6886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1347 | 1.0 | 50 | 0.5771 | 0.4880 |
| 0.5066 | 2.0 | 100 | 0.4209 | 0.6582 |
| 0.3631 | 3.0 | 150 | 0.4043 | 0.6886 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Davlan/bert-base-multilingual-cased-finetuned-swahili | [
"pytorch",
"tf",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 67 | null | ---
thumbnail: "url to a thumbnail used in social sharing"
library_name: keras
tags:
- keras
widget:
- src: https://huggingface.co/datasets/test_cats/cifar1.jpg
example_title: Tiger
--- |
Davlan/bert-base-multilingual-cased-finetuned-wolof | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2022-09-19T17:18:44Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1348
- F1: 0.8844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3055 | 1.0 | 835 | 0.1755 | 0.8272 |
| 0.1561 | 2.0 | 1670 | 0.1441 | 0.8727 |
| 0.1016 | 3.0 | 2505 | 0.1348 | 0.8844 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Davlan/bert-base-multilingual-cased-masakhaner | [
"pytorch",
"tf",
"bert",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 88 | 2022-09-19T17:46:12Z | ---
license: mit
---
### F-22 on Stable Diffusion
This is the `<f-22>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
Davlan/bert-base-multilingual-cased-ner-hrl | [
"pytorch",
"tf",
"bert",
"token-classification",
"transformers",
"autotrain_compatible",
"has_space"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 269,898 | 2022-09-19T17:49:30Z | ---
language: en
thumbnail: http://www.huggingtweets.com/chriscantino/1663609825906/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1554673291570212864/RWwZGZ1E_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">cantino.eth</div>
<div style="text-align: center; font-size: 14px;">@chriscantino</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from cantino.eth.
| Data | cantino.eth |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 38 |
| Short tweets | 666 |
| Tweets kept | 2546 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3owatwcf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chriscantino's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2b5rm6g9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2b5rm6g9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chriscantino')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Davlan/byt5-base-eng-yor-mt | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: mit
---
### Jin Kisaragi on Stable Diffusion
This is the `<jin-kisaragi>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
Davlan/byt5-base-yor-eng-mt | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2022-09-19T17:59:14Z | ---
license: mit
---
### Depthmap Style on Stable Diffusion
This is the `<depthmap>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
Davlan/distilbert-base-multilingual-cased-ner-hrl | [
"pytorch",
"tf",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible",
"has_space"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 123,856 | 2022-09-19T18:01:57Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: apache-2.0
language:
- id
library_name: sentence-transformers
---
# indo-sentence-bert-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Ibukota Perancis adalah Paris",
"Menara Eifel terletak di Paris, Perancis",
"Pizza adalah makanan khas Italia",
"Saya kuliah di Carneige Mellon University"]
model = SentenceTransformer('firqaaa/indo-sentence-bert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
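For clustering or semantic search, the resulting embeddings can be compared with cosine similarity. A short follow-up sketch using `sentence_transformers.util` with the model and sentences from the example above:
```python
from sentence_transformers import util

# Pairwise cosine similarities between the example sentences
cosine_scores = util.cos_sim(embeddings, embeddings)
print(cosine_scores)
```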
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Ibukota Perancis adalah Paris",
"Menara Eifel terletak di Paris, Perancis",
"Pizza adalah makanan khas Italia",
"Saya kuliah di Carneige Mellon University"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('firqaaa/indo-sentence-bert-base')
model = AutoModel.from_pretrained('firqaaa/indo-sentence-bert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 19644 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 9930,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
|
Davlan/m2m100_418M-eng-yor-mt | [
"pytorch",
"m2m_100",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"M2M100ForConditionalGeneration"
],
"model_type": "m2m_100",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-09-19T18:17:04Z | ---
license: mit
---
### crested gecko on Stable Diffusion
This is the `<crested-gecko>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
Davlan/m2m100_418M-yor-eng-mt | [
"pytorch",
"m2m_100",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"M2M100ForConditionalGeneration"
],
"model_type": "m2m_100",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2022-09-19T18:17:47Z | ---
license: mit
---
### GrisStyle on Stable Diffusion
This is the `<gris>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





























|
Davlan/mbart50-large-yor-eng-mt | [
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 425.50 +/- 151.35
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga adil-o -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga adil-o
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 3),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Davlan/mt5-small-pcm-en | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-09-19T18:58:07Z | ---
license: mit
---
### ikea-fabler on Stable Diffusion
This is the `<ikea-fabler>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
Davlan/xlm-roberta-base-finetuned-kinyarwanda | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 61 | null | ---
license: mit
---
### Joe Mad on Stable Diffusion
This is the `<joe-mad>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
Davlan/xlm-roberta-base-masakhaner | [
"pytorch",
"xlm-roberta",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-09-19T22:35:16Z | ---
language:
- en
tags:
- text-classification
license: cc0-1.0
library: Transformers
widget:
- text: "sdfsdfa"
example_title: "Gibberish"
- text: "idkkkkk"
example_title: "Uncertainty"
- text: "Because you asked"
example_title: "Refusal"
- text: "I am a cucumber"
example_title: "High-risk"
- text: "My job went remote and I needed to take care of my kids"
example_title: "Valid"
---
# SANDS
_Semi-Automated Non-response Detection for Surveys_
Non-response detection designed to be used for open-ended survey text in conjunction with human reviewers.
## Model Details
Model Description: This model is a fine-tuned version of the supervised SimCSE BERT base uncased model. It was introduced at [AAPOR](https://www.aapor.org/) 2022 at the talk _Toward a Semi-automated item nonresponse detector model for open-response data_. The model is uncased, so it treats `important`, `Important`, and `ImPoRtAnT` the same.
* Developed by: [National Center for Health Statistics](https://www.cdc.gov/nchs/index.htm), Centers for Disease Control and Prevention
* Model Type: Text Classification
* Language(s): English
* License: Apache-2.0
Parent Model: For more details about SimCSE, we encourage users to check out the SimCSE [Github repository](https://github.com/princeton-nlp/SimCSE), and the [base model](https://huggingface.co/princeton-nlp/sup-simcse-bert-base-uncased) on HuggingFace.
## How to Get Started with the Model
### Example of classification of a set of responses:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import pandas as pd
# Load the model
model_location = "NCHS/SANDS"
model = AutoModelForSequenceClassification.from_pretrained(model_location)
tokenizer = AutoTokenizer.from_pretrained(model_location)
# Create example responses to test
responses = [
"sdfsdfa",
"idkkkkk",
"Because you asked",
"I am a cucumber",
"My job went remote and I needed to take care of my kids",
]
# Run the model and compute a score for each response
with torch.no_grad():
tokens = tokenizer(responses, padding=True, truncation=True, return_tensors="pt")
output = model(**tokens)
scores = torch.softmax(output.logits, dim=1).numpy()
# Display the scores in a table
columns = ["Gibberish", "Uncertainty", "Refusal", "High-risk", "Valid"]
df = pd.DataFrame(scores, columns=columns)
df.index.name = "Response"
print(df)
```
|Response| Gibberish| Uncertainty| Refusal| High-risk| Valid|
|--------|---------------|-----------------|-----------|-----------------|-----------|
|sdfsdfa| 0.998| 0.000| 0.000| 0.000| 0.000|
|idkkkkk| 0.002| 0.995| 0.001| 0.001| 0.001|
|Because you asked| 0.001| 0.001| 0.976| 0.006| 0.014|
|I am a cucumber| 0.001| 0.001| 0.002| 0.797| 0.178|
|My job went remote and I needed to take care of my kids| 0.000| 0.000| 0.000| 0.000| 1.000|
Alternatively, you can load the model using a pipeline
```python
from transformers import pipeline
pipe = pipeline("text-classification", "NCHS/SANDS")
print( pipe(responses) )
```
```python
[{'label': 'Gibberish', 'score': 0.9978908896446228},
{'label': 'Uncertainty', 'score': 0.9950007796287537},
{'label': 'Refusal', 'score': 0.9775006771087646},
{'label': 'High-risk', 'score': 0.9804121255874634},
{'label': 'Valid', 'score': 0.9997561573982239}]
```
With the pipeline, set `top_k` to see the full output:
```python
pipe(responses, top_k=5)
```
Finally, if you'd like to use a local GPU, set the device to the GPU number (usually 0).
```python
pipe = pipeline("text-classification", "NCHS/SANDS", device=0)
```
## Uses
### Direct Uses
This model is intended to be used on survey responses during data cleaning, to help researchers filter out non-responsive or junk responses and aid research and analysis. The model returns a score for each response across 5 categories (Gibberish, Refusal, Uncertainty, High-risk, and Valid) as a probability vector that sums to 1.
### Response types
+ **Gibberish**: Nonsensical response where the respondent entered text without regard for English syntax. Examples: `ksdhfkshgk` and `sadsadsadsadsadsadsad`
+ **Refusal**: Responses in valid English that are either a direct refusal to answer the question asked or a response that provides no contextual relationship to the question asked. Examples: `Because` or `Meow`.
+ **Uncertainty**: Responses where the respondent does not understand the question, does not know the answer to the question, or does not know how to respond to the question. Examples: `I dont know` or `unsure what you are asking`.
+ **High-Risk**: Responses that may be valid depending on the context and content of the question. These responses require human subject matter expertise to classify as a valid response or not. Examples: `Necessity` or `I am a cucumber`
+ **Valid**: Responses that answer the question at hand and provide insight into the respondent's thoughts on the subject matter of the question. Examples: `COVID began for me when my children’s school went online and I needed to stay home to watch them` or `staying home, avoiding crowds, still wear masks`
## Misuses and Out-of-scope Use
The model has been trained to specifically identify survey non-response in open ended responses where the respondent taking the survey has given a response but their answer does not respond to the question at hand or providing any meaningful insight. Some examples of these types of responses are `meow`, `ksdhfkshgk`, or `idk`. The model was fine-tuned on 3,000 labeled open-ended responses to web probes on questions relating to the COVID-19 pandemic gathered from the [Research and Development Survey or RANDS](https://www.cdc.gov/nchs/rands/index.htm) conducted by the Division of Research and Methodology at the National Center for Health Statistics. Web probes are questions implementing probing techniques from cognitive interviewing for use in survey question design and are different than traditional open-ended survey questions. The context of our labeled responses limited in focus on both COVID and health responses, so responses outside this scope may notice a drop in performance.
The responses the model is trained on also come from both web- and phone-based open-ended probes. There may be limitations in model effectiveness with more traditional open-ended survey questions or with responses provided in other media.
This model does not assess the factual accuracy of responses or filter out responses with different demographic biases. It was not trained to verify facts about people or events, so using the model for such classification is out of scope.
We did not train the model to recognize non-response in any language other than English. Responses in languages other than English are out of scope and the model will perform poorly on them. Any correct classifications are a result of the base SimCSE or BERT models.
## Risks, Limitations, and Biases
To investigate whether there were differences between demographic groups in sensitivity and specificity, we conducted two-tailed Z-tests across demographic groups. These included education (some college or less and bachelor’s or more), sex (male or female), mode (computer or telephone), race and ethnicity (non-Hispanic White, non-Hispanic Black, Hispanic, and all others who are non-Hispanic), and age (18-29, 30-44, 45-59, and 60+). There were 4,813 responses to 3 probes. To control the family-wise error rate, the Bonferroni correction was applied to the alpha level (α < 0.00167).
There were statistically significant differences in specificity between education levels, mode, and White and Black respondents. There were no statistically significant differences in sensitivity. Respondents with some college or less had lower specificity compared to those with more education (0.73 versus 0.80, p < 0.0001). Respondents who used a smartphone or computer to complete their survey had a higher specificity than those who completed the survey over the telephone (0.77 versus 0.70, p < 0.0001). Black respondents had a lower specificity than White respondents (0.65 versus 0.78, p < 0.0001). Effect sizes for education and mode were small (h = 0.17 and h = 0.16, respectively) while the effect size for race was between small and medium (h = 0.28).
As the model was fine-tuned from SimCSE, itself fine-tuned from BERT, it will reproduce all biases inherent in these base models. Due to tokenization, the model may incorrectly classify typos, especially in acronyms. For example: `LGBTQ` is valid, while `LBGTQ` is classified as gibberish.
## Training
#### Training Data
The model was fine-tuned on 3,200 labeled open-ended responses from [RANDS during COVID 19 Rounds 1 and 2](https://www.cdc.gov/nchs/rands/index.htm). The base SimCSE BERT model was trained on BookCorpus and English Wikipedia.
#### Training procedure
+ Learning rate: 5e-5
+ Batch size: 16
+ Number training epochs: 4
+ Base Model pooling dimension: 768
+ Number of labels: 5
## Suggested citation
```bibtex
@misc{cibellihibben2023sands,
  title={Semi-Automated Nonresponse Detection for Open-text Survey Data},
  author={Kristen Cibelli Hibben and Zachary Smith and Ben Rogers and Valerie Ryan and Paul Scanlon and Kristen Miller and Travis Hoppe},
  year={2023},
  url={https://huggingface.co/NCHS/SANDS},
  doi={10.57967/hf/0414}
}
```
## Open source licence
Model and code, including source files and code samples if any in the content, are released as open source under the [Creative Commons Universal Public Domain](https://creativecommons.org/publicdomain/zero/1.0/). This means you can use the code, model, and content in this repository, except for any official trademarks, in your own projects.
Open source projects are made available and contributed to under licenses that include terms that, for the protection of contributors, make clear that the projects are offered "as-is", without warranty, and disclaiming liability for damages resulting from using the projects. This model is no different. The open content license it is offered under includes such terms.
|
Davlan/xlm-roberta-base-wikiann-ner | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 235 | 2022-09-19T23:10:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.94
- name: F1
type: f1
value: 0.940258456511073
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1560
- Accuracy: 0.94
- F1: 0.9403
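As a minimal usage sketch, the checkpoint can be loaded with the `text-classification` pipeline; the hub path below is assumed from this card's title and may differ from the actual repository path:
```python
from transformers import pipeline

# Assumed hub path; replace with the actual repository id if it differs.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
```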
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 1000 | 0.2056 | 0.928 | 0.9284 |
| 0.3151 | 2.0 | 2000 | 0.1560 | 0.94 | 0.9403 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.10.2+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Dayout/test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### wojaks-now on Stable Diffusion
This is the `<red-wojak>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
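A minimal loading sketch, assuming the concept is published under the `sd-concepts-library` organization (the exact repository path is not stated here) and that a recent `diffusers` release with `load_textual_inversion` is installed:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Loads the learned <red-wojak> token embedding into the text encoder.
pipe.load_textual_inversion("sd-concepts-library/wojaks-now")

image = pipe("a portrait of <red-wojak> in a renaissance painting").images[0]
image.save("red-wojak.png")
```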
Here is the new concept you will be able to use as an `object`:



|
Dbluciferm3737/Idk | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-09-20T00:43:52Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/994592419705274369/RLplF55e_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1571030673078591490/TqoPeGER_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1511102924310544387/j6E29xq6_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MrBeast & xQc & Mark</div>
<div style="text-align: center; font-size: 14px;">@markiplier-mrbeast-xqc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MrBeast & xQc & Mark.
| Data | MrBeast | xQc | Mark |
| --- | --- | --- | --- |
| Tweets downloaded | 3248 | 3241 | 3226 |
| Retweets | 119 | 116 | 306 |
| Short tweets | 725 | 410 | 392 |
| Tweets kept | 2404 | 2715 | 2528 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3p1p4x3v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @markiplier-mrbeast-xqc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/13fbl2ac) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/13fbl2ac/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/markiplier-mrbeast-xqc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DecafNosebleed/ScaraBot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions from the Deep RL Course
# notebook; they are not part of a published package.
model = load_from_hub(repo_id="Ricardmc99/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Declan/FoxNews_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-09-20T07:59:19Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: michael20at/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Declan/FoxNews_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: S1d-dha-nth3/ncert_bio
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# S1d-dha-nth3/ncert_bio
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.6150
- Validation Loss: 2.5873
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -647, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5434 | 2.8928 | 0 |
| 2.9142 | 2.6476 | 1 |
| 2.6884 | 2.5008 | 2 |
| 2.6079 | 2.5775 | 3 |
| 2.5748 | 2.5737 | 4 |
| 2.6031 | 2.5074 | 5 |
| 2.6237 | 2.5028 | 6 |
| 2.5849 | 2.5862 | 7 |
| 2.6154 | 2.4751 | 8 |
| 2.5584 | 2.4866 | 9 |
| 2.6107 | 2.5268 | 10 |
| 2.5852 | 2.5659 | 11 |
| 2.5915 | 2.5768 | 12 |
| 2.5678 | 2.7020 | 13 |
| 2.6150 | 2.5873 | 14 |
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Declan/FoxNews_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- RUDOLPH
- text-image
- image-text
- decoder
---
# RUDOLPH-2.7B (XL)
RUDOLPH: One Hyper-Tasking Transformer Can be Creative as DALL-E and GPT-3 and Smart as CLIP
<img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/RUDOLPH.png" width=60% border="2"/>
Model was trained by [Sber AI](https://github.com/sberbank-ai) team.
# Model Description
**RU**ssian **D**ecoder **O**n **L**anguage **P**icture **H**yper-tasking (**RUDOLPH**) **2.7B** is the largest text-image-text transformer designed for easy fine-tuning on a range of tasks: from generating images from text descriptions and image classification to visual question answering and more. This model demonstrates the power of Hyper-tasking Transformers.
*A hyper-tasking model is a generalized multi-tasking model, i.e., a model that can solve almost all tasks within its supported modalities, mandatorily including mutual pairwise translations between modalities (two modalities in the case of RUDOLPH: images and Russian texts).*
* Tasks: ` text2image generation, self reranking, text ranking, image ranking, image2text generation, zero-shot image classification, text2text generation, text qa, math qa, image captioning, image generation, text recognition in the wild, visual qa, and so on`
* Language: ` Russian`
* Type: ` decoder`
* Num Parameters: ` 2.7B`
* Training Data Volume: ` 119 million text-image pairs, 60 million text paragraphs`
# Details of architecture
<img src=https://raw.githubusercontent.com/ai-forever/ru-dolph/master/pics/scheme-rudolph_27B.jpg height="20" border="2"/>
The maximum sequence length that this model may be used with depends on the modality: 384, 576, and 128 tokens for the left text, image, and right text, respectively.
RUDOLPH 2.7B is a Transformer-based decoder model with the following parameters:
* num\_layers (32) — Number of hidden layers in the Transformer decoder.
* hidden\_size (2560) — Dimensionality of the hidden layers.
* num\_attention\_heads (32) — Number of attention heads for each attention layer.
# Sparse Attention Masks
The primary proposed method is to modify the sparse transformer's attention mask to better control modalities. This allows the model to compute transitions between modalities in both directions, unlike the similar DALL-E Transformer, which used only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right for auto-regressive text generation conditioned on both the image and the left text.
<img src="https://raw.githubusercontent.com/lizagonch/ru-dolph/develop_v1/pics/attention_masks_2700m.png" height="20" border="2"/>
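As a rough illustration of that layout (not the actual RUDOLPH implementation), a causal mask over the concatenated [left text, image, right text] stream can be sketched as follows, with the per-layer sparse patterns inside the image block omitted:
```python
import torch

L_TEXT, IMAGE, R_TEXT = 384, 576, 128  # segment lengths listed above
n = L_TEXT + IMAGE + R_TEXT

# A plain causal (lower-triangular) mask already lets right-text tokens attend
# to the full image and left-text prefix (the "image to right text" direction);
# RUDOLPH additionally sparsifies attention inside the image block.
mask = torch.tril(torch.ones(n, n, dtype=torch.bool))
print(mask.shape)  # torch.Size([1088, 1088])
```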
# Authors
+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
+ Denis Dimitrov: [Github](https://github.com/denndimitrov) |
Declan/NewYorkTimes_model_v1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### Jinjoon Lee, They on Stable Diffusion
This is the `<jinjoon_lee_they>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
Declan/Politico_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: evegarcianz/bert-finetuned-adversarial_qa
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# evegarcianz/bert-finetuned-adversarial_qa
This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8737
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11406, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 2.5437 | 0 |
| 1.8737 | 1 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Declan/WallStreetJournal_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-base-uncased-sentiment-finetuned-memes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-sentiment-finetuned-memes
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1824
- Accuracy: 0.8270
- Precision: 0.8270
- Recall: 0.8270
- F1: 0.8270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
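A hedged sketch of the equivalent `TrainingArguments`; the output directory and any argument not listed above are placeholders rather than values from the original run:
```python
from transformers import TrainingArguments

# Adam with the listed betas/epsilon is the Trainer default optimizer.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-sentiment-finetuned-memes",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
)
```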
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5224 | 1.0 | 4293 | 0.5321 | 0.7720 | 0.8084 | 0.7720 | 0.7721 |
| 0.4386 | 2.0 | 8586 | 0.4930 | 0.7961 | 0.7980 | 0.7961 | 0.7967 |
| 0.3722 | 3.0 | 12879 | 0.7652 | 0.7925 | 0.7955 | 0.7925 | 0.7932 |
| 0.3248 | 4.0 | 17172 | 0.9827 | 0.8045 | 0.8047 | 0.8045 | 0.8023 |
| 0.308 | 5.0 | 21465 | 0.9518 | 0.8244 | 0.8260 | 0.8244 | 0.8249 |
| 0.2906 | 6.0 | 25758 | 1.0971 | 0.8155 | 0.8166 | 0.8155 | 0.8159 |
| 0.2036 | 7.0 | 30051 | 1.1457 | 0.8260 | 0.8271 | 0.8260 | 0.8264 |
| 0.1747 | 8.0 | 34344 | 1.1824 | 0.8270 | 0.8270 | 0.8270 | 0.8270 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Declan/WallStreetJournal_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
---
### liliana on Stable Diffusion
This is the `<liliana>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
DeepPavlov/distilrubert-base-cased-conversational | [
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"transformers"
] | null | {
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6,324 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 500001 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.dataloader._InfiniteConstantSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 50000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
DeepPavlov/marianmt-tatoeba-ruen | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | 2022-09-20T12:03:53Z | ---
license: mit
---
### wheatland-ARKNIGHT on Stable Diffusion
This is the `<golden-wheats-fields>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
Denilson/gbert-base-germaner | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### 001glitch_core on Stable Diffusion
This is the `001glitch_core` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:










|
Deniskin/emailer_medium_300 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
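A short inference sketch with the `token-classification` pipeline; the hub path is assumed from this card's title and may differ in practice:
```python
from transformers import pipeline

# Assumed hub path; replace with the actual repository id if it differs.
ner = pipeline(
    "token-classification",
    model="xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean ist ein Informatiker bei Google in Kalifornien."))
```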
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Despin89/test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-09-20T12:52:20Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# teven/bi_all-mpnet-base-v2_finetuned_WebNLG2017
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/bi_all-mpnet-base-v2_finetuned_WebNLG2017')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/bi_all-mpnet-base-v2_finetuned_WebNLG2017)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 666 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 0,
"evaluator": "better_cross_encoder.PearsonCorrelationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 5e-06
},
"scheduler": "warmupcosine",
"steps_per_epoch": null,
"warmup_steps": 3330,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Dibyaranjan/nl_image_search | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-09-20T13:14:52Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Digakive/Hsgshs | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-09-20T13:29:14Z | ---
license: apache-2.0
---
# Model description
This is a [t5-base](https://huggingface.co/t5-base) model, fine-tuned to generate questions given a table and linked passages using the [HybridQA](https://huggingface.co/datasets/hybrid_qa) dataset. It was trained to generate questions from reasoning paths extracted from hybrid input, i.e., a table and the passages linked to the table cells.
# Overview
*Language model*: t5-base \
*Language*: English \
*Task*: Hybrid Question Generation \
*Data*: HybridQA
# Intended use and limitations
One can use this model to generate questions given a table and linked passages. Biases associated with the pre-training of T5 and the HybridQA dataset may be present.
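A hedged generation sketch follows; the checkpoint path and the serialization of the table and linked passages into a reasoning-path string are assumptions, since the card does not document them:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# "t5-base" is a placeholder; point these at the fine-tuned checkpoint in this repository.
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Hypothetical linearization of one reasoning path (table cells + linked passage).
hybrid_input = (
    "reasoning path: [Pittsburgh Penguins] arena [PPG Paints Arena] ; "
    "passage: PPG Paints Arena is a multi-purpose indoor arena in Pittsburgh."
)
inputs = tokenizer(hybrid_input, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```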
|
DimaOrekhov/transformer-method-name | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-base-uncased-sentiment-finetuned-memes-20epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-sentiment-finetuned-memes-20epoch
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2252
- Accuracy: 0.8160
- Precision: 0.8165
- Recall: 0.8160
- F1: 0.8162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5319 | 1.0 | 4293 | 0.5560 | 0.7699 | 0.7777 | 0.7699 | 0.7704 |
| 0.4627 | 2.0 | 8586 | 0.6588 | 0.7856 | 0.7868 | 0.7856 | 0.7860 |
| 0.3735 | 3.0 | 12879 | 0.8583 | 0.7835 | 0.7854 | 0.7835 | 0.7813 |
| 0.3549 | 4.0 | 17172 | 1.0078 | 0.7872 | 0.7869 | 0.7872 | 0.7866 |
| 0.2995 | 5.0 | 21465 | 1.0007 | 0.8024 | 0.8030 | 0.8024 | 0.8012 |
| 0.2579 | 6.0 | 25758 | 1.1501 | 0.8150 | 0.8152 | 0.8150 | 0.8151 |
| 0.2078 | 7.0 | 30051 | 1.2604 | 0.8097 | 0.8137 | 0.8097 | 0.8102 |
| 0.1635 | 8.0 | 34344 | 1.3755 | 0.8092 | 0.8092 | 0.8092 | 0.8085 |
| 0.1453 | 9.0 | 38637 | 1.4639 | 0.8097 | 0.8137 | 0.8097 | 0.8102 |
| 0.1431 | 10.0 | 42930 | 1.5612 | 0.8050 | 0.8048 | 0.8050 | 0.8044 |
| 0.085 | 11.0 | 47223 | 1.8216 | 0.8097 | 0.8121 | 0.8097 | 0.8101 |
| 0.0693 | 12.0 | 51516 | 1.7761 | 0.8087 | 0.8100 | 0.8087 | 0.8090 |
| 0.041 | 13.0 | 55809 | 1.8538 | 0.8082 | 0.8083 | 0.8082 | 0.8082 |
| 0.0391 | 14.0 | 60102 | 2.0022 | 0.8160 | 0.8158 | 0.8160 | 0.8158 |
| 0.0299 | 15.0 | 64395 | 2.0101 | 0.8124 | 0.8121 | 0.8124 | 0.8121 |
| 0.0226 | 16.0 | 68688 | 2.0396 | 0.8150 | 0.8152 | 0.8150 | 0.8151 |
| 0.0229 | 17.0 | 72981 | 2.1071 | 0.8171 | 0.8170 | 0.8171 | 0.8171 |
| 0.0133 | 18.0 | 77274 | 2.1047 | 0.8181 | 0.8182 | 0.8181 | 0.8182 |
| 0.0268 | 19.0 | 81567 | 2.2037 | 0.8208 | 0.8208 | 0.8208 | 0.8208 |
| 0.0068 | 20.0 | 85860 | 2.2252 | 0.8160 | 0.8165 | 0.8160 | 0.8162 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DongHyoungLee/kogpt2-base-v2-finetuned-kogpt2_nsmc_single_sentence_classification | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-base-uncased-sentiment-finetuned-memes-30epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-sentiment-finetuned-memes-30epochs
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8839
- Accuracy: 0.8365
- Precision: 0.8373
- Recall: 0.8365
- F1: 0.8368
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4774 | 1.0 | 2147 | 0.4463 | 0.7453 | 0.7921 | 0.7453 | 0.7468 |
| 0.4036 | 2.0 | 4294 | 0.5419 | 0.7835 | 0.8072 | 0.7835 | 0.7858 |
| 0.3163 | 3.0 | 6441 | 0.6776 | 0.7982 | 0.7970 | 0.7982 | 0.7954 |
| 0.2613 | 4.0 | 8588 | 0.6988 | 0.7966 | 0.7953 | 0.7966 | 0.7956 |
| 0.229 | 5.0 | 10735 | 0.8523 | 0.8003 | 0.8033 | 0.8003 | 0.8013 |
| 0.1893 | 6.0 | 12882 | 1.0472 | 0.8056 | 0.8166 | 0.8056 | 0.8074 |
| 0.1769 | 7.0 | 15029 | 1.0321 | 0.8150 | 0.8193 | 0.8150 | 0.8161 |
| 0.1648 | 8.0 | 17176 | 1.1623 | 0.8129 | 0.8159 | 0.8129 | 0.8138 |
| 0.1366 | 9.0 | 19323 | 1.1932 | 0.8255 | 0.8257 | 0.8255 | 0.8256 |
| 0.1191 | 10.0 | 21470 | 1.2308 | 0.8349 | 0.8401 | 0.8349 | 0.8361 |
| 0.1042 | 11.0 | 23617 | 1.3166 | 0.8297 | 0.8288 | 0.8297 | 0.8281 |
| 0.0847 | 12.0 | 25764 | 1.3542 | 0.8286 | 0.8278 | 0.8286 | 0.8280 |
| 0.0785 | 13.0 | 27911 | 1.3925 | 0.8291 | 0.8293 | 0.8291 | 0.8292 |
| 0.0674 | 14.0 | 30058 | 1.4191 | 0.8255 | 0.8307 | 0.8255 | 0.8267 |
| 0.0694 | 15.0 | 32205 | 1.5601 | 0.8255 | 0.8281 | 0.8255 | 0.8263 |
| 0.0558 | 16.0 | 34352 | 1.6110 | 0.8265 | 0.8302 | 0.8265 | 0.8275 |
| 0.045 | 17.0 | 36499 | 1.5730 | 0.8270 | 0.8303 | 0.8270 | 0.8280 |
| 0.0436 | 18.0 | 38646 | 1.6081 | 0.8365 | 0.8361 | 0.8365 | 0.8363 |
| 0.028 | 19.0 | 40793 | 1.5569 | 0.8375 | 0.8371 | 0.8375 | 0.8373 |
| 0.0262 | 20.0 | 42940 | 1.6976 | 0.8286 | 0.8324 | 0.8286 | 0.8296 |
| 0.0183 | 21.0 | 45087 | 1.6368 | 0.8333 | 0.8354 | 0.8333 | 0.8340 |
| 0.0225 | 22.0 | 47234 | 1.7570 | 0.8318 | 0.8357 | 0.8318 | 0.8328 |
| 0.0118 | 23.0 | 49381 | 1.7233 | 0.8360 | 0.8369 | 0.8360 | 0.8363 |
| 0.0152 | 24.0 | 51528 | 1.8027 | 0.8360 | 0.8371 | 0.8360 | 0.8364 |
| 0.0079 | 25.0 | 53675 | 1.7908 | 0.8412 | 0.8423 | 0.8412 | 0.8416 |
| 0.0102 | 26.0 | 55822 | 1.8247 | 0.8344 | 0.8339 | 0.8344 | 0.8341 |
| 0.0111 | 27.0 | 57969 | 1.8123 | 0.8391 | 0.8394 | 0.8391 | 0.8392 |
| 0.0078 | 28.0 | 60116 | 1.8630 | 0.8354 | 0.8352 | 0.8354 | 0.8353 |
| 0.0058 | 29.0 | 62263 | 1.8751 | 0.8339 | 0.8343 | 0.8339 | 0.8341 |
| 0.0028 | 30.0 | 64410 | 1.8839 | 0.8365 | 0.8373 | 0.8365 | 0.8368 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 1.15.2.dev0
- Tokenizers 0.10.1
|
albert-base-v2 | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4,785,283 | 2022-09-20T15:11:39Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -15.80 +/- 20.49
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
To learn to code your own PPO agent and train it, check out Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'adil-o/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
albert-xxlarge-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42,640 | null | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: test-category
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9196428656578064
---
# test-category
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
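For inference, here is a minimal sketch using the `transformers` image-classification pipeline (the repository id and image path below are placeholders):
```python
from transformers import pipeline

# Placeholder repo id: replace with the id this card is hosted under.
classifier = pipeline("image-classification", model="<user>/test-category")

# Accepts a local path, a URL, or a PIL image.
predictions = classifier("path/to/photo.jpg")
print(predictions)  # e.g. [{"label": "hotel room", "score": ...}, ...]
```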
## Example Images
#### apartment

#### caravan

#### hotel room

#### house

#### tent
 |
bert-base-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,621,271 | 2022-09-20T15:35:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5
metrics:
- rouge
model-index:
- name: t5-base-finetuned-eli5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: eli5
type: eli5
config: LFQA_reddit
split: train_eli5
args: LFQA_reddit
metrics:
- name: Rouge1
type: rouge
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-eli5
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 17040 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.0
- Tokenizers 0.12.1
|
bert-base-german-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"exbert",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 175,983 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: stbl_clinical_bert_ft_rs2bs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stbl_clinical_bert_ft_rs2bs
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1189
- F1: 0.8982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2719 | 1.0 | 101 | 0.0878 | 0.8458 |
| 0.0682 | 2.0 | 202 | 0.0678 | 0.8838 |
| 0.0321 | 3.0 | 303 | 0.0617 | 0.9041 |
| 0.0149 | 4.0 | 404 | 0.0709 | 0.9061 |
| 0.0097 | 5.0 | 505 | 0.0766 | 0.9114 |
| 0.0059 | 6.0 | 606 | 0.0803 | 0.9174 |
| 0.0035 | 7.0 | 707 | 0.0845 | 0.9160 |
| 0.0023 | 8.0 | 808 | 0.0874 | 0.9158 |
| 0.0016 | 9.0 | 909 | 0.0928 | 0.9188 |
| 0.0016 | 10.0 | 1010 | 0.0951 | 0.9108 |
| 0.0011 | 11.0 | 1111 | 0.0938 | 0.9178 |
| 0.0009 | 12.0 | 1212 | 0.0945 | 0.9185 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
bert-base-multilingual-uncased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 328,585 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 250.29 +/- 21.30
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders, not confirmed by this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id / filename: substitute this repository's actual values.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
bert-base-uncased | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 59,663,489 | 2022-09-20T16:10:09Z | ---
language:
- en
license: cc-by-nc-sa-4.0
tags:
- seq2seq
- relation-extraction
- triple-generation
- entity-linking
- entity-type-linking
- relation-linking
datasets: Babelscape/rebel-dataset
widget:
- text: The Italian Space Agency’s Light Italian CubeSat for Imaging of Asteroids,
or LICIACube, will fly by Dimorphos to capture images and video of the impact
plume as it sprays up off the asteroid and maybe even spy the crater it could
leave behind.
model-index:
- name: knowgl
results:
- task:
type: Relation-Extraction
name: Relation Extraction
dataset:
name: Babelscape/rebel-dataset
type: REBEL
metrics:
- type: re+ macro f1
value: 70.74
name: RE+ Macro F1
---
# KnowGL: Knowledge Generation and Linking from Text
The `knowgl-large` model is trained by combining Wikidata with an extended version of the training data in the [REBEL](https://huggingface.co/datasets/Babelscape/rebel-dataset) dataset. Given a sentence, KnowGL generates triple(s) in the following format:
```
[(subject mention # subject label # subject type) | relation label | (object mention # object label # object type)]
```
If more than one triple is generated, the triples are separated by `$` in the output. More details in [Rossiello et al. (AAAI 2023)](https://arxiv.org/pdf/2210.13952.pdf).
The model achieves state-of-the-art results for relation extraction on the REBEL dataset. See results in [Mihindukulasooriya et al. (ISWC 2022)](https://arxiv.org/pdf/2207.05188.pdf).
The generated labels (for the subject, relation, and object) and their types can be directly mapped to Wikidata IDs associated with them.
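A minimal generation sketch with `transformers` (this assumes the model is hosted as a standard seq2seq checkpoint; the repository id below is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ibm/knowgl-large"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = ("The Italian Space Agency's LICIACube will fly by Dimorphos "
        "to capture images and video of the impact plume.")
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=256)

# The decoded string follows the triple format described above,
# with multiple triples separated by "$".
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```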
#### Citation
```bibtex
@inproceedings{knowgl-aaai_2023_demo,
author = {Gaetano Rossiello and
Md. Faisal Mahbub Chowdhury and
Nandana Mihindukulasooriya and
Owen Cornec and
Alfio Gliozzo},
title = {KnowGL: Knowledge Generation and Linking from Text},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
year = {2023}
}
```
```bibtex
@inproceedings{DBLP:conf/semweb/Mihindukulasooriya22,
author = {Nandana Mihindukulasooriya and
Mike Sava and
Gaetano Rossiello and
Md. Faisal Mahbub Chowdhury and
Irene Yachbes and
Aditya Gidh and
Jillian Duckwitz and
Kovit Nisar and
Michael Santos and
Alfio Gliozzo},
title = {Knowledge Graph Induction Enabling Recommending and Trend Analysis:
{A} Corporate Research Community Use Case},
booktitle = {{ISWC}},
series = {Lecture Notes in Computer Science},
volume = {13489},
pages = {827--844},
publisher = {Springer},
year = {2022}
}
``` |
bert-large-cased-whole-word-masking-finetuned-squad | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,214 | 2022-09-20T16:12:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5
metrics:
- rouge
model-index:
- name: t5-small-finetuned-eli5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: eli5
type: eli5
config: LFQA_reddit
split: train_eli5
args: LFQA_reddit
metrics:
- name: Rouge1
type: rouge
value: 11.8922
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-eli5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7555
- Rouge1: 11.8922
- Rouge2: 1.88
- Rougel: 9.6595
- Rougelsum: 10.8308
- Gen Len: 18.9911
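For reference, a minimal inference sketch (the repository id below is a placeholder, and the exact input format this fine-tune expects is not documented here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "<user>/t5-small-finetuned-eli5"  # placeholder: replace with this repository's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "why do we dream when we sleep?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```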
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 3.9546 | 1.0 | 34080 | 3.7555 | 11.8922 | 1.88 | 9.6595 | 10.8308 | 18.9911 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
bert-large-uncased-whole-word-masking | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 76,685 | 2022-09-20T16:22:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- farleyknight/big_patent_5_percent
metrics:
- rouge
model-index:
- name: patent-summarization-allen-led-large-2022-09-20
results:
- task:
name: Summarization
type: summarization
dataset:
name: farleyknight/big_patent_5_percent
type: farleyknight/big_patent_5_percent
config: all
split: train
args: all
metrics:
- name: Rouge1
type: rouge
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# patent-summarization-allen-led-large-2022-09-20
This model is a fine-tuned version of [allenai/led-large-16384-arxiv](https://huggingface.co/allenai/led-large-16384-arxiv) on the farleyknight/big_patent_5_percent dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8233
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 128.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.4766 | 0.08 | 5000 | 3.4240 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
| 3.2549 | 0.17 | 10000 | 3.2908 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
| 3.2295 | 0.25 | 15000 | 3.1862 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
| 3.1455 | 0.33 | 20000 | 3.1291 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
| 3.0526 | 0.41 | 25000 | 3.0684 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
| 3.0024 | 0.5 | 30000 | 3.0134 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
| 2.9671 | 0.58 | 35000 | 2.9696 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
| 2.9862 | 0.66 | 40000 | 2.9431 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
| 2.9168 | 0.75 | 45000 | 2.8989 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
| 2.9063 | 0.83 | 50000 | 2.8559 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
| 2.8417 | 0.91 | 55000 | 2.8398 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
| 2.7853 | 0.99 | 60000 | 2.8240 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
distilbert-base-german-cased | [
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"de",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 43,667 | null | This is a multilingual model that translates Buddhist Chinese, Tibetan and Pali into English.
Chinese input should be in simplified characters (簡體字).
Tibetan should be input in Wylie transliteration, with "/" as shad and no space between the last word and a shad. For example "gang zag la bdag med par khong du chud pa ni 'jig tshogs la lta ba'i gnyen po yin pas na de spangs na nyon mongs pa thams cad spong bar 'gyur ro//".
Pāli works with IAST transliteration: "Evaṁ me sutaṁ — ekaṁ samayaṁ bhagavā antarā ca rājagahaṁ antarā ca nāḷandaṁ addhānamaggappaṭipanno hoti mahatā bhikkhusaṅghena saddhiṁ pañcamattehi bhikkhusatehi."
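A minimal inference sketch (the repository id is a placeholder and the use of the generic `text2text-generation` pipeline is an assumption, not confirmed by this card):
```python
from transformers import pipeline

# Placeholder repo id: replace with the id of this repository.
translator = pipeline("text2text-generation", model="<user>/buddhist-multilingual-nmt")

# One sentence per line tends to work best (see the note below).
lines = [
    "gang zag la bdag med par khong du chud pa ni 'jig tshogs la lta ba'i gnyen po yin pas ...",
    "Evaṁ me sutaṁ — ekaṁ samayaṁ bhagavā antarā ca rājagahaṁ ...",
]
for line in lines:
    print(translator(line)[0]["generated_text"])
```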
Multiple sentences are best translated when each sentence is on a separate line. |
09panesara/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 40 | 2022-09-20T21:12:02Z | # VQGAN-CLIP Overview
A repo for running VQGAN+CLIP locally. This started out as a Katherine Crowson VQGAN+CLIP derived Google colab notebook.
<a href="https://replicate.ai/nerdyrodent/vqgan-clip"><img src="https://img.shields.io/static/v1?label=Replicate&message=Demo and Docker Image&color=blue"></a>
Original notebook: [![Open In Colab][colab-badge]][colab-notebook]
[colab-notebook]: <https://colab.research.google.com/drive/1ZAus_gn2RhTZWzOWUpPERNC0Q8OhZRTZ>
[colab-badge]: <https://colab.research.google.com/assets/colab-badge.svg>
Some example images:
<img src="./samples/Cartoon3.png" width="256px"></img><img src="./samples/Cartoon.png" width="256px"></img><img src="./samples/Cartoon2.png" width="256px"></img>
<img src="./samples/Bedroom.png" width="256px"></img><img src="./samples/DemonBiscuits.png" width="256px"></img><img src="./samples/Football.png" width="256px"></img>
<img src="./samples/Fractal_Landscape3.png" width="256px"></img><img src="./samples/Games_5.png" width="256px"></img>
Environment:
* Tested on Ubuntu 20.04
* GPU: Nvidia RTX 3090
* Typical VRAM requirements:
* 24 GB for a 900x900 image
* 10 GB for a 512x512 image
* 8 GB for a 380x380 image
You may also be interested in [CLIP Guided Diffusion](https://github.com/nerdyrodent/CLIP-Guided-Diffusion)
## Set up
This example uses [Anaconda](https://www.anaconda.com/products/individual#Downloads) to manage virtual Python environments.
Create a new virtual Python environment for VQGAN-CLIP:
```sh
conda create --name vqgan python=3.9
conda activate vqgan
```
Install Pytorch in the new environment:
Note: This installs the CUDA version of Pytorch. If you want to use an AMD graphics card, read the [AMD section below](#using-an-amd-graphics-card).
```sh
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
```
Install other required Python packages:
```sh
pip install ftfy regex tqdm omegaconf pytorch-lightning IPython kornia imageio imageio-ffmpeg einops torch_optimizer
```
Or use the ```requirements.txt``` file, which includes version numbers.
Clone required repositories:
```sh
git clone 'https://github.com/nerdyrodent/VQGAN-CLIP'
cd VQGAN-CLIP
git clone 'https://github.com/openai/CLIP'
git clone 'https://github.com/CompVis/taming-transformers'
```
Note: In my development environment both CLIP and taming-transformers are present in the local directory, and so aren't present in the `requirements.txt` or `vqgan.yml` files.
As an alternative, you can also pip install taming-transformers and CLIP.
You will also need at least 1 VQGAN pretrained model. E.g.
```sh
mkdir checkpoints
curl -L -o checkpoints/vqgan_imagenet_f16_16384.yaml -C - 'https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fconfigs%2Fmodel.yaml&dl=1' #ImageNet 16384
curl -L -o checkpoints/vqgan_imagenet_f16_16384.ckpt -C - 'https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fckpts%2Flast.ckpt&dl=1' #ImageNet 16384
```
Note that users of ```curl``` on Microsoft Windows should use double quotes.
The `download_models.sh` script is an optional way to download a number of models. By default, it will download just 1 model.
See <https://github.com/CompVis/taming-transformers#overview-of-pretrained-models> for more information about VQGAN pre-trained models, including download links.
By default, the model .yaml and .ckpt files are expected in the `checkpoints` directory.
See <https://github.com/CompVis/taming-transformers> for more information on datasets and models.
Video guides are also available:
* Linux - https://www.youtube.com/watch?v=1Esb-ZjO7tw
* Windows - https://www.youtube.com/watch?v=XH7ZP0__FXs
### Using an AMD graphics card
Note: This hasn't been tested yet.
ROCm can be used for AMD graphics cards instead of CUDA. You can check if your card is supported here:
<https://github.com/RadeonOpenCompute/ROCm#supported-gpus>
Install ROCm according to the instructions and don't forget to add the user to the video group:
<https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html>
The usage and set up instructions above are the same, except for the line where you install Pytorch.
Instead of `pip install torch==1.9.0+cu111 ...`, use the one or two lines which are displayed here (select Pip -> Python-> ROCm):
<https://pytorch.org/get-started/locally/>
### Using the CPU
If no graphics card can be found, the CPU is automatically used and a warning displayed.
Regardless of an available graphics card, the CPU can also be used by adding this command line argument: `-cd cpu`
This works with the CUDA version of Pytorch, even without CUDA drivers installed, but doesn't seem to work with ROCm as of now.
### Uninstalling
Remove the Python environment:
```sh
conda remove --name vqgan --all
```
and delete the `VQGAN-CLIP` directory.
## Run
To generate images from text, specify your text prompt as shown in the example below:
```sh
python generate.py -p "A painting of an apple in a fruit bowl"
```
<img src="./samples/A_painting_of_an_apple_in_a_fruitbowl.png" width="256px"></img>
## Multiple prompts
Text and image prompts can be split using the pipe symbol in order to allow multiple prompts.
You can also use a colon followed by a number to set a weight for that prompt. For example:
```sh
python generate.py -p "A painting of an apple in a fruit bowl | psychedelic | surreal:0.5 | weird:0.25"
```
<img src="./samples/Apple_weird.png" width="256px"></img>
Image prompts can be split in the same way. For example:
```sh
python generate.py -p "A picture of a bedroom with a portrait of Van Gogh" -ip "samples/VanGogh.jpg | samples/Bedroom.png"
```
### Story mode
Sets of text prompts can be created using the caret symbol, in order to generate a sort of story mode. For example:
```sh
python generate.py -p "A painting of a sunflower|photo:-1 ^ a painting of a rose ^ a painting of a tulip ^ a painting of a daisy flower ^ a photograph of daffodil" -cpe 1500 -zvid -i 6000 -zse 10 -vl 20 -zsc 1.005 -opt Adagrad -lr 0.15 -se 6000
```
## "Style Transfer"
An input image with style text and a low number of iterations can be used to create a sort of "style transfer" effect. For example:
```sh
python generate.py -p "A painting in the style of Picasso" -ii samples/VanGogh.jpg -i 80 -se 10 -opt AdamW -lr 0.25
```
| Output | Style |
| ------------------------------------------------------------- | ----------- |
| <img src="./samples/vvg_picasso.png" width="256px"></img> | Picasso |
| <img src="./samples/vvg_sketch.png" width="256px"></img> | Sketch |
| <img src="./samples/vvg_psychedelic.png" width="256px"></img> | Psychedelic |
A video style transfer effect can be achieved by specifying a directory of video frames in `video_style_dir`. Output will be saved in the steps directory, using the original video frame filenames. You can also use this as a sort of "batch mode" if you have a directory of images you want to apply a style to. This can also be combined with Story Mode if you don't wish to apply the same style to every image, but instead roll through a list of styles.
## Feedback example
By feeding back the generated images and making slight changes, some interesting effects can be created.
The example `zoom.sh` shows this by applying a zoom and rotate to generated images, before feeding them back in again.
To use `zoom.sh`, specify a text prompt, output filename and number of frames. E.g.
```sh
./zoom.sh "A painting of a red telephone box spinning through a time vortex" Telephone.png 150
```
If you don't have ImageMagick installed, you can install it with ```sudo apt install imagemagick```
<img src="./samples/zoom.gif" width="256px"></img>
There is also a simple zoom video creation option available. For example:
```sh
python generate.py -p "The inside of a sphere" -zvid -i 4500 -zse 20 -vl 10 -zsc 0.97 -opt Adagrad -lr 0.15 -se 4500
```
## Random text example
Use `random.sh` to make a batch of images from random text. Edit the text and number of generated images to your taste!
```sh
./random.sh
```
## Advanced options
To view the available options, use "-h".
```sh
python generate.py -h
```
```sh
usage: generate.py [-h] [-p PROMPTS] [-ip IMAGE_PROMPTS] [-i MAX_ITERATIONS] [-se DISPLAY_FREQ]
[-s SIZE SIZE] [-ii INIT_IMAGE] [-in INIT_NOISE] [-iw INIT_WEIGHT] [-m CLIP_MODEL]
[-conf VQGAN_CONFIG] [-ckpt VQGAN_CHECKPOINT] [-nps [NOISE_PROMPT_SEEDS ...]]
[-npw [NOISE_PROMPT_WEIGHTS ...]] [-lr STEP_SIZE] [-cuts CUTN] [-cutp CUT_POW] [-sd SEED]
[-opt {Adam,AdamW,Adagrad,Adamax,DiffGrad,AdamP,RAdam,RMSprop}] [-o OUTPUT] [-vid] [-zvid]
[-zs ZOOM_START] [-zse ZOOM_FREQUENCY] [-zsc ZOOM_SCALE] [-cpe PROMPT_FREQUENCY]
[-vl VIDEO_LENGTH] [-ofps OUTPUT_VIDEO_FPS] [-ifps INPUT_VIDEO_FPS] [-d]
[-aug {Ji,Sh,Gn,Pe,Ro,Af,Et,Ts,Cr,Er,Re} [{Ji,Sh,Gn,Pe,Ro,Af,Et,Ts,Cr,Er,Re} ...]]
[-cd CUDA_DEVICE]
```
```sh
optional arguments:
-h, --help show this help message and exit
-p PROMPTS, --prompts PROMPTS
Text prompts
-ip IMAGE_PROMPTS, --image_prompts IMAGE_PROMPTS
Image prompts / target image
-i MAX_ITERATIONS, --iterations MAX_ITERATIONS
Number of iterations
-se DISPLAY_FREQ, --save_every DISPLAY_FREQ
Save image iterations
-s SIZE SIZE, --size SIZE SIZE
Image size (width height) (default: [512, 512])
-ii INIT_IMAGE, --init_image INIT_IMAGE
Initial image
-in INIT_NOISE, --init_noise INIT_NOISE
Initial noise image (pixels or gradient)
-iw INIT_WEIGHT, --init_weight INIT_WEIGHT
Initial weight
-m CLIP_MODEL, --clip_model CLIP_MODEL
CLIP model (e.g. ViT-B/32, ViT-B/16)
-conf VQGAN_CONFIG, --vqgan_config VQGAN_CONFIG
VQGAN config
-ckpt VQGAN_CHECKPOINT, --vqgan_checkpoint VQGAN_CHECKPOINT
VQGAN checkpoint
-nps [NOISE_PROMPT_SEEDS ...], --noise_prompt_seeds [NOISE_PROMPT_SEEDS ...]
Noise prompt seeds
-npw [NOISE_PROMPT_WEIGHTS ...], --noise_prompt_weights [NOISE_PROMPT_WEIGHTS ...]
Noise prompt weights
-lr STEP_SIZE, --learning_rate STEP_SIZE
Learning rate
-cuts CUTN, --num_cuts CUTN
Number of cuts
-cutp CUT_POW, --cut_power CUT_POW
Cut power
-sd SEED, --seed SEED
Seed
-opt, --optimiser {Adam,AdamW,Adagrad,Adamax,DiffGrad,AdamP,RAdam,RMSprop}
Optimiser
-o OUTPUT, --output OUTPUT
Output file
-vid, --video Create video frames?
-zvid, --zoom_video Create zoom video?
-zs ZOOM_START, --zoom_start ZOOM_START
Zoom start iteration
-zse ZOOM_FREQUENCY, --zoom_save_every ZOOM_FREQUENCY
Save zoom image iterations
-zsc ZOOM_SCALE, --zoom_scale ZOOM_SCALE
Zoom scale
-cpe PROMPT_FREQUENCY, --change_prompt_every PROMPT_FREQUENCY
Prompt change frequency
-vl VIDEO_LENGTH, --video_length VIDEO_LENGTH
Video length in seconds
-ofps OUTPUT_VIDEO_FPS, --output_video_fps OUTPUT_VIDEO_FPS
Create an interpolated video (Nvidia GPU only) with this fps (min 10. best set to 30 or 60)
-ifps INPUT_VIDEO_FPS, --input_video_fps INPUT_VIDEO_FPS
When creating an interpolated video, use this as the input fps to interpolate from (>0 & <ofps)
-d, --deterministic Enable cudnn.deterministic?
-aug, --augments {Ji,Sh,Gn,Pe,Ro,Af,Et,Ts,Cr,Er,Re} [{Ji,Sh,Gn,Pe,Ro,Af,Et,Ts,Cr,Er,Re} ...]
Enabled augments
-cd CUDA_DEVICE, --cuda_device CUDA_DEVICE
Cuda device to use
```
## Troubleshooting
### CUSOLVER_STATUS_INTERNAL_ERROR
For example:
`RuntimeError: cusolver error: CUSOLVER_STATUS_INTERNAL_ERROR, when calling cusolverDnCreate(handle)`
Make sure you have specified the correct size for the image.
### RuntimeError: CUDA out of memory
For example:
`RuntimeError: CUDA out of memory. Tried to allocate 150.00 MiB (GPU 0; 23.70 GiB total capacity; 21.31 GiB already allocated; 78.56 MiB free; 21.70 GiB reserved in total by PyTorch)`
Your request doesn't fit into your GPU's VRAM. Reduce the image size and/or number of cuts.
## Citations
```bibtex
@misc{unpublished2021clip,
title = {CLIP: Connecting Text and Images},
author = {Alec Radford, Ilya Sutskever, Jong Wook Kim, Gretchen Krueger, Sandhini Agarwal},
year = {2021}
}
```
```bibtex
@misc{esser2020taming,
title={Taming Transformers for High-Resolution Image Synthesis},
author={Patrick Esser and Robin Rombach and Björn Ommer},
year={2020},
eprint={2012.09841},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
Katherine Crowson - <https://github.com/crowsonkb>
Public Domain images from Open Access Images at the Art Institute of Chicago - <https://www.artic.edu/open-access/open-access-images>
|
AAli/distilbert-base-uncased-finetuned-squad | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-09-21T03:49:54Z | ---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-mask-prompt-e-loob-conceptnet-validated
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8323412698412699
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6176470588235294
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6231454005934718
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.7570872707059477
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.874
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6008771929824561
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6226851851851852
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9242127467229169
- name: F1 (macro)
type: f1_macro
value: 0.9198550816036225
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8744131455399061
- name: F1 (macro)
type: f1_macro
value: 0.7269598631142125
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.699349945828819
- name: F1 (macro)
type: f1_macro
value: 0.6904954951631552
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9664046741322946
- name: F1 (macro)
type: f1_macro
value: 0.8975350605960287
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9053588216859918
- name: F1 (macro)
type: f1_macro
value: 0.90414989526156
---
# relbert/roberta-large-semeval2012-mask-prompt-e-loob-conceptnet-validated
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-loob-conceptnet-validated/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.6176470588235294
- Accuracy on SAT: 0.6231454005934718
- Accuracy on BATS: 0.7570872707059477
- Accuracy on U2: 0.6008771929824561
- Accuracy on U4: 0.6226851851851852
- Accuracy on Google: 0.874
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-loob-conceptnet-validated/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9242127467229169
- Micro F1 score on CogALexV: 0.8744131455399061
- Micro F1 score on EVALution: 0.699349945828819
- Micro F1 score on K&H+N: 0.9664046741322946
- Micro F1 score on ROOT09: 0.9053588216859918
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-loob-conceptnet-validated/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8323412698412699
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-e-loob-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
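Continuing from the snippet above, the returned vector encodes the relation between the word pair, so two pairs can be compared directly. A small sketch using cosine similarity (the numpy usage here is illustrative, not part of the RelBERT documentation):
```python
import numpy as np

tokyo_japan = np.asarray(model.get_embedding(['Tokyo', 'Japan']))
paris_france = np.asarray(model.get_embedding(['Paris', 'France']))
cosine = float(np.dot(tokyo_japan, paris_france)
               / (np.linalg.norm(tokyo_japan) * np.linalg.norm(paris_france)))
print(cosine)  # higher values mean the two pairs express a more similar relation
```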
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 22
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-loob-conceptnet-validated/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
Adnan/UrduNewsHeadlines | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-09-21T17:35:44Z | ---
license: mit
---
### Dicoo2 on Stable Diffusion
This is the `<dicoo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
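Outside those notebooks, the learned embedding can also be loaded into a `diffusers` pipeline. A minimal sketch, assuming a recent `diffusers` release with `load_textual_inversion` and using a placeholder repository id:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Placeholder repo id: point this at the repository hosting the learned <dicoo> embedding.
pipe.load_textual_inversion("sd-concepts-library/dicoo2")

image = pipe("a photo of a <dicoo> character on a beach").images[0]
image.save("dicoo.png")
```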
Here is the new concept you will be able to use as an `object`:





|
AethiQs-Max/aethiqs-base_bertje-data_rotterdam-epochs_10 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-09-21T18:05:47Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-normal-2000-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline
# Minimal sketch: the repo id is taken from the TensorBoard link at the end of this card.
pipeline = DDPMPipeline.from_pretrained("geevegeorge/ddpm-geeve-normal-2000-128")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-normal-2000-128/tensorboard?#scalars)
|
AhmedSSoliman/MarianCG-CoNaLa | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible",
"has_space"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 21 | null | ---
license: mit
---
### DarkPlane on Stable Diffusion
This is the `<DarkPlane>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





















|
Ahren09/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
license: mit
---
### Wildkat on Stable Diffusion
This is the `<wildkat>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:









|
AimB/konlpy_berttokenizer_helsinki | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### Half-Life 2 Dog on Stable Diffusion
This is the `<hl-dog>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
Akashpb13/Galician_xlsr | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"gl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: mit
---
### Midjourney style on Stable Diffusion
This is the `<midjourney-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
Akiva/Joke | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- omarques/autotrain-data-test-dogs-cats
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.7873922658787444
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1527155150
- CO2 Emissions (in grams): 0.7874
## Validation Metrics
- Loss: 0.043
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000 |
Aklily/Lilys | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### maus on Stable Diffusion
This is the `<Maus>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:



|
AkshatSurolia/BEiT-FaceMask-Finetuned | [
"pytorch",
"beit",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | image-classification | {
"architectures": [
"BeitForImageClassification"
],
"model_type": "beit",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 239 | null | ---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7127976190476191
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.29411764705882354
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.29080118694362017
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4641467481934408
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.614
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.32456140350877194
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3449074074074074
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8862437848425494
- name: F1 (macro)
type: f1_macro
value: 0.8781526549150734
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8370892018779342
- name: F1 (macro)
type: f1_macro
value: 0.6286516686265566
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5384615384615384
- name: F1 (macro)
type: f1_macro
value: 0.5368027921312294
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9659177853516032
- name: F1 (macro)
type: f1_macro
value: 0.8925325170399768
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8567847069884049
- name: F1 (macro)
type: f1_macro
value: 0.8346603805121989
---
# relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.29411764705882354
- Accuracy on SAT: 0.29080118694362017
- Accuracy on BATS: 0.4641467481934408
- Accuracy on U2: 0.32456140350877194
- Accuracy on U4: 0.3449074074074074
- Accuracy on Google: 0.614
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8862437848425494
- Micro F1 score on CogALexV: 0.8370892018779342
- Micro F1 score on EVALution: 0.5384615384615384
- Micro F1 score on K&H+N: 0.9659177853516032
- Micro F1 score on ROOT09: 0.8567847069884049
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7127976190476191
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
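The returned vector is a relation embedding of the word pair, so relational similarity between two pairs can be measured with cosine similarity. Below is a minimal sketch; the second word pair and the cosine computation are illustrative additions, not part of the official examples.
```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification")

# Embed two word pairs and compare their relation embeddings with cosine similarity.
emb_a = np.array(model.get_embedding(['Tokyo', 'Japan']))    # shape of (1024, )
emb_b = np.array(model.get_embedding(['Paris', 'France']))   # shape of (1024, )
cosine = float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
print(f"relational similarity: {cosine:.4f}")
```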
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <mask>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 1
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification/raw/main/trainer_config.json).
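As a small convenience sketch (assuming the `huggingface_hub` package is installed), the configuration file referenced above can also be downloaded and inspected programmatically:
```python
import json
from huggingface_hub import hf_hub_download

# Fetch trainer_config.json from the model repository and print it.
config_path = hf_hub_download(
    repo_id="relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification",
    filename="trainer_config.json",
)
with open(config_path) as f:
    print(json.dumps(json.load(f), indent=2))
```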
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
AlanDev/DallEMiniButBetter | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce-classification
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8549007936507936
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5641711229946524
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5816023738872403
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5764313507504168
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.822
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5131578947368421
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5162037037037037
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9172819044749133
- name: F1 (macro)
type: f1_macro
value: 0.912178540410085
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8427230046948356
- name: F1 (macro)
type: f1_macro
value: 0.6664365064483144
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6652221018418202
- name: F1 (macro)
type: f1_macro
value: 0.6591956465701904
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9652222299506156
- name: F1 (macro)
type: f1_macro
value: 0.8945528900012115
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8943904732058916
- name: F1 (macro)
type: f1_macro
value: 0.8949174432546955
---
# relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce-classification
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce-classification/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.5641711229946524
- Accuracy on SAT: 0.5816023738872403
- Accuracy on BATS: 0.5764313507504168
- Accuracy on U2: 0.5131578947368421
- Accuracy on U4: 0.5162037037037037
- Accuracy on Google: 0.822
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce-classification/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9172819044749133
- Micro F1 score on CogALexV: 0.8427230046948356
- Micro F1 score on EVALution: 0.6652221018418202
- Micro F1 score on K&H+N: 0.9652222299506156
- Micro F1 score on ROOT09: 0.8943904732058916
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce-classification/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8549007936507936
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce-classification")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
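Since the model is evaluated on analogy questions, one natural use of the embedding is to score candidate word pairs against a query pair by cosine similarity and keep the closest one. The sketch below does exactly that; the example pairs are illustrative and not taken from any benchmark.
```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce-classification")

def embed(pair):
    # get_embedding returns a 1024-dimensional relation embedding for a word pair
    return np.array(model.get_embedding(pair))

query = ['word', 'language']
candidates = [['note', 'music'], ['apple', 'fruit'], ['wheel', 'car']]

query_emb = embed(query)
scores = []
for cand in candidates:
    cand_emb = embed(cand)
    cos = float(query_emb @ cand_emb / (np.linalg.norm(query_emb) * np.linalg.norm(cand_emb)))
    scores.append(cos)

print(candidates[int(np.argmax(scores))])  # the pair whose relation is closest to word:language
```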
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 30
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce-classification/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
Ale/Alen | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- feature-extraction
pipeline_tag: feature-extraction
---
This model is the query encoder of the MS MARCO BM25 Lexical Model (Λ) from the SPAR paper:
[Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One?](https://arxiv.org/abs/2110.06918)
<br>
Xilun Chen, Kushal Lakhotia, Barlas Oğuz, Anchit Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta and Wen-tau Yih
<br>
**Meta AI**
The associated github repo is available here: https://github.com/facebookresearch/dpr-scale/tree/main/spar
This model is a BERT-base sized dense retriever trained on the MS MARCO corpus to imitate the behavior of BM25.
The following models are also available:
Pretrained Model | Corpus | Teacher | Architecture | Query Encoder Path | Context Encoder Path
|---|---|---|---|---|---
Wiki BM25 Λ | Wikipedia | BM25 | BERT-base | facebook/spar-wiki-bm25-lexmodel-query-encoder | facebook/spar-wiki-bm25-lexmodel-context-encoder
PAQ BM25 Λ | PAQ | BM25 | BERT-base | facebook/spar-paq-bm25-lexmodel-query-encoder | facebook/spar-paq-bm25-lexmodel-context-encoder
MARCO BM25 Λ | MS MARCO | BM25 | BERT-base | facebook/spar-marco-bm25-lexmodel-query-encoder | facebook/spar-marco-bm25-lexmodel-context-encoder
MARCO UniCOIL Λ | MS MARCO | UniCOIL | BERT-base | facebook/spar-marco-unicoil-lexmodel-query-encoder | facebook/spar-marco-unicoil-lexmodel-context-encoder
# Using the Lexical Model (Λ) Alone
This model should be used together with the associated context encoder, similar to the [DPR](https://huggingface.co/docs/transformers/v4.22.1/en/model_doc/dpr) model.
```python
import torch
from transformers import AutoTokenizer, AutoModel
# The tokenizer is the same for the query and context encoder
tokenizer = AutoTokenizer.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
query_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
context_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-context-encoder')
query = "Where was Marie Curie born?"
contexts = [
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Apply tokenizer
query_input = tokenizer(query, return_tensors='pt')
ctx_input = tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
# Compute embeddings: take the last-layer hidden state of the [CLS] token
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]
# Compute similarity scores using dot product
score1 = query_emb @ ctx_emb[0] # 341.3268
score2 = query_emb @ ctx_emb[1] # 340.1626
```
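As a follow-up sketch, the same dot-product scores can be computed for a whole batch of passages at once and ranked with `torch.topk`; the extra passage and the ranking code below are illustrative additions rather than part of the original example.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
query_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
context_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-context-encoder')

query = "Where was Marie Curie born?"
passages = [
    "Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
    "Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie.",
    "Warsaw is the capital and largest city of Poland.",
]

with torch.no_grad():
    query_emb = query_encoder(**tokenizer(query, return_tensors='pt')).last_hidden_state[:, 0, :]
    ctx_emb = context_encoder(**tokenizer(passages, padding=True, truncation=True,
                                          return_tensors='pt')).last_hidden_state[:, 0, :]

scores = (query_emb @ ctx_emb.T).squeeze(0)    # one dot-product score per passage
top_scores, top_idx = torch.topk(scores, k=2)  # keep the two best passages
for score, idx in zip(top_scores.tolist(), top_idx.tolist()):
    print(f"{score:.2f}  {passages[idx]}")
```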
# Using the Lexical Model (Λ) with a Base Dense Retriever as in SPAR
As Λ learns lexical matching from a sparse teacher retriever, it can be used in combination with a standard dense retriever (e.g. [DPR](https://huggingface.co/docs/transformers/v4.22.1/en/model_doc/dpr#dpr), [Contriever](https://huggingface.co/facebook/contriever-msmarco)) to build a dense retriever that excels at both lexical and semantic matching.
In the following example, we show how to build the SPAR-Wiki model for Open-Domain Question Answering by concatenating the embeddings of DPR and the Wiki BM25 Λ.
```python
import torch
from transformers import AutoTokenizer, AutoModel
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
# DPR model
dpr_ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
dpr_ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
dpr_query_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base")
dpr_query_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-multiset-base")
# Wiki BM25 Λ model
lexmodel_tokenizer = AutoTokenizer.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
lexmodel_query_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
lexmodel_context_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-context-encoder')
query = "Where was Marie Curie born?"
contexts = [
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Compute DPR embeddings
dpr_query_input = dpr_query_tokenizer(query, return_tensors='pt')['input_ids']
dpr_query_emb = dpr_query_encoder(dpr_query_input).pooler_output
dpr_ctx_input = dpr_ctx_tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
dpr_ctx_emb = dpr_ctx_encoder(**dpr_ctx_input).pooler_output
# Compute Λ embeddings
lexmodel_query_input = lexmodel_tokenizer(query, return_tensors='pt')
lexmodel_query_emb = lexmodel_query_encoder(**lexmodel_query_input).last_hidden_state[:, 0, :]
lexmodel_ctx_input = lexmodel_tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
lexmodel_ctx_emb = lexmodel_context_encoder(**lexmodel_ctx_input).last_hidden_state[:, 0, :]
# Form SPAR embeddings via concatenation
# The concatenation weight is only applied to query embeddings
# Refer to the SPAR paper for details
concat_weight = 0.7
spar_query_emb = torch.cat(
[dpr_query_emb, concat_weight * lexmodel_query_emb],
dim=-1,
)
spar_ctx_emb = torch.cat(
[dpr_ctx_emb, lexmodel_ctx_emb],
dim=-1,
)
# Compute similarity scores
score1 = spar_query_emb @ spar_ctx_emb[0] # 317.6931
score2 = spar_query_emb @ spar_ctx_emb[1] # 314.6144
```
|
Aleenbo/Arcane | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- feature-extraction
pipeline_tag: feature-extraction
---
This model is the context encoder of the MS MARCO BM25 Lexical Model (Λ) from the SPAR paper:
[Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One?](https://arxiv.org/abs/2110.06918)
<br>
Xilun Chen, Kushal Lakhotia, Barlas Oğuz, Anchit Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta and Wen-tau Yih
<br>
**Meta AI**
The associated github repo is available here: https://github.com/facebookresearch/dpr-scale/tree/main/spar
This model is a BERT-base sized dense retriever trained on the MS MARCO corpus to imitate the behavior of BM25.
The following models are also available:
Pretrained Model | Corpus | Teacher | Architecture | Query Encoder Path | Context Encoder Path
|---|---|---|---|---|---
Wiki BM25 Λ | Wikipedia | BM25 | BERT-base | facebook/spar-wiki-bm25-lexmodel-query-encoder | facebook/spar-wiki-bm25-lexmodel-context-encoder
PAQ BM25 Λ | PAQ | BM25 | BERT-base | facebook/spar-paq-bm25-lexmodel-query-encoder | facebook/spar-paq-bm25-lexmodel-context-encoder
MARCO BM25 Λ | MS MARCO | BM25 | BERT-base | facebook/spar-marco-bm25-lexmodel-query-encoder | facebook/spar-marco-bm25-lexmodel-context-encoder
MARCO UniCOIL Λ | MS MARCO | UniCOIL | BERT-base | facebook/spar-marco-unicoil-lexmodel-query-encoder | facebook/spar-marco-unicoil-lexmodel-context-encoder
# Using the Lexical Model (Λ) Alone
This model should be used together with the associated query encoder, similar to the [DPR](https://huggingface.co/docs/transformers/v4.22.1/en/model_doc/dpr) model.
```python
import torch
from transformers import AutoTokenizer, AutoModel
# The tokenizer is the same for the query and context encoder
tokenizer = AutoTokenizer.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
query_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
context_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-context-encoder')
query = "Where was Marie Curie born?"
contexts = [
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Apply tokenizer
query_input = tokenizer(query, return_tensors='pt')
ctx_input = tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
# Compute embeddings: take the last-layer hidden state of the [CLS] token
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]
# Compute similarity scores using dot product
score1 = query_emb @ ctx_emb[0] # 341.3268
score2 = query_emb @ ctx_emb[1] # 340.1626
```
# Using the Lexical Model (Λ) with a Base Dense Retriever as in SPAR
As Λ learns lexical matching from a sparse teacher retriever, it can be used in combination with a standard dense retriever (e.g. [DPR](https://huggingface.co/docs/transformers/v4.22.1/en/model_doc/dpr#dpr), [Contriever](https://huggingface.co/facebook/contriever-msmarco)) to build a dense retriever that excels at both lexical and semantic matching.
In the following example, we show how to build the SPAR-Wiki model for Open-Domain Question Answering by concatenating the embeddings of DPR and the Wiki BM25 Λ.
```python
import torch
from transformers import AutoTokenizer, AutoModel
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
# DPR model
dpr_ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
dpr_ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
dpr_query_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base")
dpr_query_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-multiset-base")
# Wiki BM25 Λ model
lexmodel_tokenizer = AutoTokenizer.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
lexmodel_query_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
lexmodel_context_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-context-encoder')
query = "Where was Marie Curie born?"
contexts = [
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Compute DPR embeddings
dpr_query_input = dpr_query_tokenizer(query, return_tensors='pt')['input_ids']
dpr_query_emb = dpr_query_encoder(dpr_query_input).pooler_output
dpr_ctx_input = dpr_ctx_tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
dpr_ctx_emb = dpr_ctx_encoder(**dpr_ctx_input).pooler_output
# Compute Λ embeddings
lexmodel_query_input = lexmodel_tokenizer(query, return_tensors='pt')
lexmodel_query_emb = lexmodel_query_encoder(**lexmodel_query_input).last_hidden_state[:, 0, :]
lexmodel_ctx_input = lexmodel_tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
lexmodel_ctx_emb = lexmodel_context_encoder(**lexmodel_ctx_input).last_hidden_state[:, 0, :]
# Form SPAR embeddings via concatenation
# The concatenation weight is only applied to query embeddings
# Refer to the SPAR paper for details
concat_weight = 0.7
spar_query_emb = torch.cat(
[dpr_query_emb, concat_weight * lexmodel_query_emb],
dim=-1,
)
spar_ctx_emb = torch.cat(
[dpr_ctx_emb, lexmodel_ctx_emb],
dim=-1,
)
# Compute similarity scores
score1 = spar_query_emb @ spar_ctx_emb[0] # 317.6931
score2 = spar_query_emb @ spar_ctx_emb[1] # 314.6144
```
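For retrieval at scale, the context embeddings are usually placed in a nearest-neighbour index rather than scored one by one. The sketch below uses FAISS as one possible choice; the `faiss` dependency, the extra passage, and the index setup are illustrative additions and not part of the original example.
```python
import faiss
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
query_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
context_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-context-encoder')

passages = [
    "Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
    "Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie.",
    "Warsaw is the capital and largest city of Poland.",
]

with torch.no_grad():
    ctx_emb = context_encoder(**tokenizer(passages, padding=True, truncation=True,
                                          return_tensors='pt')).last_hidden_state[:, 0, :]
    query_emb = query_encoder(**tokenizer("Where was Marie Curie born?",
                                          return_tensors='pt')).last_hidden_state[:, 0, :]

# Exact inner-product index over the context embeddings (dot product = retrieval score)
ctx_np = np.ascontiguousarray(ctx_emb.numpy(), dtype='float32')
query_np = np.ascontiguousarray(query_emb.numpy(), dtype='float32')
index = faiss.IndexFlatIP(ctx_np.shape[1])
index.add(ctx_np)
scores, ids = index.search(query_np, 2)
print(ids[0], scores[0])  # indices and scores of the two best passages
```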
|