pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at |
---|---|---|---|---|---|---|---|---|
null | null | {} | hytting/DialoGPT-medium-Sheldon-1 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | hytting/DialoGPT-medium-Sheldon-2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | hytting/DialoGPT-medium-Sheldon-3 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | hytting/DialoGPT-medium-Sheldon-4 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Jodsa/camembert_clf | null | [
"transformers",
"pytorch",
"camembert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Jodsa/camembert_mlm | null | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Joemar0990/Joemar | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Joguita/Giovanna | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JohnCCM/DialogGPT-small-harrypotter | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Johnnil/model_name | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Johnnil/prestoBERT | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Jon/model_name | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Jon/testRetailModel | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
# roberta-base-bne-finetuned-catalonia-independence-detector
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the catalonia_independence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9415
- Accuracy: 0.7881
<details>
## Model description
The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target: the independence of Catalonia.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
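A minimal sketch of how the hyperparameters above map onto `transformers.TrainingArguments` (values are copied from the list; `output_dir` and anything not listed are illustrative assumptions):
```python
from transformers import TrainingArguments

# Hedged sketch: the card's hyperparameters expressed as TrainingArguments.
training_args = TrainingArguments(
    output_dir="./results",         # assumed; the card does not state it
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,                 # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,              # and epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```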
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 378 | 0.5534 | 0.7558 |
| 0.6089 | 2.0 | 756 | 0.5315 | 0.7643 |
| 0.2678 | 3.0 | 1134 | 0.7336 | 0.7816 |
| 0.0605 | 4.0 | 1512 | 0.8809 | 0.7866 |
| 0.0605 | 5.0 | 1890 | 0.9415 | 0.7881 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector"
independence_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
independence_analysis(
"Junqueras, sobre la decisión judicial sobre Puigdemont: La justicia que falta en el Estado llega y llegará de Europa"
)
# Output:
[{'label': 'FAVOR', 'score': 0.9936726093292236}]
independence_analysis(
"El desafío independentista queda adormecido, y eso que el Gobierno ha sido muy claro en que su propuesta para Cataluña es una agenda de reencuentro, centrada en inversiones e infraestructuras")
# Output:
[{'label': 'AGAINST', 'score': 0.7508948445320129}]
independence_analysis(
"Desconvocada la manifestación del domingo en Barcelona en apoyo a Puigdemont"
)
# Output:
[{'label': 'NEUTRAL', 'score': 0.9966907501220703}]
```
[Open in Colab](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Catalonia_independence_Detector_(SPANISH).ipynb#scrollTo=uNMOXJz38W6U)
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
Thanks to HF.co & [@lewtun](https://github.com/lewtun) for the dataset ;)
> Special thanks to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/) | {"language": "es", "license": "apache-2.0", "tags": ["spanish"], "datasets": ["catalonia_independence"], "metrics": ["accuracy"], "widget": [{"text": "Junqueras, sobre la decisi\u00f3n judicial sobre Puigdemont: La justicia que falta en el Estado llega y llegar\u00e1 de Europa"}, {"text": "Desconvocada la manifestaci\u00f3n del domingo en Barcelona en apoyo a Puigdemont"}], "model-index": [{"name": "roberta-base-bne-finetuned-mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "catalonia_independence", "type": "catalonia_independence", "args": "spanish"}, "metrics": [{"type": "accuracy", "value": 0.7880893300248138, "name": "Accuracy"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "catalonia_independence", "type": "catalonia_independence", "config": "catalan", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.4592039800995025, "name": "Accuracy", "verified": true}, {"type": "precision", "value": 0.6104489964825159, "name": "Precision Macro", "verified": true}, {"type": "precision", "value": 0.4592039800995025, "name": "Precision Micro", "verified": true}, {"type": "precision", "value": 0.6167123723406555, "name": "Precision Weighted", "verified": true}, {"type": "recall", "value": 0.4146479268294389, "name": "Recall Macro", "verified": true}, {"type": "recall", "value": 0.4592039800995025, "name": "Recall Micro", "verified": true}, {"type": "recall", "value": 0.4592039800995025, "name": "Recall Weighted", "verified": true}, {"type": "f1", "value": 0.33416407167650636, "name": "F1 Macro", "verified": true}, {"type": "f1", "value": 0.4592039800995025, "name": "F1 Micro", "verified": true}, {"type": "f1", "value": 0.34549318538357193, "name": "F1 Weighted", "verified": true}, {"type": "loss", "value": 3.393402099609375, "name": "loss", "verified": true}]}]}]} | JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"spanish",
"es",
"dataset:catalonia_independence",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# roberta-base-bne-finetuned-ciberbullying-spanish
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on a dataset generated by scraping social networks (Twitter, YouTube, ...) to detect cyberbullying in Spanish.
It achieves the following results on the evaluation set:
- Loss: 0.1657
- Accuracy: 0.9607
## Training and evaluation data
I used the concatenation of multiple datasets generated by scraping social networks (Twitter, YouTube, Discord, ...) to fine-tune this model. The total number of sentence pairs is above 360k.
## Training procedure
<details>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.1512 | 1.0 | 22227 | 0.9501 | 0.1418 |
| 0.1253 | 2.0 | 44454 | 0.9567 | 0.1499 |
| 0.0973 | 3.0 | 66681 | 0.9594 | 0.1397 |
| 0.0658 | 4.0 | 88908 | 0.9607 | 0.1657 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-bne-finetuned-ciberbullying-spanish"
bullying_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
bullying_analysis(
"Desde que te vi me enamoré de ti."
)
# Output:
[{'label': 'Not_bullying', 'score': 0.9995710253715515}]
bullying_analysis(
"Eres tan fea que cuando eras pequeña te echaban de comer por debajo de la puerta."
)
# Output:
[{'label': 'Bullying', 'score': 0.9918262958526611}]
```
[Open in Colab](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Cyberbullying_detection_(SPANISH).ipynb)
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
> Special thanks to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/) | {"language": "es", "tags": ["spanish"], "metrics": ["accuracy"], "widget": [{"text": "Eres mas peque\u00f1o que un pitufo!"}, {"text": "Eres muy feo!"}, {"text": "Odio tu forma de hablar!"}, {"text": "Eres tan fea que cuando eras peque\u00f1a te echaban de comer por debajo de la puerta."}]} | JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"spanish",
"es",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-mnli
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2869
- Accuracy: 0.9012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3222 | 1.0 | 1255 | 0.2869 | 0.9012 |
| 0.2418 | 2.0 | 2510 | 0.3125 | 0.8987 |
| 0.1726 | 3.0 | 3765 | 0.4120 | 0.8943 |
| 0.0685 | 4.0 | 5020 | 0.5239 | 0.8919 |
| 0.0245 | 5.0 | 6275 | 0.5910 | 0.8947 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "roberta-base-bne-finetuned-mnli", "results": []}]} | JonatanGk/roberta-base-bne-finetuned-hate-speech-offensive-spanish | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-sqac
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9924 | 1.0 | 1196 | 0.8670 |
| 0.474 | 2.0 | 2392 | 0.8923 |
| 0.1637 | 3.0 | 3588 | 1.2066 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
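### Model in action 🚀
A minimal usage sketch in the style of the sibling cards (the question and context below are illustrative, not from the card):
```python
from transformers import pipeline

model_path = "JonatanGk/roberta-base-bne-finetuned-sqac"
qa = pipeline("question-answering", model=model_path, tokenizer=model_path)

# Illustrative Spanish example; replace with your own question and context.
result = qa(question="¿Dónde vivo?", context="Me llamo Sara y vivo en Barcelona.")
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'Barcelona'}
```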
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["sqac"], "model-index": [{"name": "roberta-base-bne-finetuned-sqac", "results": []}]} | JonatanGk/roberta-base-bne-finetuned-sqac | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:sqac",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# roberta-base-ca-finetuned-catalonia-independence-detector
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the catalonia_independence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6065
- Accuracy: 0.7612
<details>
## Training and evaluation data
The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target: the independence of Catalonia.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 377 | 0.6311 | 0.7453 |
| 0.7393 | 2.0 | 754 | 0.6065 | 0.7612 |
| 0.5019 | 3.0 | 1131 | 0.6340 | 0.7547 |
| 0.3837 | 4.0 | 1508 | 0.6777 | 0.7597 |
| 0.3837 | 5.0 | 1885 | 0.7232 | 0.7582 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector"
independence_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
independence_analysis(
"Assegura l'expert que en un 46% els catalans s'inclouen dins del que es denomina com el doble sentiment identitari. És a dir, se senten tant catalans com espanyols. 1 de cada cinc, en canvi, té un sentiment excloent, només se senten catalans, i un 4% sol espanyol."
)
# Output:
[{'label': 'AGAINST', 'score': 0.7457581758499146}]
independence_analysis(
"Llarena demana la detenció de Comín i Ponsatí aprofitant que són a Itàlia amb Puigdemont"
)
# Output:
[{'label': 'NEUTRAL', 'score': 0.7436802983283997}]
independence_analysis(
"Puigdemont, a l'estat espanyol: Quatre anys després, ens hem guanyat el dret a dir prou"
)
# Output:
[{'label': 'FAVOR', 'score': 0.9040119647979736}]
```
[Open in Colab](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Catalonia_independence_Detector_(CATALAN).ipynb#scrollTo=j29NHJtOyAVU)
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
Thanks to HF.co & [@lewtun](https://github.com/lewtun) for the dataset ;)
> Special thanks to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/) | {"language": "ca", "license": "apache-2.0", "tags": ["catalan"], "datasets": ["catalonia_independence"], "metrics": ["accuracy"], "widget": [{"text": "Puigdemont, a l'estat espanyol: Quatre anys despr\u00e9s, ens hem guanyat el dret a dir prou"}, {"text": "Llarena demana la detenci\u00f3 de Com\u00edn i Ponsat\u00ed aprofitant que s\u00f3n a It\u00e0lia amb Puigdemont"}, {"text": "Assegura l'expert que en un 46% els catalans s'inclouen dins del que es denomina com el doble sentiment identitari. \u00c9s a dir, se senten tant catalans com espanyols. 1 de cada cinc, en canvi, t\u00e9 un sentiment excloent, nom\u00e9s se senten catalans, i un 4% sol espanyol."}], "model-index": [{"name": "roberta-base-ca-finetuned-mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "catalonia_independence", "type": "catalonia_independence", "args": "catalan"}, "metrics": [{"type": "accuracy", "value": 0.7611940298507462, "name": "Accuracy"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "catalonia_independence", "type": "catalonia_independence", "config": "catalan", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.7208955223880597, "name": "Accuracy", "verified": true}, {"type": "precision", "value": 0.7532458247651523, "name": "Precision Macro", "verified": true}, {"type": "precision", "value": 0.7208955223880597, "name": "Precision Micro", "verified": true}, {"type": "precision", "value": 0.7367396361532118, "name": "Precision Weighted", "verified": true}, {"type": "recall", "value": 0.6880645531209203, "name": "Recall Macro", "verified": true}, {"type": "recall", "value": 0.7208955223880597, "name": "Recall Micro", "verified": true}, {"type": "recall", "value": 0.7208955223880597, "name": "Recall Weighted", "verified": true}, {"type": "f1", "value": 0.7013044744309381, "name": "F1 Macro", "verified": true}, {"type": "f1", "value": 0.7208955223880597, "name": "F1 Micro", "verified": true}, {"type": "f1", "value": 0.713640086434487, "name": "F1 Weighted", "verified": true}, {"type": "loss", "value": 0.6895929574966431, "name": "loss", "verified": true}]}]}]} | JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"catalan",
"ca",
"dataset:catalonia_independence",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | # roberta-base-ca-finetuned-cyberbullying-catalan
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on a dataset generated by scraping social networks (Twitter, YouTube, ...) to detect cyberbullying in Catalan.
It achieves the following results on the evaluation set:
- Loss: 0.1508
- Accuracy: 0.9665
## Training and evaluation data
I used the concatenation of multiple datasets generated by scraping social networks (Twitter, YouTube, Discord, ...) to fine-tune this model. The total number of sentence pairs is above 410k. It was trained with a method similar to [roberta-base-bne-finetuned-cyberbullying-spanish](https://huggingface.co/JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish).
## Training procedure
<details>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-ca-finetuned-ciberbullying-catalan"
bullying_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
bullying_analysis(
"Des que et vaig veure m'en vaig enamorar de tu."
)
# Output:
[{'label': 'Not_bullying', 'score': 0.9996786117553711}]
bullying_analysis(
"Ets tan lletja que et donaven de menjar per sota la porta."
)
# Output:
[{'label': 'Bullying', 'score': 0.9927878975868225}]
```
[Open in Colab](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Cyberbullying_detection_(CATALAN).ipynb)
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
> Special thanks to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/)
| {"language": "ca", "tags": ["catalan"], "metrics": ["accuracy"], "widget": [{"text": "Ets m\u00e9s petita que un barrufet!!"}, {"text": "Ets tan lletja que et donaven de menjar per sota la porta."}]} | JonatanGk/roberta-base-ca-finetuned-cyberbullying-catalan | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"catalan",
"ca",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ca-finetuned-mnli
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4137
- Accuracy: 0.8778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3699 | 1.0 | 1255 | 0.3712 | 0.8669 |
| 0.3082 | 2.0 | 2510 | 0.3401 | 0.8766 |
| 0.2375 | 3.0 | 3765 | 0.4137 | 0.8778 |
| 0.1889 | 4.0 | 5020 | 0.4671 | 0.8733 |
| 0.1486 | 5.0 | 6275 | 0.5205 | 0.8749 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "roberta-base-ca-finetuned-mnli", "results": []}]} | JonatanGk/roberta-base-ca-finetuned-hate-speech-offensive-catalan | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ca-finetuned-mnli
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the tecla dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9354
- Accuracy: 0.7362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8465 | 1.0 | 6888 | 0.8222 | 0.6990 |
| 0.6966 | 2.0 | 13776 | 0.7872 | 0.7157 |
| 0.5643 | 3.0 | 20664 | 0.8060 | 0.7268 |
| 0.4435 | 4.0 | 27552 | 0.8470 | 0.7333 |
| 0.3206 | 5.0 | 34440 | 0.9354 | 0.7362 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tecla"], "metrics": ["accuracy"], "model-index": [{"name": "roberta-base-ca-finetuned-mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tecla", "type": "tecla", "args": "tecla"}, "metrics": [{"type": "accuracy", "value": 0.7361816335412737, "name": "Accuracy"}]}]}]} | JonatanGk/roberta-base-ca-finetuned-tecla | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:tecla",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | JonathanCmitchell/model_name | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JonathanLehner/Chatbot | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JonathanSum/another-dummy-model | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JonathanSum/code-search-net-tokenizer | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | JonathanSum/dummy-model | null | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | This is a dummy model. | {} | JonathanSum/new-dummy-model | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | JonathanSum/wav2vec2-large-xls-r-300m-zh-HK-colab_round | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | # Barney Calhoun DialoGPT Model | {"tags": ["conversational"]} | Jonesy/DialoGPT-medium_Barney | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | # Family Guy DialoGPT Model | {"tags": ["conversational"]} | Jonesy/FG_OLD | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | # Johnny Test DialoGPT Model | {"tags": ["conversational"]} | Jonesy/DialoGPT-small_JT | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Jonghyun/model_test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Joragasy/SmartLayers-finetuned-ner | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Joragasy/custom_ner_model | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | This is a smaller version of the google/mt5-base model with only Spanish and some English embeddings, trained on 60k Spanish MLSum examples for summarization.
You can use it with the prefix "summarize:".
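A minimal usage sketch, assuming the standard `transformers` seq2seq API (the input text and `max_length` are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "JorgeSarry/est5-summarize"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Prepend the "summarize:" prefix to the Spanish text to be summarized.
text = "summarize: " + "El Gobierno aprobó ayer un paquete de medidas económicas..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=96)  # max_length is an assumption
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```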
| {"language": "es"} | JorgeSarry/est5-summarize | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"es",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers | This is a smaller version of the google/mt5-base model with only Spanish and some English embeddings, trained on 60k Spanish WikiEdits examples for sentence simplification.
You can use it with the prefix "simplify:".
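The same pattern also works through the `text2text-generation` pipeline; a minimal sketch (the example sentence is illustrative):
```python
from transformers import pipeline

simplifier = pipeline("text2text-generation", model="JorgeSarry/est5base-simplify")
# Prepend the "simplify:" prefix to the Spanish sentence to be simplified.
result = simplifier("simplify: El dictamen fue emitido por la comisión parlamentaria correspondiente.")
print(result[0]["generated_text"])
```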
| {"language": "es"} | JorgeSarry/est5base-simplify | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"es",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers | This is a smaller version of the google/mt5-base model with only Spanish and some English embeddings left, following the procedure outlined here: https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90
The original model has 582M parameters, with 384M of them being input and output embeddings.
After shrinking the SentencePiece vocabulary from 250K to 30K (the top 10K English and top 20K Spanish tokens), the number of model parameters was reduced to 244M, shrinking the model size from 2.2GB to 0.9GB, 42% of the original.
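A back-of-the-envelope check of those figures, assuming mt5-base's hidden size of 768 and untied input/output embeddings:
```python
d_model = 768                           # mt5-base hidden size (assumption)
orig_vocab, new_vocab = 250_000, 30_000
orig_emb = 2 * orig_vocab * d_model     # ~384M input + output embedding params
new_emb = 2 * new_vocab * d_model       # ~46M after shrinking the vocabulary
total = 582_000_000 - orig_emb + new_emb
print(round(total / 1e6))               # ~244M, matching the figure above
```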
| {"language": "es"} | JorgeSarry/est5base | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"es",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-ner
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0626
- Precision: 0.9252
- Recall: 0.9330
- F1: 0.9291
- Accuracy: 0.9848
## Model description
More information needed
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities, and post-processing of the results may be necessary to handle those cases.
#### How to use
You can use this model with the Transformers *pipeline* for NER.
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jorgeutd/albert-base-v2-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("Jorgeutd/albert-base-v2-finetuned-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Scott and I live in Ohio"
ner_results = nlp(example)
print(ner_results)
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 220 | 0.0863 | 0.8827 | 0.8969 | 0.8898 | 0.9773 |
| No log | 2.0 | 440 | 0.0652 | 0.8951 | 0.9199 | 0.9073 | 0.9809 |
| 0.1243 | 3.0 | 660 | 0.0626 | 0.9191 | 0.9208 | 0.9200 | 0.9827 |
| 0.1243 | 4.0 | 880 | 0.0585 | 0.9227 | 0.9281 | 0.9254 | 0.9843 |
| 0.0299 | 5.0 | 1100 | 0.0626 | 0.9252 | 0.9330 | 0.9291 | 0.9848 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": "en", "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "widget": [{"text": "My name is Scott and I live in Columbus."}, {"text": "Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne."}], "base_model": "albert-base-v2", "model-index": [{"name": "albert-base-v2-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9252213840603477, "name": "Precision"}, {"type": "recall", "value": 0.9329732113328189, "name": "Recall"}, {"type": "f1", "value": 0.9290811285541773, "name": "F1"}, {"type": "accuracy", "value": 0.9848205157332728, "name": "Accuracy"}]}]}]} | Jorgeutd/albert-base-v2-finetuned-ner | null | [
"transformers",
"pytorch",
"albert",
"token-classification",
"generated_from_trainer",
"en",
"dataset:conll2003",
"base_model:albert-base-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
## bert-base-uncased
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
- Problem type: Text Classification (adverse drug effects detection).
## Hyperparameters
```json
{
"do_eval": true,
"do_train": true,
"fp16": true,
"load_best_model_at_end": true,
"model_name": "bert-base-uncased",
"num_train_epochs": 10,
"per_device_eval_batch_size": 16,
"per_device_train_batch_size": 16,
"learning_rate":5e-5
}
```
## Validation Metrics
| key | value |
| --- | ----- |
| eval_accuracy | 0.9298021697511167 |
| eval_auc | 0.8902672664394546 |
| eval_f1 | 0.827315541601256 |
| eval_loss | 0.17835010588169098 |
| eval_recall | 0.8234375 |
| eval_precision | 0.831230283911672 |
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I got a rash from taking acetaminophen"}' https://api-inference.huggingface.co/models/Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2
```
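An equivalent call from Python, assuming the standard Inference API pattern (`YOUR_API_KEY` is a placeholder):
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2"
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder token
payload = {"inputs": "I got a rash from taking acetaminophen"}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```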
""" | {"language": "en", "license": "apache-2.0", "tags": ["sagemaker", "bert-base-uncased", "text classification"], "datasets": ["adecorpusv2"], "widget": [{"text": "I got a rash from taking acetaminophen"}], "model-index": [{"name": "BERT-ade_corpus", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "ade_corpus_v2Ade_corpus_v2_classification", "type": "ade_corpus"}, "metrics": [{"type": "accuracy", "value": 92.98, "name": "Validation Accuracy"}, {"type": "f1", "value": 82.73, "name": "Validation F1"}]}]}]} | Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2 | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sagemaker",
"bert-base-uncased",
"text classification",
"en",
"dataset:adecorpusv2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-surveyclassification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on a custom survey dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2818
- Accuracy: 0.9097
- F1: 0.9097
## Model description
More information needed
#### Limitations and bias
This model is limited by its training dataset of survey results for a particular customer service domain. This may not generalize well for all use cases in different domains.
#### How to use
You can use this model with the Transformers *pipeline* for Text Classification.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("Jorgeutd/bert-base-uncased-finetuned-surveyclassification")
model = AutoModelForSequenceClassification.from_pretrained("Jorgeutd/bert-base-uncased-finetuned-surveyclassification")
text_classifier = pipeline("text-classification", model=model,tokenizer=tokenizer, device=0)
example = "The agent on the phone was very helpful and nice to me."
results = text_classifier(example)
print(results)
```
## Training and evaluation data
Custom survey dataset.
## Training procedure
SageMaker notebook instance.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4136 | 1.0 | 902 | 0.2818 | 0.9097 | 0.9097 |
| 0.2213 | 2.0 | 1804 | 0.2990 | 0.9077 | 0.9077 |
| 0.1548 | 3.0 | 2706 | 0.3507 | 0.9026 | 0.9026 |
| 0.1034 | 4.0 | 3608 | 0.4692 | 0.9011 | 0.9011 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": "en", "license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "widget": [{"text": "The agent on the phone was very helpful and nice to me."}], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-base-uncased-finetuned-surveyclassification", "results": []}]} | Jorgeutd/bert-base-uncased-finetuned-surveyclassification | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-ner
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0778
- Precision: 0.9505
- Recall: 0.9575
- F1: 0.9540
- Accuracy: 0.9886
## Model description
More information needed
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities, and post-processing of the results may be necessary to handle those cases.
#### How to use
You can use this model with the Transformers *pipeline* for NER.
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jorgeutd/bert-large-uncased-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("Jorgeutd/bert-large-uncased-finetuned-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Scott and I live in Ohio"
ner_results = nlp(example)
print(ner_results)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1997 | 1.0 | 878 | 0.0576 | 0.9316 | 0.9257 | 0.9286 | 0.9837 |
| 0.04 | 2.0 | 1756 | 0.0490 | 0.9400 | 0.9513 | 0.9456 | 0.9870 |
| 0.0199 | 3.0 | 2634 | 0.0557 | 0.9436 | 0.9540 | 0.9488 | 0.9879 |
| 0.0112 | 4.0 | 3512 | 0.0602 | 0.9443 | 0.9569 | 0.9506 | 0.9881 |
| 0.0068 | 5.0 | 4390 | 0.0631 | 0.9451 | 0.9589 | 0.9520 | 0.9882 |
| 0.0044 | 6.0 | 5268 | 0.0638 | 0.9510 | 0.9567 | 0.9538 | 0.9885 |
| 0.003 | 7.0 | 6146 | 0.0722 | 0.9495 | 0.9560 | 0.9527 | 0.9885 |
| 0.0016 | 8.0 | 7024 | 0.0762 | 0.9491 | 0.9595 | 0.9543 | 0.9887 |
| 0.0018 | 9.0 | 7902 | 0.0769 | 0.9496 | 0.9542 | 0.9519 | 0.9883 |
| 0.0009 | 10.0 | 8780 | 0.0778 | 0.9505 | 0.9575 | 0.9540 | 0.9886 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": "en", "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "widget": [{"text": "My name is Scott and I live in Columbus."}, {"text": "My name is Scott and I am calling from Buffalo, NY. I would like to file a complain with United Airlines."}, {"text": "Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne."}], "base_model": "bert-large-uncased", "model-index": [{"name": "bert-large-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9504719600222099, "name": "Precision"}, {"type": "recall", "value": 0.9574896520863632, "name": "Recall"}, {"type": "f1", "value": 0.9539679001337494, "name": "F1"}, {"type": "accuracy", "value": 0.9885618059637473, "name": "Accuracy"}]}]}]} | Jorgeutd/bert-large-uncased-finetuned-ner | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"en",
"dataset:conll2003",
"base_model:bert-large-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | ## roberta-base
This model was fine-tuned using Amazon SageMaker and the new Hugging Face Deep Learning container.
- Problem type: Multi Class Text Classification (emotion detection).
It achieves the following results on the evaluation set:
- Loss: 0.1613253802061081
- f1: 0.9413321705151999
## Hyperparameters
```json
{
"epochs": 10,
"train_batch_size": 16,
"learning_rate": 3e-5,
"weight_decay": 0.01,
"load_best_model_at_end": true,
"model_name": "roberta-base",
"do_eval": true
}
```
## Validation Metrics
| key | value |
| --- | ----- |
| eval_accuracy | 0.941 |
| eval_f1 | 0.9413321705151999 |
| eval_loss | 0.1613253802061081|
| eval_recall | 0.941 |
| eval_precision | 0.9419519436781406 |
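A minimal usage sketch with the standard text-classification pipeline (the example sentence is the card's widget text):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Jorgeutd/sagemaker-roberta-base-emotion")
print(classifier("I am really upset that I have to call up to three times to the number on the back of my insurance card for my call to be answer"))
```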
| {"language": "en", "license": "apache-2.0", "tags": ["sagemaker", "roberta-base", "text classification"], "datasets": ["emotion"], "widget": [{"text": "I am really upset that I have to call up to three times to the number on the back of my insurance card for my call to be answer"}], "model-index": [{"name": "sagemaker-roberta-base-emotion", "results": [{"task": {"type": "text-classification", "name": "Multi Class Text Classification"}, "dataset": {"name": "emotion", "type": "emotion"}, "metrics": [{"type": "accuracy", "value": 94.1, "name": "Validation Accuracy"}, {"type": "f1", "value": 94.13, "name": "Validation F1"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "default", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.931, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmM1ZmI0NjZhYjdlMWU4NWUwZmFjODFmMmM5MTlhMmEyMmQwOTk2NjQ5ZDNlYmFlMGEyMTY4Y2JiMTcwM2MwNiIsInZlcnNpb24iOjF9.haDbUk1y7nW1e_ext0s1xKefyOzep-XFa1HEkNQEcNV0cHCSRb-0YFakMf5Iee6q_EWFUS-QYxNkgEBlbw3fCQ"}, {"type": "precision", "value": 0.8833042147663716, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjZkOTQyMzkwYjE1ZWQ5YjJkMTEzNmIyZmFlMjkwY2YxNzA3OWE0ZDk5YjJlOWVhOTU5Nzc4ZTk5Mzg5NDcxOCIsInZlcnNpb24iOjF9._XhknNSsiailHiMr1SH9ki7SRswR_b-embALunoCjhBssh9WERkv0z1xpsbw7ORo0wx7WCslZRdJWaQoXOmgDQ"}, {"type": "precision", "value": 0.931, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGY0MTc0ZDBiYmZlYmFmMTcyYjk5MWM0MTRmYTlhY2U1ODY5NTQzNTQ5YjAzN2U0YjljNDAzZDQ5NDBkZDUwYyIsInZlcnNpb24iOjF9.313HYKetR4S4kjcMvEk9Yj2J-Ox8ZqvVk4FLrF6UmxlXYZ4F3put-89BEOxGl_ScugjjAWhKY1pHLPYpKz9PAA"}, {"type": "precision", "value": 0.9337002742192515, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjQ1ZDIzNmE3MjljMTk2NTBmNzcyMTEyOTUwZTljYTA2MjIwY2E4ZThkNGVjYjQwNzU3MTcxMzBiYzJkNWIzOSIsInZlcnNpb24iOjF9.6yXKQ9WS9AWdt1jxixtA5O2S1bcPTKQqIOw291Ytam8OI-zdTI2jwltT6JdU4lHdhTi5797zeNldJMCxGPR2DQ"}, {"type": "recall", "value": 0.9087144572668905, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzJhNTFmNGJkYTAxNzRiOWQ4YzQyMGY5NGQxMjBiMmRjZTA5OTM2ZjM0NWY0ZDJiOTIyODQzZTZkMzEzZmY4YSIsInZlcnNpb24iOjF9.Fy1gkGvRiyANGU6nYgc5QbhccqAfb4PjxEk1EkJAIAZJjs-f0hffwUDlJt_6gRY3KKnoU2kKg1XxpWjybRY7BQ"}, {"type": "recall", "value": 0.931, "name": "Recall Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTgwYWJmZDAzM2VkOGNjNjY3NjViOTFiMTYyZDc4ZDIzY2VhNTcwMDg3MjdiOTI4Nzc5ODI4N2ExYzY5ODAzMyIsInZlcnNpb24iOjF9.bEW-tZ-5JqkPDDfqkrdvzlzTGEJtYqRACZI1Jv7C8fWkJ8uJj0eQ8TDhcdGGDnFML-q1z3tnkO6PJuK9V2IxAg"}, {"type": "recall", "value": 0.931, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTM2ZDk4NDQ2YWIwM2VjNzUxZjQ0YzU4MzViZGMzYzA3YjlhMTI1NjQwOTM3M2U4NGJhNTMxYzllMjRkMzU2NSIsInZlcnNpb24iOjF9.k9yprOWEoB0-k306GyDGF-g4uw3kABLc8iE_3E5ZYfVbo9VHPo61GuSsWJyYJ7_aq6zWbzgfOFEwUeVjcmnaDA"}, {"type": "f1", "value": 0.8949974527433656, "name": "F1 Macro", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODg0ZDllYWJkYWZkMWY2NjEzYWIxMWIwMWUyZDhmNWEzM2FmN2E0MWEwOTIyMTM2YTI1MDdmYmRmZWQ5ZmVmNCIsInZlcnNpb24iOjF9.DUD3dfb4vRu-Z9YxvDErJaPLuZIEDBNsdqzkf4ee6dkOCOnYtUhGAybnxtGN1xSYsynXYhU-ymCajWcrVKUCAA"}, {"type": "f1", "value": 0.931, "name": "F1 Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGU0MTYyOTNjOTBmNzAxNjVlZmQxYmRkMmE5MWY2NzhlNjg0ZGZkMmNmZmI3Zjk1NjJlYTdjMGRhMDMwYzAzNCIsInZlcnNpb24iOjF9.h0wCmhwRT4qRZJcc2zGP3T7dF0_wKdKzTtSVoVWFOUzQZ3RoeY2Hfjl3XA7yyw9KnoDWnLiW8DU_5kOBX-peCQ"}, {"type": "f1", "value": 0.9318434300647934, "name": "F1 Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmU4OGY4M2NkYWExNjI3Yjk0YmYzNWJjZGQ5ZGNmYzc4ZDk4YzRmZDRiNmRkN2VlNDZhOGIwZDc3MzcxYjVlYiIsInZlcnNpb24iOjF9.qhwi7AV-7NSm1yVd8v1Ea3nTRAFXfqLMwUJ5PUbPSa11jJ0tZNOQVDXHMAD8fVmoueLgZNRUpPVIB881Sq3EBg"}, {"type": "loss", "value": 0.17379647493362427, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDdjODE2MjA5ODg2MmM2OWJmMjMzMzUzNGU1ZDc5NjRkNGU4N2VmNmM2NWE0YTEyYWMxNGUzN2M3YTkxNzUyMCIsInZlcnNpb24iOjF9.qcQWfHuRnfiluicR7gke3vm9u701hB4Bp0YaX2opaxL6d5DRCzuqAg-2kdmhhOL-8DW5JhY6gTrF14AEuEE9Cw"}]}]}]} | Jorgeutd/sagemaker-roberta-base-emotion | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"sagemaker",
"roberta-base",
"text classification",
"en",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri1Mix_enhsignle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set:
```yml
si_sdr: 14.743051006476085
si_sdr_imp: 11.293269700616385
sdr: 15.300522933671061
sdr_imp: 11.797860134458015
sir: Infinity
sir_imp: NaN
sar: 15.300522933671061
sar_imp: 11.797860134458015
stoi: 0.9310514162434267
stoi_imp: 0.13513159270288563
```
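A minimal inference sketch, assuming Asteroid's hub loading via `BaseModel.from_pretrained` (the file path is a placeholder; the mixture should be 16 kHz mono, matching the config above):
```python
import soundfile as sf
import torch
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri1Mix_enhsignle_16k")
mixture, sr = sf.read("mixture.wav", dtype="float32")  # placeholder path
with torch.no_grad():
    # Output shape is (batch, n_src, time); n_src is 1 for enhancement.
    est_sources = model(torch.from_numpy(mixture).unsqueeze(0))
sf.write("enhanced.wav", est_sources.squeeze().cpu().numpy(), sr)
```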
License notice:
This work "ConvTasNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri1Mix", "enh_single"]} | JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepclean_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri2Mix dataset.
Training config:
```yaml
data:
n_src: 2
sample_rate: 16000
segment: 3
task: sep_clean
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri2Mix min test set:
```yaml
si_sdr: 15.243671356901526
si_sdr_imp: 15.243034178473609
sdr: 15.668108919568112
sdr_imp: 15.578229918028036
sir: 25.295100756629957
sir_imp: 25.205219921301754
sar: 16.307682590197313
sar_imp: -51.64989963759405
stoi: 0.9394951175291422
stoi_imp: 0.22640192740016568
```
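For two-source checkpoints like this one, Asteroid also provides a file-level helper; a minimal sketch (assuming `BaseModel.separate`, which writes one estimate file per source next to the input):
```python
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri2Mix_sepclean_16k")
# Assumed behavior: writes mixture_est1.wav and mixture_est2.wav beside the input.
model.separate("mixture.wav")
```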
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_16k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri2Mix_sepclean_16k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri2Mix", "sep_clean"]} | JorisCos/ConvTasNet_Libri2Mix_sepclean_16k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri2Mix",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepclean_8k`
Imported from [Zenodo](https://zenodo.org/record/3873572#.X9M69cLjJH4)
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri2Mix dataset.
Training config:
```yaml
data:
n_src: 2
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 2
```
Results:
On Libri2Mix min test set:
```yaml
si_sdr: 14.764543634468069
si_sdr_imp: 14.764029375607246
sdr: 15.29337970745095
sdr_imp: 15.114146605113111
sir: 24.092904661115366
sir_imp: 23.913669683141528
sar: 16.06055906916849
sar_imp: -51.980784441287454
stoi: 0.9311142440593033
stoi_imp: 0.21817376142710482
```
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri2Mix_sepclean_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri2Mix", "sep_clean"]} | JorisCos/ConvTasNet_Libri2Mix_sepclean_8k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri2Mix",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepnoisy_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri2Mix dataset.
Training config:
```yml
data:
n_src: 2
sample_rate: 16000
segment: 3
task: sep_noisy
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 2
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri2Mix min test set:
```yml
si_sdr: 10.617130949793383
si_sdr_imp: 12.551811412989263
sdr: 11.231867464482065
sdr_imp: 13.059765009747343
sir: 24.461138352988346
sir_imp: 24.371856452307703
sar: 11.5649982725426
sar_imp: 4.662525705768228
stoi: 0.8701085138712695
stoi_imp: 0.2245418019822898
```
License notice:
This work "ConvTasNet_Libri2Mix_sepnoisy_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri2Mix_sepnoisy_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri2Mix", "sep_noisy"]} | JorisCos/ConvTasNet_Libri2Mix_sepnoisy_16k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri2Mix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k`
Imported from [Zenodo](https://zenodo.org/record/3874420#.X9I6NcLjJH4)
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri2Mix dataset.
Training config:
```yml
data:
n_src: 2
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
Results:
On Libri2Mix min test set :
```yml
si_sdr: 9.944424856077259
si_sdr_imp: 11.939395359731192
sdr: 10.701526190782072
sdr_imp: 12.481757547845662
sir: 22.633644975545575
sir_imp: 22.45666740833025
sar: 11.131644100944868
sar_imp: 4.248489589311784
stoi: 0.852048619949357
stoi_imp: 0.2071994899565506
```
License notice:
This work "ConvTasNet_Libri2Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri2Mix_sepnoisy_8k" is licensed under A[Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri2Mix", "sep_noisy"]} | JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri2Mix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri3Mix dataset.
Training config:
```yaml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_clean
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yaml
si_sdr: 8.932601610824145
si_sdr_imp: 12.299341066588594
sdr: 9.557260814240447
sdr_imp: 12.76957128385349
sir: 17.387646884037455
sir_imp: 20.599955591768484
sar: 10.686885056960504
sar_imp: -55.8894643263213
stoi: 0.8481258332025354
stoi_imp: 0.25528367853750356
```
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_16k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_16k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri3Mix", "sep_clean"]} | JorisCos/ConvTasNet_Libri3Mix_sepclean_16k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri3Mix",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_8k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yaml
si_sdr: 8.581797049575108
si_sdr_imp: 11.977037288467368
sdr: 9.305885208641385
sdr_imp: 12.3943409734845
sir: 16.42030534048559
sir_imp: 19.508759460400984
sar: 10.641943911079238
sar_imp: -56.4345187842095
stoi: 0.8365148408724333
stoi_imp: 0.24401766199806396
```
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri3Mix", "sep_clean"]} | JorisCos/ConvTasNet_Libri3Mix_sepclean_8k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri3Mix",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_noisy
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yml
si_sdr: 5.926151147554517
si_sdr_imp: 10.282912158535625
sdr: 6.700975236867358
sdr_imp: 10.882972447337504
sir: 15.364110064569388
sir_imp: 18.574476587171688
sar: 7.918866830474568
sar_imp: -0.9638973409971135
stoi: 0.7713777027310713
stoi_imp: 0.2078696167973911
```
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).
"ConvTasNet_Libri3Mix_sepnoisy_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri3Mix", "sep_noisy"]} | JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri3Mix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yml
si_sdr: 5.978836560066222
si_sdr_imp: 10.388889689413096
sdr: 6.8651365291740225
sdr_imp: 10.928018056925016
sir: 14.997089638783114
sir_imp: 18.08248357801549
sar: 8.127504792061933
sar_imp: -0.7869320540959925
stoi: 0.7669414686111115
stoi_imp: 0.20416563213078837
```
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri3Mix_sepnoisy_8k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri3Mix", "sep_noisy"]} | JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri3Mix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/DCCRNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
stft_kernel_size: 400
stft_n_filters: 512
stft_stride: 100
masknet:
architecture: DCCRN-CL
n_src: 1
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 12
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 13.329767398333798
si_sdr_imp: 9.879986092474098
sdr: 13.87279932997016
sdr_imp: 10.370136530757103
sir: Infinity
sir_imp: NaN
sar: 13.87279932997016
sar_imp: 10.370136530757103
stoi: 0.9140907015623948
stoi_imp: 0.11817087802185405
```
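Note that `sir` is infinite and `sir_imp` undefined here because enhancement has a single target source, so there are no interfering speakers to measure against. A minimal enhancement sketch, assuming `asteroid` and `torchaudio` are installed and with placeholder file names:
```python
import torch
import torchaudio
from asteroid.models import BaseModel

# BaseModel.from_pretrained resolves the architecture from the checkpoint.
model = BaseModel.from_pretrained("JorisCos/DCCRNet_Libri1Mix_enhsingle_16k")
model.eval()

noisy, sr = torchaudio.load("noisy.wav")  # placeholder 16 kHz mono file
with torch.no_grad():
    enhanced = model(noisy)  # (batch, 1, time): the single denoised source
torchaudio.save("enhanced.wav", enhanced.squeeze(0), sr)
```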
License notice:
This work "DCCRNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DCCRNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "DCCRNet", "audio-to-audio", "speech-enhancement"], "datasets": ["Libri1Mix", "enh_single"]} | JorisCos/DCCRNet_Libri1Mix_enhsingle_16k | null | [
"asteroid",
"pytorch",
"audio",
"DCCRNet",
"audio-to-audio",
"speech-enhancement",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/DCUNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
stft_n_filters: 1024
stft_kernel_size: 1024
stft_stride: 256
masknet:
architecture: Large-DCUNet-20
fix_length_mode: pad
n_src: 1
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 2
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 13.154035391645971
si_sdr_imp: 9.704254085786271
sdr: 13.568058873121435
sdr_imp: 10.065396073908367
sar: 13.568058873121435
sar_imp: 10.065396073908367
stoi: 0.9199373340235417
stoi_imp: 0.12401751048300132
```
License notice:
This work "DCUNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DCUNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "DCUNet", "audio-to-audio"], "datasets": ["Libri1Mix", "enh_single"]} | JorisCos/DCUNet_Libri1Mix_enhsingle_16k | null | [
"asteroid",
"pytorch",
"audio",
"DCUNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/DPRNNTasNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 1
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 2
n_filters: 64
stride: 1
masknet:
bidirectional: true
bn_chan: 128
chunk_size: 250
dropout: 0
hid_size: 128
hop_size: 125
in_chan: 64
mask_act: sigmoid
n_repeats: 6
n_src: 1
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 2
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 14.7228101708889
si_sdr_imp: 11.2730288650292
sdr: 15.35661405197161
sdr_imp: 11.853951252758595
sir: Infinity
sir_imp: NaN
sar: 15.35661405197161
sar_imp: 11.853951252758595
stoi: 0.9300461826351578
stoi_imp: 0.13412635909461715
```
License notice:
This work "DPRNNTasNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DPRNNTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "DPRNNTasNet", "audio-to-audio"], "datasets": ["Libri1Mix", "enh_single"]} | JorisCos/DPRNNTasNet-ks2_Libri1Mix_enhsingle_16k | null | [
"asteroid",
"pytorch",
"audio",
"DPRNNTasNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/DPTNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 16
n_filters: 64
stride: 8
masknet:
bidirectional: true
chunk_size: 100
dropout: 0
ff_activation: relu
ff_hid: 256
hop_size: 50
in_chan: 64
mask_act: sigmoid
n_repeats: 2
n_src: 1
norm_type: gLN
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
scheduler:
d_model: 64
steps_per_epoch: 10000
training:
batch_size: 4
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 14.829670037349064
si_sdr_imp: 11.379888731489366
sdr: 15.395712644737149
sdr_imp: 11.893049845524112
sir: Infinity
sir_imp: NaN
sar: 15.395712644737149
sar_imp: 11.893049845524112
stoi: 0.9301948391058859
stoi_imp: 0.13427501556534832
```
License notice:
This work "DPTNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DPTNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "DPTNet", "audio-to-audio"], "datasets": ["Libri1Mix", "enh_single"]} | JorisCos/DPTNet_Libri1Mix_enhsingle_16k | null | [
"asteroid",
"pytorch",
"audio",
"DPTNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | JorisCos/FasNet | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | asteroid |
## Asteroid model `JorisCos/VAD_Net`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained for voice activity detection on the LibriVAD dataset.
Training config:
```yml
data:
segment: 3
train_dir: /home/jcosentino/VAD_dataset/metadata/sets/train.json
valid_dir: /home/jcosentino/VAD_dataset/metadata/sets/dev.json
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
main_args:
exp_dir: exp/full_not_causal_f1/
help: null
masknet:
bn_chan: 128
causal: false
hid_chan: 512
mask_act: relu
n_blocks: 3
n_repeats: 5
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments: {}
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On LibriVAD min test set :
```yml
accuracy: 0.8196149023502931,
precision: 0.8305009048356607,
recall: 0.8869202491310206,
f1_score: 0.8426184545700124
```
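A rough inference sketch follows; the loader, the output head, and the 0.5 decision threshold are all assumptions, since this card does not document the model's API:
```python
import torch
import torchaudio
from asteroid.models import BaseModel  # assumption: loads like other Asteroid checkpoints

model = BaseModel.from_pretrained("JorisCos/VAD_Net")
model.eval()

wav, sr = torchaudio.load("speech.wav")  # placeholder 8 kHz mono file
with torch.no_grad():
    scores = model(wav).squeeze()  # assumed frame-wise speech scores

is_speech = torch.sigmoid(scores) > 0.5  # hypothetical decision threshold
```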
License notice:
This work "VAD_Net" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The [DNS challenge](https://github.com/microsoft/DNS-Challenge) noises, [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/).
"VAD_Net" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "VADNet", "VAD", "Voice Activity Detection"], "datasets": ["LibriVAD"]} | JorisCos/VAD_Net | null | [
"asteroid",
"pytorch",
"audio",
"VADNet",
"VAD",
"Voice Activity Detection",
"dataset:LibriVAD",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | JosAbc123/Loken | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JoseRPrietoF/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JosepRC/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Youfeng/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JoshObi94/GPT-Neo | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JoshuaGhost/counter_assist | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Josiah/test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | # BART_Finetuned_CNN_dailymail
This repo contains a [bart-base](https://huggingface.co/facebook/bart-base) model fine-tuned on the [cnn_dailymail](https://huggingface.co/datasets/cnn_dailymail) dataset. | {} | Josmar/BART_Finetuned_CNN_dailymail | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Jour/Translation-Test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
translation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-fr
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.0+cpu
- Datasets 1.16.1
- Tokenizers 0.10.3
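A hedged inference sketch, assuming the checkpoint keeps the stock M2M100 tokenizer and that the fine-tuning direction is English to French (inferred from the kde4 en-fr data):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "Jour/m2m100_418M-fr"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"
inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```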
| {"license": "mit", "tags": ["translation", "generated_from_trainer"], "datasets": ["kde4"], "model-index": [{"name": "m2m100_418M-fr", "results": []}]} | Jour/m2m100_418M-fr | null | [
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Jour/marian-finetuned-kde4-en-to-fr-accelerate | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Jour/marian-finetuned-kde4-en-to-fr | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | JovenPai/bert_cn_finetunning | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | JovenPai/bert_finetunning_test | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Jtisch7/bertFinancialSent | null | [
"transformers",
"tf",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Juani/Matemags | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Julhialinda/Julhia | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Juliana/Jujubinha | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Morty DialoGPT Model | {"tags": ["conversational"]} | Julianqll/DialoGPT-small-finalmorty | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Rick Sanchez DialoGPT Model | {"tags": ["conversational"]} | Julianqll/DialoGPT-small-ricksanchez | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Juliet/Teste | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | ## Model description
This model was trained on the XED dataset and achieved:

- validation loss: 0.5995
- validation acc: 84.28% (ROC-AUC)

Labels are based on Plutchik's model of emotions and may be combined:

### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.8.0
- Tokenizers 0.10.3
| {} | JuliusAlphonso/dear-jarvis-monolith-xed-en | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dear-jarvis-v5
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 470 | 0.3106 |
| 0.3452 | 2.0 | 940 | 0.3064 |
| 0.2692 | 3.0 | 1410 | 0.3148 |
### Framework versions
- Transformers 4.7.0
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "datasets": [], "model_index": [{"name": "dear-jarvis-v5", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}}]}]} | JuliusAlphonso/dear-jarvis-v5 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | Labels are based on Plutchik's model of emotions and may be combined:
 | {} | JuliusAlphonso/distilbert-plutchik | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Junaid/URDU | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Jung/t5-base | null | [
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Jung/t5-large-finetuned | null | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Jung/t5-large | null | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7470
- Matthews Correlation: 0.5414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5237 | 1.0 | 535 | 0.5327 | 0.4248 |
| 0.347 | 2.0 | 1070 | 0.5105 | 0.5239 |
| 0.2344 | 3.0 | 1605 | 0.6639 | 0.5224 |
| 0.1672 | 4.0 | 2140 | 0.7470 | 0.5414 |
| 0.1228 | 5.0 | 2675 | 0.8352 | 0.5377 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
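A minimal inference sketch with the `transformers` pipeline; note the label names come from the checkpoint's config and may be generic `LABEL_0`/`LABEL_1`:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Jungwoo/distilbert-base-uncased-finetuned-cola",
)
# An acceptable vs. an unacceptable sentence, in CoLA terms.
print(classifier("The boy quickly ran across the finish line."))
print(classifier("The boy quickly ran across finish the line."))
```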
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.541356878970505, "name": "Matthews Correlation"}]}]}]} | Jungwoo/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Jungwoo/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Junjun/JUNJUN | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Junmai/klue-roberta-large-boolq-finetuned-v1 | null | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
multiple-choice | transformers | {} | Junmai/klue-roberta-large-copa-finetuned-v1 | null | [
"transformers",
"pytorch",
"roberta",
"multiple-choice",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
multiple-choice | transformers | {} | Junmai/pretrained-klue-roberta-v1 | null | [
"transformers",
"pytorch",
"roberta",
"multiple-choice",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Junxia/negCue | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | asteroid | ## Asteroid model
## Description:
- Code: The code corresponding to this pretrained model can be found [here](https://github.com/asteroid-team/asteroid/tree/master/egs/wsj0-mix-var/Multi-Decoder-DPRNN).
- Notebook: Colab Notebook with examples can be found [here](https://colab.research.google.com/drive/11MGx3_sgOrQrB6k8edyAvg5mGIxqR5ED?usp=sharing)
- [Paper](http://www.isle.illinois.edu/speech_web_lg/pubs/2021/zhu2021multi.pdf): "Multi-Decoder DPRNN: High Accuracy Source Counting and Separation", Junzhe Zhu, Raymond Yeh, Mark Hasegawa-Johnson. ICASSP(2021).
- Summary: This model achieves SOTA on the problem of source separation with an unknown number of speakers. It uses multiple decoder heads(each tackling a distinct number of speakers), in addition to a classifier head that selects which decoder head to use.
- [Project Page](https://junzhejosephzhu.github.io/Multi-Decoder-DPRNN/)
- [Original research repo](https://github.com/JunzheJosephZhu/MultiDecoder-DPRNN)
This model was trained by Joseph Zhu using the wsj0-mix-var/Multi-Decoder-DPRNN recipe in Asteroid.
It was trained on the `sep_count` task of the Wsj0MixVar dataset.
## Training config:
```yaml
filterbank:
n_filters: 64
kernel_size: 8
stride: 4
masknet:
n_srcs: [2, 3, 4, 5]
bn_chan: 128
hid_size: 128
chunk_size: 128
hop_size: 64
n_repeats: 8
mask_act: 'sigmoid'
bidirectional: true
dropout: 0
use_mulcat: false
training:
epochs: 200
batch_size: 2
num_workers: 2
half_lr: yes
lr_decay: yes
early_stop: yes
gradient_clipping: 5
optim:
optimizer: adam
lr: 0.001
weight_decay: 0.00000
data:
train_dir: "data/{}speakers/wav8k/min/tr"
valid_dir: "data/{}speakers/wav8k/min/cv"
task: sep_count
sample_rate: 8000
seglen: 4.0
minlen: 2.0
loss:
lambda: 0.05
```
## Results:
```yaml
Accuracy: 0.9723333333333334
P-Si-SNR: 10.36027378628496
```
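The head-selection idea can be summarized in a toy sketch; every shape and layer below is illustrative only and does not match the real recipe linked above:
```python
import torch
import torch.nn as nn

class MultiDecoderSketch(nn.Module):
    """Toy illustration of the selection mechanism only; shapes are made up."""

    def __init__(self, n_srcs=(2, 3, 4, 5), feat=64):
        super().__init__()
        self.encoder = nn.Conv1d(1, feat, kernel_size=8, stride=4)
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(feat, len(n_srcs))
        )
        # One mask decoder per candidate speaker count.
        self.decoders = nn.ModuleList(nn.Conv1d(feat, n, 1) for n in n_srcs)
        self.n_srcs = n_srcs

    def forward(self, mix):                              # mix: (batch, 1, time)
        feats = self.encoder(mix)
        idx = int(self.classifier(feats).argmax(-1)[0])  # pick a decoder head
        masks = self.decoders[idx](feats)                # (batch, n_src, frames)
        return masks, self.n_srcs[idx]

model = MultiDecoderSketch()
masks, n_spk = model(torch.randn(1, 1, 8000))
print(n_spk, masks.shape)
```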
### License notice:
This work "MultiDecoderDPRNN" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A)
by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for
Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only).
"MultiDecoderDPRNN" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Joseph Zhu.
| {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "MultiDecoderDPRNN"], "datasets": ["Wsj0MixVar", "sep_clean"]} | JunzheJosephZhu/MultiDecoderDPRNN | null | [
"asteroid",
"pytorch",
"audio",
"MultiDecoderDPRNN",
"dataset:Wsj0MixVar",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Jurgen/RALFY | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 29016523
- CO2 Emissions (in grams): 3.273303707756322
## Validation Metrics
- Loss: 0.6093757748603821
- Accuracy: 0.8333333333333334
- Macro F1: 0.7937936978656889
- Micro F1: 0.8333333333333334
- Weighted F1: 0.8239843785760546
- Macro Precision: 0.8988882462566673
- Micro Precision: 0.8333333333333334
- Weighted Precision: 0.8404982541824647
- Macro Recall: 0.7805142534864643
- Micro Recall: 0.8333333333333334
- Weighted Recall: 0.8333333333333334
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Jush/autonlp-bp-29016523
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Jush/autonlp-bp-29016523", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Jush/autonlp-bp-29016523", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["Jush/autonlp-data-bp"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 3.273303707756322} | JushBJJ/autonlp-bp-29016523 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:Jush/autonlp-data-bp",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | JustMuteAll/Riddle_man | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | JustMuteAll/bert-base-uncased-finetuned-swag | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | FidicBERT is a pre-trained language model for analyzing legal text. It is built by further training the RoBERTa language model in the legal domain on an extensive legal and contract corpus, and is intended for fine-tuning to classify and cluster contractual documents.
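A minimal fill-mask sketch (RoBERTa-style `<mask>` token; the example sentence is illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Jzz/FidicBERT")
print(unmasker("The Contractor shall <mask> the Employer against all claims."))
```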
| {} | Jzz/FidicBERT | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
This model is fine-tuned from [mt5-base](https://huggingface.co/google/mt5-base).
The model vocabulary is trimmed to roughly one third of the original by keeping the top 85,000 tokens in the training data. The code used to trim the vocabulary can be found [here](https://gist.github.com/K024/4a100a0f4f4b07208958e0f3244da6ad).
Usage:
```python
from transformers import (
T5Tokenizer,
MT5ForConditionalGeneration,
Text2TextGenerationPipeline,
)
path = "K024/mt5-zh-ja-en-trimmed"
pipe = Text2TextGenerationPipeline(
model=MT5ForConditionalGeneration.from_pretrained(path),
tokenizer=T5Tokenizer.from_pretrained(path),
)
sentence = "ja2zh: 吾輩は猫である。名前はまだ無い。"
res = pipe(sentence, max_length=100, num_beams=4)
res[0]['generated_text']
```
Training data:
```
wikimedia-en-ja
wikimedia-en-zh
wikimedia-ja-zh
wikititles-ja-en
wikititles-zh-en
wikimatrix-ja-zh
news-commentary-en-ja
news-commentary-en-zh
news-commentary-ja-zh
ted2020-en-ja
ted2020-en-zh
ted2020-ja-zh
```
License: [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
| {"language": ["zh", "ja", "en"], "license": "cc-by-nc-sa-4.0", "tags": ["translation"], "widget": [{"text": "ja2zh: \u543e\u8f29\u306f\u732b\u3067\u3042\u308b\u3002\u540d\u524d\u306f\u307e\u3060\u7121\u3044\u3002"}]} | K024/mt5-zh-ja-en-trimmed | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"translation",
"zh",
"ja",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | yes | {} | K3LLiN/Kellin | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |