modelId (stringlengths 4–81) | tags (sequence) | pipeline_tag (stringclasses, 17 values) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (stringlengths 51–438k)
---|---|---|---|---|---|---|
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
tags:
- generated_from_trainer
datasets:
- fdner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-chinese-finetuned-ner-v1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: fdner
type: fdner
args: fdner
metrics:
- name: Precision
type: precision
value: 0.981203007518797
- name: Recall
type: recall
value: 0.9886363636363636
- name: F1
type: f1
value: 0.9849056603773584
- name: Accuracy
type: accuracy
value: 0.9909536373916321
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-ner-v1
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the fdner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0413
- Precision: 0.9812
- Recall: 0.9886
- F1: 0.9849
- Accuracy: 0.9910
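A minimal inference sketch (the repository id below is a placeholder for wherever this checkpoint is published; the entity labels come from the saved config):
```python
from transformers import pipeline

# Placeholder repository id -- substitute the actual location of this checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/bert-base-chinese-finetuned-ner-v1",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level entities
)

for entity in ner("<a Chinese sentence from the fdner domain>"):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```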
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 8 | 2.0640 | 0.0 | 0.0 | 0.0 | 0.4323 |
| No log | 2.0 | 16 | 1.7416 | 0.0204 | 0.0227 | 0.0215 | 0.5123 |
| No log | 3.0 | 24 | 1.5228 | 0.0306 | 0.0265 | 0.0284 | 0.5456 |
| No log | 4.0 | 32 | 1.2597 | 0.0961 | 0.1591 | 0.1198 | 0.6491 |
| No log | 5.0 | 40 | 1.0273 | 0.1588 | 0.2159 | 0.1830 | 0.7450 |
| No log | 6.0 | 48 | 0.8026 | 0.2713 | 0.3258 | 0.2960 | 0.8208 |
| No log | 7.0 | 56 | 0.6547 | 0.36 | 0.4091 | 0.3830 | 0.8513 |
| No log | 8.0 | 64 | 0.5180 | 0.4650 | 0.5038 | 0.4836 | 0.8873 |
| No log | 9.0 | 72 | 0.4318 | 0.5139 | 0.5606 | 0.5362 | 0.9067 |
| No log | 10.0 | 80 | 0.3511 | 0.6169 | 0.6894 | 0.6512 | 0.9291 |
| No log | 11.0 | 88 | 0.2887 | 0.6691 | 0.6894 | 0.6791 | 0.9414 |
| No log | 12.0 | 96 | 0.2396 | 0.7042 | 0.7576 | 0.7299 | 0.9516 |
| No log | 13.0 | 104 | 0.2052 | 0.7568 | 0.8371 | 0.7950 | 0.9587 |
| No log | 14.0 | 112 | 0.1751 | 0.8303 | 0.8712 | 0.8503 | 0.9610 |
| No log | 15.0 | 120 | 0.1512 | 0.8464 | 0.8977 | 0.8713 | 0.9668 |
| No log | 16.0 | 128 | 0.1338 | 0.8759 | 0.9091 | 0.8922 | 0.9710 |
| No log | 17.0 | 136 | 0.1147 | 0.8959 | 0.9129 | 0.9043 | 0.9746 |
| No log | 18.0 | 144 | 0.1011 | 0.9326 | 0.9432 | 0.9379 | 0.9761 |
| No log | 19.0 | 152 | 0.0902 | 0.9251 | 0.9356 | 0.9303 | 0.9795 |
| No log | 20.0 | 160 | 0.0806 | 0.9440 | 0.9583 | 0.9511 | 0.9804 |
| No log | 21.0 | 168 | 0.0743 | 0.9586 | 0.9659 | 0.9623 | 0.9812 |
| No log | 22.0 | 176 | 0.0649 | 0.9511 | 0.9583 | 0.9547 | 0.9851 |
| No log | 23.0 | 184 | 0.0595 | 0.9591 | 0.9773 | 0.9681 | 0.9876 |
| No log | 24.0 | 192 | 0.0537 | 0.9625 | 0.9735 | 0.9680 | 0.9883 |
| No log | 25.0 | 200 | 0.0505 | 0.9701 | 0.9848 | 0.9774 | 0.9894 |
| No log | 26.0 | 208 | 0.0464 | 0.9737 | 0.9811 | 0.9774 | 0.9904 |
| No log | 27.0 | 216 | 0.0439 | 0.9737 | 0.9811 | 0.9774 | 0.9906 |
| No log | 28.0 | 224 | 0.0428 | 0.9812 | 0.9886 | 0.9849 | 0.9910 |
| No log | 29.0 | 232 | 0.0417 | 0.9812 | 0.9886 | 0.9849 | 0.9910 |
| No log | 30.0 | 240 | 0.0413 | 0.9812 | 0.9886 | 0.9849 | 0.9910 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
AnonymousSub/rule_based_twostagequadruplet_hier_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9208715596330275
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-sst2
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2195
- Accuracy: 0.9209
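A minimal inference sketch (the repository id is a placeholder; the returned label names depend on the saved config):
```python
from transformers import pipeline

# Placeholder repository id -- point this at wherever the fine-tuned checkpoint is hosted.
classifier = pipeline("text-classification", model="your-username/MiniLMv2-L12-H384-sst2")

print(classifier("A touching and beautifully shot film."))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- map labels to negative/positive via config.id2label
```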
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5576 | 1.0 | 264 | 0.2690 | 0.8979 |
| 0.2854 | 2.0 | 528 | 0.2077 | 0.9117 |
| 0.2158 | 3.0 | 792 | 0.2195 | 0.9209 |
| 0.1789 | 4.0 | 1056 | 0.2260 | 0.9163 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
language:
- ko
tags:
- KoBART
- BART
- Korean
- QG
- Question
- KorQuad
license: gpl
datasets:
- KorQuad 1.0
widget:
- text: "ν€μλ μΆμΆ: 5<unused1>1943λ
10μ λΉμ, λ°μλ‘ Bλ μ΄κΈ° κ°λμμ 250 MWμ μ λ ₯μ μμ°νλλ‘ μ€κ³λμλ€. 맨ν΄νΌ κ³νμ λ°μλ‘μ Aμμ FκΉμ§ μΌλ ¨λ²νΈλ₯Ό λΆμ¬νμλ€. μ΄ λ°μλ‘λ€μ λͺ¨λ ν μ₯μμ μ§μ΄μ‘λ€. λ°μλ‘μ 건μ€μλ 390 ν€μ κ°μ² μ΄ μμλμμΌλ©°, 13,300 mμ λ¬νλ 5λ§κ°μ μ½ν¬λ¦¬νΈ λ²½λμ μ¬μ©νμ¬ λμ΄ 37mμ λ¬νλ 건물μ 건μΆνμλ€. λ°μλ‘λ 1944λ
2μμ 착곡λμλ€. 1944λ
9μ 13μΌ μΊνν΄, λ§ν°μ΄μ€, λνμ¬μ ν¬λΌμ°ν¬λ κ·Έλ¦°μνΈμ λ μ€λ μ°μ¦, κ·Έλ¦¬κ³ μλ¦¬μ½ νλ₯΄λ―Έκ° μ§μΌλ³΄λ κ°μ΄λ° λ°μλ‘κ° κ°λλμλ€. λ°μλ‘μ μ°λ£λ νλ₯΄λ―Έκ° μ§μ μ§μ΄λ£μλ€. κ°λ μ΄κΈ° λ°μλ‘λ μ‘°μ κ°κ³Ό λκ°μ λ±μ λ¬Έμ κ° μμ΄ κ°λκ³Ό μ μ§λ₯Ό λ°λ³΅νμλ€."
example_title: "ν€μλ μΆμΆ Keyword Extraction"
- text: "μ§λ¬Έ μμ±: κ±°μ‘±μ μΈ μ ν<unused0>μμ§μλμ 1592λ
λΆν° 1598λ
κΉμ§ 2μ°¨μ κ±Έμ³μ μ°λ¦¬λλΌμ μΉ¨μ
ν μΌλ³Έκ³Όμ μΈμμ΄λ€. μμ²λ μλ ¨μ κ²ͺμΌλ©΄μλ λμ§κΈ΄ μ νμΌλ‘ μ΄κ²¨λ΄κ³ κ°μ±κ³Ό μκΈ°μ±μ°°μ λ°νμΌλ‘ λ―Όμ‘±μ μ΄λͺ
μ μλ‘ κ°μ²ν΄λκ° κ³κΈ°κ° λ μ μμ΄λ€. λͺ
μ μμ‘°λ μμμ§λ§ μΉλ¦¬μ κ°μ₯ ν° μλλ ₯μ κ±°μ‘±μ μΈ μ νμΌλ‘, μ΄μμ μ μν μ ν΄κΆμ μ₯μ
κ³Ό μ κ΅μμ λ΄κΈ°ν μλ³μ νλμ λΆλ¦¬νλ μ μ κ΅λ©΄μ μ νμν¨ κ²°μ μ μΈ νμ΄μλ€. μ΄ μ λμ λμμμμ κ΅μ μ μΈλ₯Ό ν¬κ² λ³νμν€λ κ²°κ³Όλ₯Ό κ°μ Έμ, λͺ
κ³Ό μ²μ΄ κ΅μ²΄λλ©΄μ λ³μνΈλμ΄λΌλ μλ ¨μ μκ³ νκΈ°λ νλ€. μ‘°μ μ΄ μμ§μλμ λΉνμ¬ μ μ μ΄κΈ° μ΄λ₯Ό κ°λΉνκΈ° μ΄λ €μΈ μ λλ‘ κ΅λ ₯μ΄ μ μ½ν΄μ§ κ²μ μλμ΄ μΌμ΄λ μ μ‘°λμ μ΄λ₯΄λ¬μ λΉλ‘―λ κ²μ μλμλ€. μ΄λ―Έ ν¨μ¬ μ΄μ λΆν° μ€μ μ κΈ°μ΄μ΄ λνλκΈ° μμνμλ€.μ μΉμ μΌλ‘λ μ°μ°κ΅° μ΄ν λͺ
μ’
λμ μ΄λ₯΄λ 4λ μ¬νμ νꡬ·μ¬λ¦Ό μΈλ ₯κ°μ κ³μλ μ μμΌλ‘ μΈν μ€μ μ κ³μ νΌλ, μ¬λ¦Ό μΈλ ₯μ΄ λμΈν μ μ‘° μ¦μ μ΄ν 격νλ λΉμ λ±μΌλ‘ μ μΉμ μ μμ μΈ μ΄μμ μννκΈ° μ΄λ €μ΄ μ§κ²½μ΄μλ€.κ΅°μ¬μ μΌλ‘λ μ‘°μ μ΄κΈ°μ μ€μΉλ κ΅λ°©μ²΄μ κ° λΆκ΄΄λμ΄ μΈμΉ¨μ λλΉνκΈ° μν λ°©μ±
μΌλ‘ κ΅°κ΅κΈ°λ¬΄λ₯Ό μ₯μ
νλ λΉλ³μ¬λΌλ ν©μ κΈ°κ΄μ μ€μΉνμΌλ, μ΄κ² λν μ μμ μΈ κΈ°λ₯μ λ°ννμ§ λͺ»νμλ€.μ΄μ΄λ λ¨μλΆνΈμ μΉ¨μ
μ λμ²νκΈ° μνμ¬ μλ§μλ³μ€μ μ£Όμ₯νκΈ°λ νμλ€. κ·Έλ¬λ κ΅κ° μ¬μ μ νμ½μΌλ‘ λ»μ μ΄λ£¨μ§ λͺ»νκ³ , μ¬νλ μ μ ν΄μ΄ν΄μ§κ³ λ¬Έμ½μ λΉ μ Έ κ·Όλ³Έμ μΈ κ΅κ° λ°©μ±
μ΄ ν립λμ§ λͺ»ν μ€μ μ΄μλ€.μ΄λ¬ν μ¦μ μΌλ³Έμμλ μλ‘μ΄ νμΈκ° μ κ°λκ³ μμλ€. μ¦, 15μΈκΈ° νλ° μμΈλμ μ λ°λΌ μΌλ³Έμλ μ λ½ μμΈλ€μ΄ λ€μ΄μ μ ν₯ μμ
λμκ° λ°μ λμ΄ μ’
λμ λ΄κ±΄μ μΈ μ§λ°° ννκ° μνλ°κΈ° μμνμλ€."
example_title: "μ§λ¬Έ μμ± Question Generation"
---
# Question-generation multitasking with KoBART
Based on [kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2). You can see the notebook on [Kaggle](https://www.kaggle.com/rycont/koquestionbart)
This model was trained on the following tasks in a multitask fashion in order to generate meaningful questions from Korean paragraphs:
- extracting keywords from a paragraph that can serve as answers
- generating questions that have a given keyword as their answer
## Usage
### Keyword extraction
**Input**
> [number of keywords]\<unused1>[paragraph]
**Output**
> [keyword 1]\<unused2>[keyword 2]\<unused2>[keyword n]...
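A minimal sketch of the keyword-extraction format above, assuming the checkpoint is loaded as a standard seq2seq model (the repository id is a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repository id -- replace with this model's actual location on the Hub.
repo = "your-username/kobart-question-generation"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

paragraph = "..."  # a Korean paragraph
prompt = "5" + "<unused1>" + paragraph  # [number of keywords]<unused1>[paragraph]

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)

# Whether <unused2> survives decoding depends on how the tokenizer registers it;
# adjust skip_special_tokens if the separators get stripped.
decoded = tokenizer.decode(output_ids[0], skip_special_tokens=False)
keywords = [k.strip() for k in decoded.split("<unused2>") if k.strip()]
print(keywords)
```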
### Question generation
**Input**
> [answer]\<unused0>[paragraph]
**Output**
> [question sentence] |
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-recipe-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-recipe-ar
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0529
- F1: 0.9856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4605 | 1.0 | 74 | 0.1084 | 0.9609 |
| 0.1105 | 2.0 | 148 | 0.0563 | 0.9809 |
| 0.0696 | 3.0 | 222 | 0.0500 | 0.9851 |
| 0.0512 | 4.0 | 296 | 0.0529 | 0.9856 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
AnonymousSub/rule_based_twostagetriplet_hier_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: MiniLMv2-L6-H384-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9197247706422018
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L6-H384-sst2
This model is a fine-tuned version of [nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2532
- Accuracy: 0.9197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5787 | 1.0 | 264 | 0.3496 | 0.8624 |
| 0.3413 | 2.0 | 528 | 0.2599 | 0.8991 |
| 0.2716 | 3.0 | 792 | 0.2651 | 0.9048 |
| 0.2343 | 4.0 | 1056 | 0.2532 | 0.9197 |
| 0.2165 | 5.0 | 1320 | 0.2636 | 0.9151 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
AnonymousSub/rule_based_twostagetriplet_hier_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: MiniLMv2-L6-H768-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9426605504587156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L6-H768-sst2
This model is a fine-tuned version of [nreimers/MiniLMv2-L6-H768-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H768-distilled-from-RoBERTa-Large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2013
- Accuracy: 0.9427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4734 | 1.0 | 264 | 0.2046 | 0.9243 |
| 0.2399 | 2.0 | 528 | 0.1912 | 0.9346 |
| 0.1791 | 3.0 | 792 | 0.1943 | 0.9335 |
| 0.1442 | 4.0 | 1056 | 0.2103 | 0.9369 |
| 0.1217 | 5.0 | 1320 | 0.2013 | 0.9427 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
AnonymousSub/specter-bert-model | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-recipe-gk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-recipe-gk
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1505
- F1: 0.9536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.292 | 1.0 | 258 | 0.1525 | 0.9565 |
| 0.1231 | 2.0 | 516 | 0.1348 | 0.9619 |
| 0.0787 | 3.0 | 774 | 0.1408 | 0.9607 |
| 0.0655 | 4.0 | 1032 | 0.1505 | 0.9536 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
AnonymousSub/specter-bert-model_copy | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-recipe-all
results: []
widget:
- text: "1 sheet of frozen puff pastry (thawed)"
- text: "1/2 teaspoon fresh thyme, minced"
- text: "2-3 medium tomatoes"
- text: "1 petit oignon rouge"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-recipe-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the recipe ingredient [NER dataset](https://github.com/cosylabiiit/recipe-knowledge-mining) from the paper [A Named Entity Based Approach to Model Recipes](https://arxiv.org/abs/2004.12184) (using both the `gk` and `ar` datasets).
It achieves the following results on the evaluation set:
- Loss: 0.1169
- F1: 0.9672
On the test set it obtains an F1 of 0.9615, slightly above the CRF used in the paper.
## Model description
Predicts the tag of each token in an ingredient string.
| Tag | Significance | Example |
| --- | --- | --- |
| NAME | Name of Ingredient | salt, pepper |
| STATE | Processing State of Ingredient. | ground, thawed |
| UNIT | Measuring unit(s). | gram, cup |
| QUANTITY | Quantity associated with the unit(s). | 1, 1 1/2 , 2-4 |
| SIZE | Portion sizes mentioned. | small, large |
| TEMP | Temperature applied prior to cooking. | hot, frozen |
| DF (DRY/FRESH) | Whether the ingredient is dry or fresh. | dry, fresh |
## Intended uses & limitations
* Only trained on ingredient strings.
* Tags sub-tokens; the tag should be propagated to the whole word (see the sketch after this list).
* Works best with pre-tokenisation that splits off symbols (such as parentheses) and numbers (e.g. 50g -> 50 g).
* Typically only detects the first ingredient if there are multiple.
* Only trained on two American English data sources.
* Tags TEMP and DF have very little training data.
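A minimal sketch of word-level tagging that reflects the notes above; the repository id is a placeholder and the pre-tokenisation regexes are illustrative, not the ones used in training:
```python
import re
from transformers import pipeline

# Placeholder repository id -- substitute the actual location of this checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-finetuned-recipe-all",
    aggregation_strategy="first",  # propagate the first sub-token's tag to the whole word
)

ingredient = "50g unsalted butter (melted)"
# Illustrative pre-tokenisation: split numbers from units and detach parentheses (50g -> 50 g).
ingredient = re.sub(r"(\d)([A-Za-z])", r"\1 \2", ingredient)
ingredient = re.sub(r"([()])", r" \1 ", ingredient)

for entity in ner(ingredient):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```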
## Training and evaluation data
Both the `ar` (AllRecipes.com) and `gk` (FOOD.com) datasets were obtained from the TSVs in the authors' [repository](https://github.com/cosylabiiit/recipe-knowledge-mining).
## Training procedure
It follows the overall procedure from Chapter 4 of [Natural Language Processing with Transformers](https://www.oreilly.com/library/view/natural-language-processing/9781098103231/) by Tunstall, von Werra and Wolf.
See the [training notebook](https://github.com/EdwardJRoss/nlp_transformers_exercises/blob/master/notebooks/ch4-ner-recipe-stanford-crf.ipynb) for details.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2529 | 1.0 | 331 | 0.1303 | 0.9592 |
| 0.1164 | 2.0 | 662 | 0.1224 | 0.9640 |
| 0.0904 | 3.0 | 993 | 0.1156 | 0.9671 |
| 0.0585 | 4.0 | 1324 | 0.1169 | 0.9672 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
AnonymousSub/specter-bert-model_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: afl-3.0
---
## Description
SqueezeNet from the PyTorch model zoo, pretrained on ImageNet and fine-tuned on the scenic landscape dataset from Kaggle: https://www.kaggle.com/datasets/arnaud58/landscape-pictures
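A sketch of how such a fine-tune is typically set up with torchvision; the two-class head, optimizer settings, and data handling below are assumptions rather than the exact training script:
```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained SqueezeNet and replace the final 1x1 classifier conv
# with a 2-class head (scenic vs. non-scenic) -- an assumed label scheme.
model = models.squeezenet1_1(pretrained=True)
model.classifier[1] = nn.Conv2d(512, 2, kernel_size=1)
model.num_classes = 2

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# ... standard training loop over the scenic / non-scenic image folders goes here ...
```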
## Results
Trained on 8K samples and tested on 120+ non-overlapping samples.
- Accuracy: 0.978261
- F1-score: 0.978417
|
AnonymousSub/unsup-consert-base | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ```bash
python run_squad.py \
--model_name_or_path google/canine-c \
--do_train \
--do_eval \
--per_gpu_train_batch_size 1 \
--per_gpu_eval_batch_size 1 \
--gradient_accumulation_steps 128 \
--learning_rate 3e-5 \
--num_train_epochs 3 \
--max_seq_length 1024 \
--doc_stride 128 \
--max_answer_length 240 \
--output_dir canine-c-squad \
--model_type bert
```
```json
{
"_name_or_path": "google/canine-c",
"architectures": [
"CanineForQuestionAnswering"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 57344,
"downsampling_rate": 4,
"eos_token_id": 57345,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"local_transformer_stride": 128,
"max_position_embeddings": 16384,
"model_type": "canine",
"num_attention_heads": 12,
"num_hash_buckets": 16384,
"num_hash_functions": 8,
"num_hidden_layers": 12,
"pad_token_id": 0,
"torch_dtype": "float32",
"transformers_version": "4.19.0.dev0",
"type_vocab_size": 16,
"upsampling_kernel_size": 4,
"use_cache": true
}
```
Evaluation results: `{'exact': 58.893093661305585, 'f1': 72.18823344945899}` |
AnonymousSub/unsup-consert-base_copy | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2022-04-08T14:33:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: TSC_finetuning-sentiment-movie-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TSC_finetuning-sentiment-movie-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1480
- Accuracy: 0.9578
- F1: 0.9757
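The checkpoint can be queried directly with the sequence-classification head; the repository id below is a placeholder and the label mapping comes from the saved config:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repository id -- substitute the actual location of this checkpoint.
repo = "your-username/TSC_finetuning-sentiment-movie-model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("An absolute joy to watch from start to finish.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
pred = int(probs.argmax())
print(model.config.id2label[pred], float(probs[pred]))
```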
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
AnonymousSub/unsup-consert-base_copy_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | 2022-04-08T14:35:45Z | # 1. Deep Learning for Vision
Upside down detector: Train a model to detect if images are upside down
* Pick a dataset of natural images (we suggest looking at datasets on the Hugging Face Hub)
* Synthetically turn some of the images upside down. Create a training and test set.
* Build a neural network (using TensorFlow, PyTorch, or any framework you like)
* Train it to classify image orientation until a reasonable accuracy is reached
* Upload the model to the Hugging Face Hub, and add a link to your model below.
* Look at some of the images that were classified incorrectly. Please explain what you might do to improve your model performance on these images in the future (you do not need to implement these suggestions) |
AnonymousSub/unsup-consert-emanuals | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: FakevsRealNews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FakevsRealNews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
- Precision: 1.0
- Recall: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:---------:|:------:|
| 0.0554 | 1.0 | 1956 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0006 | 2.0 | 3912 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 3.0 | 5868 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
AnonymousSub/unsup-consert-papers | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2022-04-08T14:51:17Z | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
--- |
AnonymousSubmission/pretrained-model-1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-08T15:14:31Z | ---
language: en
thumbnail: http://www.huggingtweets.com/lilpeeplyric/1649430909105/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1445263525878902787/yW8p2-e__400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">lil peep lyrics bot</div>
<div style="text-align: center; font-size: 14px;">@lilpeeplyric</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from lil peep lyrics bot.
| Data | lil peep lyrics bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 0 |
| Tweets kept | 3250 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jgq3lf6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lilpeeplyric's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1lbjza1d) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1lbjza1d/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lilpeeplyric')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Anonymreign/savagebeta | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: avialfont/dummy-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# avialfont/dummy-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.6755
- Validation Loss: 3.8033
- Epoch: 2
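A minimal TensorFlow inference sketch; it assumes TF weights are available for this checkpoint (consistent with the Keras training above) and that the intended use is review summarisation, which the card does not state explicitly:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "avialfont/dummy-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo)

review = "I bought this for my daughter and she loves it. Great quality for the price."
inputs = tokenizer(review, return_tensors="tf", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=30, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```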
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 3627, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2942 | 4.4915 | 0 |
| 6.2878 | 3.9207 | 1 |
| 5.6755 | 3.8033 | 2 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Anthos23/FS-distilroberta-fine-tuned | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"has_space"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
license: "cc-by-nc-4.0"
tags:
- code
- python
- javascript
---
# InCoder 1B
A 1B parameter decoder-only Transformer model trained on code using a causal-masked objective, which allows inserting/infilling code as well as standard left-to-right generation.
The model was trained on public open-source repositories with a permissive, non-copyleft license (Apache 2.0, MIT, BSD-2 or BSD-3) from GitHub and GitLab, as well as StackOverflow. The repositories primarily contained Python and JavaScript, but also included code from 28 languages, as well as StackOverflow.
For more information, see our:
- [Demo](https://huggingface.co/spaces/facebook/incoder-demo)
- [Project site](https://sites.google.com/view/incoder-code-models)
- [Examples](https://sites.google.com/view/incoder-code-models/home/examples)
- [Paper](https://arxiv.org/abs/2204.05999)
A larger, 6B, parameter model is also available at [facebook/incoder-6B](https://huggingface.co/facebook/incoder-6B).
## Requirements
`pytorch`, `tokenizers`, and `transformers`. Our model requires HF's tokenizers >= 0.12.1, due to changes in the pretokenizer.
```
pip install torch
pip install "tokenizers>=0.12.1"
pip install transformers
```
## Usage
See [https://github.com/dpfried/incoder](https://github.com/dpfried/incoder) for example code.
### Model
`model = AutoModelForCausalLM.from_pretrained("facebook/incoder-1B")`
### Tokenizer
`tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-1B")`
(Note: the incoder-1B and incoder-6B tokenizers are identical, so 'facebook/incoder-6B' could also be used.)
When calling `tokenizer.decode`, it's important to pass `clean_up_tokenization_spaces=False` to avoid removing spaces after punctuation. For example:
`tokenizer.decode(tokenizer.encode("from ."), clean_up_tokenization_spaces=False)`
(Note: encoding prepends the `<|endoftext|>` token, as this marks the start of a document to our model. This token can be removed from the decoded output by passing `skip_special_tokens=True` to `tokenizer.decode`.)
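Putting the pieces above together, a minimal left-to-right generation sketch (the prompt and sampling settings are illustrative, not recommended defaults):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-1B")
model = AutoModelForCausalLM.from_pretrained("facebook/incoder-1B")

prompt = 'def count_lines(filename):\n    """Count the number of lines in the file."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_length=128, do_sample=True, top_p=0.95, temperature=0.2)

# Keep spacing intact, as noted above.
print(tokenizer.decode(output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
```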
## License
CC-BY-NC 4.0
## Credits
The model was developed by Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer and Mike Lewis.
Thanks to Lucile Saulnier, Leandro von Werra, Nicolas Patry, Suraj Patil, Omar Sanseviero, and others at HuggingFace for help with the model release, and to Naman Goyal and Stephen Roller for the code our demo was based on! |
Anthos23/my-awesome-model | [
"pytorch",
"tf",
"roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: augmented_Squad_Translated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# augmented_Squad_Translated
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5251
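A minimal inference sketch with the question-answering pipeline (the repository id below is a placeholder for wherever this checkpoint is published):
```python
from transformers import pipeline

# Placeholder repository id -- substitute the actual location of this checkpoint.
qa = pipeline("question-answering", model="your-username/augmented_Squad_Translated")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], round(result["score"], 3))
```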
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1154 | 1.0 | 10835 | 0.5251 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Anthos23/test_trainer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-08T16:11:26Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1317183233495388160/nLbBT6WF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">3bkreno</div>
<div style="text-align: center; font-size: 14px;">@notsorob</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 3bkreno.
| Data | 3bkreno |
| --- | --- |
| Tweets downloaded | 26419 |
| Retweets | 111 |
| Short tweets | -8796 |
| Tweets kept | 8796 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1l7p1yze/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @notsorob's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ypaq5o5y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ypaq5o5y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/notsorob')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Anubhav23/indianlegal | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-08T16:38:22Z | ---
license: cc-by-4.0
---
# BART-base fine-tuned on NaturalQuestions for **Question Generation**
[BART model](https://arxiv.org/pdf/1910.13461.pdf) trained for Question Generation in an unsupervised manner using the [Back-Training](https://arxiv.org/pdf/2104.08801.pdf) algorithm (Kulshreshtha et al., EMNLP 2021). The training data are the unaligned questions and passages from the [MLQuestions dataset](https://github.com/McGill-NLP/MLQuestions/tree/main/data).
## Details of Back-Training
The Back-Training algorithm was presented in [Back-Training excels Self-Training at Unsupervised Domain Adaptation
of Question Generation and Passage Retrieval](https://arxiv.org/pdf/2104.08801.pdf) by *Devang Kulshreshtha, Robert Belfer, Iulian Vlad Serban, Siva Reddy* at EMNLP 2021. Here is the abstract:
In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA) from source to target domain. While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between the target domain and synthetic data distribution, and reduces model overfitting to the source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU4 points on generation, and 17.6% top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.
## Model training
The training script can be found [here](https://github.com/McGill-NLP/MLQuestions/blob/main/UDA-BackTraining.sh)
## Model in Action
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
#Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("geekydevu/bart-qg-mlquestions-backtraining")
#Load the model
model = AutoModelForSeq2SeqLM.from_pretrained("geekydevu/bart-qg-mlquestions-backtraining")
```
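A follow-on generation sketch; the passage formatting and generation settings are assumptions, since the exact preprocessing is defined by the training scripts linked above:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("geekydevu/bart-qg-mlquestions-backtraining")
model = AutoModelForSeq2SeqLM.from_pretrained("geekydevu/bart-qg-mlquestions-backtraining")

passage = (
    "Gradient descent is an optimization algorithm that iteratively updates model "
    "parameters in the direction of the negative gradient of the loss function."
)
inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)
question_ids = model.generate(**inputs, max_length=32, num_beams=5, early_stopping=True)
print(tokenizer.decode(question_ids[0], skip_special_tokens=True))
```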
## Citation
If you want to cite this model you can use this:
```bibtex
@inproceedings{kulshreshtha-etal-2021-back,
title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval",
author = "Kulshreshtha, Devang and
Belfer, Robert and
Serban, Iulian Vlad and
Reddy, Siva",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.566",
pages = "7064--7078",
abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.",
}
```
> Created by [Devang Kulshreshtha](https://geekydevu.netlify.app/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
Anubhav23/model_name | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: vit-airplanes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-airplanes
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0152
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
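As a rough illustration of intended use, a minimal image-classification sketch is shown below; the checkpoint path is a placeholder and the input image is illustrative.
```python
from transformers import pipeline

# Placeholder path: substitute the actual Hub id or local checkpoint directory
classifier = pipeline("image-classification", model="path/to/vit-airplanes")

# Any local image file or URL works as input
predictions = classifier("airplane.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```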
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0165 | 2.38 | 100 | 0.0152 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
gaurishhs/API | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- huggan
- gan
# See a list of available tags here:
# https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12
# task: unconditional-image-generation or conditional-image-generation or image-to-image
license: mit
---
# MyModelName
## Model description
Describe the model here (what it does, what it's used for, etc.)
## Intended uses & limitations
#### How to use
```python
# You can include sample code which will be formatted
```
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
## Generated Images
You can embed local or remote images using ``
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020}
}
``` |
Apisate/DialoGPT-small-jordan | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: TestMeanFraction2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TestMeanFraction2
This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3967
- Matthews Correlation: 0.2537
## Model description
More information needed
## Intended uses & limitations
"La panique totale" Cette femme trouve une Γ©norme araignΓ©e suspendue Γ sa douche.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 0.13 | 50 | 1.1126 | 0.1589 |
| No log | 0.25 | 100 | 1.0540 | 0.1884 |
| No log | 0.38 | 150 | 1.1533 | 0.0818 |
| No log | 0.51 | 200 | 1.0676 | 0.1586 |
| No log | 0.64 | 250 | 0.9949 | 0.2280 |
| No log | 0.76 | 300 | 1.0343 | 0.2629 |
| No log | 0.89 | 350 | 1.0203 | 0.2478 |
| No log | 1.02 | 400 | 1.0041 | 0.2752 |
| No log | 1.15 | 450 | 1.0808 | 0.2256 |
| 1.023 | 1.27 | 500 | 1.0029 | 0.2532 |
| 1.023 | 1.4 | 550 | 1.0204 | 0.2508 |
| 1.023 | 1.53 | 600 | 1.1377 | 0.1689 |
| 1.023 | 1.65 | 650 | 1.0499 | 0.2926 |
| 1.023 | 1.78 | 700 | 1.0441 | 0.2474 |
| 1.023 | 1.91 | 750 | 1.0279 | 0.2611 |
| 1.023 | 2.04 | 800 | 1.1511 | 0.2804 |
| 1.023 | 2.16 | 850 | 1.2381 | 0.2512 |
| 1.023 | 2.29 | 900 | 1.3340 | 0.2385 |
| 1.023 | 2.42 | 950 | 1.4372 | 0.2842 |
| 0.7325 | 2.54 | 1000 | 1.3967 | 0.2537 |
| 0.7325 | 2.67 | 1050 | 1.4272 | 0.2624 |
| 0.7325 | 2.8 | 1100 | 1.3869 | 0.1941 |
| 0.7325 | 2.93 | 1150 | 1.4983 | 0.2063 |
| 0.7325 | 3.05 | 1200 | 1.4959 | 0.2409 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0a0+0aef44c
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Aplinxy9plin/toxic-detection-rus | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample-gpt-small-10epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample-gpt-small-10epoch
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0943
## Model description
More information needed
## Intended uses & limitations
More information needed
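The name suggests a GPT-2 model fine-tuned on a sample of the CodeParrot Python-code corpus. Under that assumption, a minimal text-generation sketch with a placeholder checkpoint path and illustrative decoding settings:
```python
from transformers import pipeline

# Placeholder path: substitute the actual Hub id or local checkpoint directory
generator = pipeline("text-generation", model="path/to/codeparrot-ds-sample-gpt-small-10epoch")

prompt = "def mean(values):"
completion = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(completion[0]["generated_text"])
```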
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.29 | 0.94 | 1000 | 2.8452 |
| 2.3155 | 1.88 | 2000 | 2.3659 |
| 1.8817 | 2.82 | 3000 | 2.2085 |
| 1.6245 | 3.77 | 4000 | 2.1260 |
| 1.4314 | 4.71 | 5000 | 2.0705 |
| 1.2698 | 5.65 | 6000 | 2.0603 |
| 1.1281 | 6.59 | 7000 | 2.0599 |
| 1.0108 | 7.53 | 8000 | 2.0769 |
| 0.9167 | 8.47 | 9000 | 2.0870 |
| 0.8551 | 9.42 | 10000 | 2.0943 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Appolo/TestModel | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: keras
tags:
- gan
- dcgan
- huggan
- tensorflow
- unconditional-image-generation
---
## Model description
Simple DCGAN implementation in TensorFlow to generate CryptoPunks.
## Generated samples
<img src="https://github.com/dimitreOliveira/cryptogans/raw/main/assets/gen_samples.png" width="350" height="350">
Project repository: [CryptoGANs](https://github.com/dimitreOliveira/cryptogans).
## Usage
You can play with the HuggingFace [space demo](https://huggingface.co/spaces/huggan/crypto-gan).
Or try it yourself
```python
import tensorflow as tf
import matplotlib.pyplot as plt
from huggingface_hub import from_pretrained_keras
seed = 42
n_images = 36
codings_size = 100
generator = from_pretrained_keras("huggan/crypto-gan")
def generate(generator, seed):
    # Sample latent noise vectors and run the generator in inference mode
    noise = tf.random.normal(shape=[n_images, codings_size], seed=seed)
    generated_images = generator(noise, training=False)
    # Plot the generated CryptoPunks in a 6x6 grid and save them to disk
    fig = plt.figure(figsize=(10, 10))
    for i in range(generated_images.shape[0]):
        plt.subplot(6, 6, i + 1)
        plt.imshow(generated_images[i, :, :, :])
        plt.axis('off')
    plt.savefig("samples.png")

generate(generator, seed)
```
## Training data
For training, I used the 10,000 CryptoPunks images.
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
ArBert/albert-base-v2-finetuned-ner-agglo | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: mit
datasets:
- damlab/uniprot
metrics:
- accuracy
widget:
- text: 'involved_in GO:0006468 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'
example_title: 'Function'
---
# GO-Language model
## Table of Contents
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
This model was built as a way to encode the Gene Ontology definition of a protein as a vector representation.
It was trained on a collection of gene-ontology terms from model organisms.
Each function was sorted by ID number and combined with its annotation description, i.e. `is_a`, `enables`, `located_in`, etc.
The model is tokenized such that each description and GO term is its own token.
This is intended to be used as a translation model between PROT-BERT and GO-Language.
That type of translation model will be useful for predicting the function of novel genes.
## Model Description
This model was trained using the damlab/uniprot dataset on the `go` field with 256 token chunks and a 15% mask rate.
## Intended Uses & Limitations
This model is a useful encapsulation of gene ontology functions.
It allows both an exploration of gene-level similarities and comparisons between functional terms.
## How to use
As this is a BERT-style Masked Language learner, it can be used to determine the most likely token at a masked position.
```python
from transformers import pipeline
unmasker = pipeline("fill-mask", model="damlab/GO-language")
unmasker("involved_in [MASK] involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372")
[{'score': 0.1040298342704773,
'token': 103,
'token_str': 'GO:0002250',
'sequence': 'involved_in GO:0002250 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'},
{'score': 0.018045395612716675,
'token': 21,
'token_str': 'GO:0005576',
'sequence': 'involved_in GO:0005576 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'},
{'score': 0.015035462565720081,
'token': 50,
'token_str': 'GO:0000139',
'sequence': 'involved_in GO:0000139 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'},
{'score': 0.01181247178465128,
'token': 37,
'token_str': 'GO:0007165',
'sequence': 'involved_in GO:0007165 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'},
{'score': 0.01000668853521347,
'token': 14,
'token_str': 'GO:0005737',
'sequence': 'involved_in GO:0005737 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'}
]
```
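Since the stated goal is to encode GO definitions as vectors, the sketch below pulls a sentence-level embedding out of the encoder; mean-pooling the last hidden state is an illustrative choice, not a documented recommendation for this model.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("damlab/GO-language")
model = AutoModel.from_pretrained("damlab/GO-language")

text = "involved_in GO:0006468 involved_in GO:0007165 located_in GO:0042470"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single vector (illustrative pooling choice)
embedding = outputs.last_hidden_state.mean(dim=1).squeeze(0)
print(embedding.shape)
```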
## Training Data
The model was trained from a random initialization on the [damlab/uniprot](https://huggingface.co/datasets/damlab/uniprot) dataset.
The Gene Ontology functions were sorted (by ID number) along with their annotating terms.
## Training Procedure
### Preprocessing
All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
### Training
Training was performed with the HuggingFace training module using the MaskedLM data loader with a 15% masking rate. The learning rate was set at E-5 with 50K warm-up steps and a cosine_with_restarts learning rate schedule; training continued until 3 consecutive epochs did not improve the loss on the held-out dataset.
## BibTeX Entry and Citation Info
[More Information Needed]
|
ArBert/albert-base-v2-finetuned-ner-gmm | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-04-08T18:37:04Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
ArBert/albert-base-v2-finetuned-ner-kmeans | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-04-08T18:46:39Z | # poetry-generation-nextline-mbart-gut-en-single
* `nextline`: generates a poem line from previous line(s)
* `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
* `gut`: trained on Project Gutenberg data
* `en`: English language
* `single`: uses only last poem line as input for generation |
Augustab/distilbert-base-uncased-finetuned-cola | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-all-translated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-all-translated
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2067 | 1.0 | 6319 | 0.5775 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Axon/resnet50-v1 | [
"dataset:ImageNet",
"arxiv:1512.03385",
"Axon",
"Elixir",
"license:apache-2.0"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-09T19:16:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-wikihow_3epoch_b4_lr3e-5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 26.1071
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikihow_3epoch_b4_lr3e-5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4351
- Rouge1: 26.1071
- Rouge2: 9.3627
- Rougel: 22.0825
- Rougelsum: 25.4514
- Gen Len: 18.474
## Model description
More information needed
## Intended uses & limitations
More information needed
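As a rough illustration of intended use, a minimal summarization sketch is shown below; the checkpoint path is a placeholder, and the `summarize:` prefix plus decoding settings follow the usual T5 convention rather than anything documented in this card.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder path: substitute the actual Hub id or local checkpoint directory
model_id = "path/to/t5-small-finetuned-wikihow_3epoch_b4_lr3e-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Gather your ingredients. Preheat the oven to 180C. Mix the flour, sugar and eggs..."
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```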
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.9216 | 0.13 | 5000 | 2.6385 | 23.8039 | 7.8863 | 20.0109 | 23.0802 | 18.3481 |
| 2.8158 | 0.25 | 10000 | 2.5884 | 24.2567 | 8.2003 | 20.438 | 23.5325 | 18.3833 |
| 2.7743 | 0.38 | 15000 | 2.5623 | 24.8471 | 8.3768 | 20.8711 | 24.1114 | 18.2901 |
| 2.7598 | 0.51 | 20000 | 2.5368 | 25.1566 | 8.6721 | 21.1896 | 24.4558 | 18.3561 |
| 2.7192 | 0.64 | 25000 | 2.5220 | 25.3477 | 8.8106 | 21.3799 | 24.6742 | 18.3108 |
| 2.7207 | 0.76 | 30000 | 2.5114 | 25.5912 | 8.998 | 21.5508 | 24.9344 | 18.3445 |
| 2.7041 | 0.89 | 35000 | 2.4993 | 25.457 | 8.8644 | 21.4516 | 24.7965 | 18.4354 |
| 2.687 | 1.02 | 40000 | 2.4879 | 25.5886 | 8.9766 | 21.6794 | 24.9512 | 18.4035 |
| 2.6652 | 1.14 | 45000 | 2.4848 | 25.7367 | 9.078 | 21.7096 | 25.0924 | 18.4328 |
| 2.6536 | 1.27 | 50000 | 2.4761 | 25.7368 | 9.1609 | 21.729 | 25.0866 | 18.3117 |
| 2.6589 | 1.4 | 55000 | 2.4702 | 25.7738 | 9.1413 | 21.7492 | 25.114 | 18.4862 |
| 2.6384 | 1.53 | 60000 | 2.4620 | 25.7433 | 9.1356 | 21.8198 | 25.0896 | 18.489 |
| 2.6337 | 1.65 | 65000 | 2.4595 | 26.0919 | 9.2605 | 21.9447 | 25.4065 | 18.4083 |
| 2.6375 | 1.78 | 70000 | 2.4557 | 26.0912 | 9.3469 | 22.0182 | 25.4428 | 18.4133 |
| 2.6441 | 1.91 | 75000 | 2.4502 | 26.1366 | 9.3143 | 22.058 | 25.4673 | 18.4972 |
| 2.6276 | 2.03 | 80000 | 2.4478 | 25.9929 | 9.2464 | 21.9271 | 25.3263 | 18.469 |
| 2.6062 | 2.16 | 85000 | 2.4467 | 26.0465 | 9.3166 | 22.0342 | 25.3998 | 18.3777 |
| 2.6126 | 2.29 | 90000 | 2.4407 | 26.1953 | 9.3848 | 22.1148 | 25.5161 | 18.467 |
| 2.6182 | 2.42 | 95000 | 2.4397 | 26.1331 | 9.3626 | 22.1076 | 25.4627 | 18.4413 |
| 2.6041 | 2.54 | 100000 | 2.4375 | 26.1301 | 9.3567 | 22.0869 | 25.465 | 18.4929 |
| 2.5996 | 2.67 | 105000 | 2.4367 | 26.0956 | 9.3314 | 22.063 | 25.4242 | 18.5074 |
| 2.6144 | 2.8 | 110000 | 2.4355 | 26.1764 | 9.4157 | 22.1231 | 25.5175 | 18.4729 |
| 2.608 | 2.93 | 115000 | 2.4351 | 26.1071 | 9.3627 | 22.0825 | 25.4514 | 18.474 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Ayham/albert_gpt2_summarization_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- huggan
- gan
- unconditional-image-generation
datasets:
- huggan/few-shot-fauvism-still-life
# See a list of available tags here:
# https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12
# task: unconditional-image-generation or conditional-image-generation or image-to-image
license: mit
---
# Generate fauvism still life image using FastGAN
## Model description
The [FastGAN model](https://arxiv.org/abs/2101.04775) is a Generative Adversarial Network (GAN) that can be trained on a small amount of high-fidelity images at minimal computing cost. Using a skip-layer channel-wise excitation module and a self-supervised discriminator trained as a feature-encoder, the model is able to converge after a few hours of training on datasets of either 100 high-quality images or 1000 images.
This model was trained on a dataset of 124 high-quality Fauvism painting images.
#### How to use
```python
# Clone this repository first (shell command); it provides the Generator class used below:
#   git clone https://huggingface.co/huggan/fastgan-few-shot-fauvism-still-life/
import torch
from torchvision.utils import save_image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_generator(model_name_or_path):
    # Generator is defined in the cloned repository's modeling code
    generator = Generator(in_channels=256, out_channels=3)
    generator = generator.from_pretrained(model_name_or_path, in_channels=256, out_channels=3)
    _ = generator.eval()
    return generator

def _denormalize(input: torch.Tensor) -> torch.Tensor:
    # Map generator outputs from [-1, 1] back to the [0, 255] pixel range
    return (input * 127.5) + 127.5

# Load generator
generator = load_generator("huggan/fastgan-few-shot-fauvism-still-life").to(device)

# Generate a random noise image
noise = torch.zeros(1, 256, 1, 1, device=device).normal_(0.0, 1.0)

with torch.no_grad():
    gan_images, _ = generator(noise)

gan_images = _denormalize(gan_images.detach())
save_image(gan_images, "sample.png", nrow=1, normalize=True)
```
#### Limitations and bias
* Converges faster and better with small datasets (fewer than 1000 samples)
## Training data
[few-shot-fauvism-still-life](https://huggingface.co/datasets/huggan/few-shot-fauvism-still-life)
## Generated Images

### BibTeX entry and citation info
```bibtex
@article{FastGAN,
title={Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis},
author={Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal},
journal={ICLR},
year={2021}
}
``` |
Ayham/distilbert_distilgpt2_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2022-04-10T02:20:24Z | ---
language:
- ne
tags:
- speech-to-text
---
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load pretrained model
    processor = Wav2Vec2Processor.from_pretrained("shniranjan/wav2vec2-large-xlsr-1b-nepali")
    model = Wav2Vec2ForCTC.from_pretrained("shniranjan/wav2vec2-large-xlsr-1b-nepali")

    # load audio (wav2vec2 models typically expect 16 kHz audio)
    audio_input, sample_rate = sf.read(wav_file)

    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

    # INFERENCE
    # retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)

    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)

parse_transcription("sample.wav")  # path to a local WAV file
``` |
BatuhanYilmaz/mlm-finetuned-imdb | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-10T08:52:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-1
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0133 | 1.0 | 1388 | 2.8166 |
| 2.8418 | 2.0 | 2776 | 2.7113 |
| 2.7683 | 3.0 | 4164 | 2.6634 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Baybars/debateGPT | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-10T10:06:19Z | ---
language: en
thumbnail: http://www.huggingtweets.com/fitfounder/1649585355118/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1279092409587163137/eN82f_KT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dan Go</div>
<div style="text-align: center; font-size: 14px;">@fitfounder</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dan Go.
| Data | Dan Go |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 47 |
| Short tweets | 653 |
| Tweets kept | 2550 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3lrz0j2b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fitfounder's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8hmcij96) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8hmcij96/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/fitfounder')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Beatriz/model_name | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-10T10:36:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-all-squad_que_translated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-all-squad_que_translated
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5174
## Model description
More information needed
## Intended uses & limitations
More information needed
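The model name suggests an extractive question-answering fine-tune on (translated) SQuAD-style data. Under that assumption, a minimal sketch with a placeholder checkpoint path:
```python
from transformers import pipeline

# Placeholder path: substitute the actual Hub id or local checkpoint directory
qa = pipeline("question-answering", model="path/to/bert-all-squad_que_translated")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```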
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0746 | 1.0 | 18011 | 0.5174 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.11.6
|
Bee-Garbs/DialoGPT-cartman-small | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8590909090909091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1380
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
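As a rough illustration of intended use, a minimal token-classification sketch for this German PAN-X fine-tune; the checkpoint path is a placeholder and the example sentence is illustrative.
```python
from transformers import pipeline

# Placeholder path: substitute the actual Hub id or local checkpoint directory
ner = pipeline(
    "token-classification",
    model="path/to/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```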
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2642 | 1.0 | 525 | 0.1624 | 0.8251 |
| 0.1315 | 2.0 | 1050 | 0.1445 | 0.8508 |
| 0.0832 | 3.0 | 1575 | 0.1380 | 0.8591 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Bee-Garbs/DialoGPT-real-cartman-small | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2022-04-10T10:46:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2218
- Accuracy: 0.9205
- F1: 0.9208
## Model description
More information needed
## Intended uses & limitations
More information needed
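As a rough illustration of intended use, a minimal sketch of running the classifier; the checkpoint path is a placeholder, and since the training dataset is not named here, the emitted label names are simply whatever that dataset used.
```python
from transformers import pipeline

# Placeholder path: substitute the actual Hub id or local checkpoint directory
classifier = pipeline("text-classification", model="path/to/distilbert-base-uncased-finetuned-emotion")

print(classifier("I am thrilled that the fine-tuning finally converged!"))
```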
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8262 | 1.0 | 250 | 0.3223 | 0.9005 | 0.8971 |
| 0.2474 | 2.0 | 500 | 0.2218 | 0.9205 | 0.9208 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.12.1
|
BertChristiaens/EmojiPredictor | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2022-04-10T11:33:09Z | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- sst2
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-256_A-8-sst2
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **sst2** task. For weights initialization, we used [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased)
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
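As a rough illustration, a minimal sketch of running this distilled SST-2 student for sentiment classification; the checkpoint path is a placeholder (see the community checkpoints link above for published names).
```python
from transformers import pipeline

# Placeholder path: substitute the published SEAD checkpoint id or a local directory
classifier = pipeline("text-classification", model="path/to/SEAD-L-6_H-256_A-8-sst2")

print(classifier("The movie was surprisingly heartfelt and funny."))
```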
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import os
import torch

hyperparameters = torch.load(os.path.join('training_args.bin'))
```
### Evaluation results
| eval_accuracy | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:-------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.9266 | 1.3676 | 637.636 | 20.475 | 0.2503 | 872 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
|
Berzemu/Coco | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-10T11:41:07Z | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- sst2
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-384_A-12-sst2
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **sst2** task. For weights initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased)
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import os
import torch

hyperparameters = torch.load(os.path.join('training_args.bin'))
```
### Evaluation results
| eval_accuracy | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:-------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.9312 | 1.5334 | 568.684 | 18.261 | 0.2929 | 872 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
|
Betaniaolivo/Foto | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
datasets:
- cifar100
widget:
- src: https://huggingface.co/daveni/upside_down_classifier/resolve/main/meme_upside_down.jpg
example_title: Upside down example
- src: https://huggingface.co/daveni/upside_down_classifier/resolve/main/meme.jpg
example_title: Original example
---
# Upside Down Classifier |
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | ---
language:
- python
tags:
- conversation
--- |
CLTL/icf-levels-etn | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: syedyusufali/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# syedyusufali/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0900
- Validation Loss: 0.1200
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
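As a rough illustration of intended use, a minimal token-classification sketch; it assumes the checkpoint is published under the id in the title and, since this is a TensorFlow/Keras model, loads the TF weights explicitly.
```python
from transformers import pipeline

# Assumes the checkpoint is available under this id; adjust to a local directory if not
ner = pipeline(
    "token-classification",
    model="syedyusufali/bert-finetuned-ner",
    framework="tf",  # the card reports Keras/TensorFlow training
    aggregation_strategy="simple",
)
print(ner("Hugging Face was founded in New York City."))
```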
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2904 | 0.1482 | 0 |
| 0.1317 | 0.1186 | 1 |
| 0.0900 | 0.1200 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dccuchile/albert-xlarge-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | 2022-04-11T09:03:04Z | ---
license: apache-2.0
language: en
library: transformers
other: distilbert
datasets:
- Short Question Answer Assessment Dataset
---
# DistilBERT base uncased model for Short Question Answer Assessment
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model.
This is a classification model that solves the Short Question Answer Assessment task, fine-tuned from the [pretrained DistilBERT model](https://huggingface.co/distilbert-base-uncased) on the
[Question Answer Assessment dataset](#).
## Intended uses & limitations
This can only be used for the kind of questions and answers that are similar to the ones in the dataset of [Banjade et al.](https://aclanthology.org/W16-0520.pdf).
### How to use
You can use this model directly with a text-classification pipeline:
```python
>>> from transformers import pipeline
>>> classifier = pipeline("text-classification", model="Giyaseddin/distilbert-base-uncased-finetuned-short-answer-assessment", return_all_scores=True)
>>> context = "To rescue a child who has fallen down a well, rescue workers fasten him to a rope, the other end of which is then reeled in by a machine. The rope pulls the child straight upward at steady speed."
>>> question = "How does the amount of tension in the rope compare to the downward force of gravity acting on the child?"
>>> ref_answer = "Since the child is being raised straight upward at a constant speed, the net force on the child is zero and all the forces balance. That means that the tension in the rope balances the downward force of gravity."
>>> student_answer = "The tension force is higher than the force of gravity."
>>>
>>> body = " [SEP] ".join([context, question, ref_answer, student_answer])
>>> raw_results = classifier([body])
>>> raw_results
[[{'label': 'LABEL_0', 'score': 0.0004029414849355817},
{'label': 'LABEL_1', 'score': 0.0005476847873069346},
{'label': 'LABEL_2', 'score': 0.998059093952179},
{'label': 'LABEL_3', 'score': 0.0009902542224153876}]]
>>> _LABELS_ID2NAME = {0: "correct", 1: "correct_but_incomplete", 2: "contradictory", 3: "incorrect"}
>>> results = []
>>> for result in raw_results:
for score in result:
results.append([
{_LABELS_ID2NAME[int(score["label"][-1:])]: "%.2f" % score["score"]}
])
>>> results
[[{'correct': '0.00'}],
[{'correct_but_incomplete': '0.00'}],
[{'contradictory': '1.00'}],
[{'incorrect': '0.00'}]]
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
This bias will also affect all fine-tuned versions of this model.
Another limitation of this model is input length: longer sequences can lead to wrong predictions, because in the pre-processing phase (after concatenating the input sequences) the important student answer might be truncated.
## Pre-training data
DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset
consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
(excluding lists, tables and headers).
## Fine-tuning data
The annotated dataset consists of 900 students' short constructed answers and their correctness in the given context. Four qualitative levels of correctness are defined: correct, correct-but-incomplete, contradictory, and incorrect.
## Training procedure
### Preprocessing
In the preprocessing phase, the following parts are concatenated: _question context_, _question_, _reference_answer_, and _student_answer_ using the separator `[SEP]`.
The full input text then looks like:
```
[CLS] Context Sentence [SEP] Question Sentence [SEP] Reference Answer Sentence [SEP] Student Answer Sentence [CLS]
```
The data are split according to the following ratio:
- Training set 80%.
- Test set 20%.
Labels are mapped as: `{0: "correct", 1: "correct_but_incomplete", 2: "contradictory", 3: "incorrect"}`
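A minimal sketch of this preprocessing step (the record fields are illustrative placeholders; only the `[SEP]` joining and the label map come from this card):
```python
# Hypothetical example record; the field names are placeholders.
example = {
    "context": "To rescue a child who has fallen down a well, ...",
    "question": "How does the amount of tension in the rope compare ...?",
    "reference_answer": "Since the child is being raised straight upward ...",
    "student_answer": "The tension force is higher than the force of gravity.",
    "label": "contradictory",
}

_LABELS_NAME2ID = {"correct": 0, "correct_but_incomplete": 1, "contradictory": 2, "incorrect": 3}

# Concatenate the four parts with the [SEP] separator described above,
# then map the label name to its integer id.
text = " [SEP] ".join(
    [example["context"], example["question"], example["reference_answer"], example["student_answer"]]
)
label_id = _LABELS_NAME2ID[example["label"]]
```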
### Fine-tuning
The model was fine-tuned on a GeForce GTX 960M for 20 minutes. The parameters are:
| Parameter | Value |
|:-------------------:|:-----:|
| Learning rate | 5e-5 |
| Weight decay | 0.01 |
| Training batch size | 8 |
| Epochs | 4 |
Here are the scores during training:
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
|:----------:|:-------------:|:-----------------:|:----------:|:---------:|:----------:|:--------:|
| 1 | No log | 0.665765 | 0.755330 | 0.743574 | 0.781210 | 0.755330 |
| 2 | 0.932100 | 0.362124 | 0.890355 | 0.889875 | 0.891407 | 0.890355 |
| 3 | 0.364900 | 0.226225 | 0.942132 | 0.941802 | 0.942458 | 0.942132 |
| 4          | 0.176900      | 0.193660          | 0.954315   | 0.954175  | 0.954985   | 0.954315 |
## Evaluation results
When fine-tuned on the downstream task of Question Answer Assessment (4-class classification), this model achieved the following results
(scores are rounded to three decimal places):
| | precision | recall | f1-score | support |
|:------------------------:|:----------:|:-------:|:--------:|:-------:|
| _correct_ | 0.938 | 0.989 | 0.963 | 366 |
| _correct_but_incomplete_ | 0.975 | 0.922 | 0.948 | 257 |
| _contradictory_ | 0.946 | 0.938 | 0.942 | 113 |
| _incorrect_ | 0.963 | 0.944 | 0.953 | 249 |
| accuracy | - | - | 0.954 | 985 |
| macro avg | 0.956 | 0.948 | 0.952 | 985 |
| weighted avg | 0.955 | 0.954 | 0.954 | 985 |
Confusion matrix:
| Actual \ Predicted | _correct_ | _correct_but_incomplete_ | _contradictory_ | _incorrect_ |
|:------------------------:|:---------:|:------------------------:|:---------------:|:-----------:|
| _correct_ | 362 | 4 | 0 | 0 |
| _correct_but_incomplete_ | 13 | 237 | 0 | 7 |
| _contradictory_ | 4 | 1 | 106 | 2 |
| _incorrect_ | 7 | 1 | 6 | 235 |
The AUC scores are: micro = **0.9695** and macro = **0.9659**.
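As a sketch of how such micro/macro AUC values can be computed (this is an assumption about the methodology, not the exact evaluation script; it uses scikit-learn on one-hot labels and the predicted class probabilities):
```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

# Toy stand-ins for the test labels and the model's per-class probabilities.
y_true = np.array([0, 1, 2, 3, 0, 2])
y_prob = np.array([
    [0.90, 0.05, 0.03, 0.02],
    [0.10, 0.80, 0.05, 0.05],
    [0.05, 0.05, 0.85, 0.05],
    [0.02, 0.03, 0.05, 0.90],
    [0.70, 0.10, 0.10, 0.10],
    [0.20, 0.10, 0.60, 0.10],
])

y_true_onehot = label_binarize(y_true, classes=[0, 1, 2, 3])
print("micro AUC:", roc_auc_score(y_true_onehot, y_prob, average="micro"))
print("macro AUC:", roc_auc_score(y_true_onehot, y_prob, average="macro"))
```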
|
dccuchile/albert-xxlarge-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | 2022-04-11T09:03:47Z | ---
license: apache-2.0
language: en
library: transformers
other: distilroberta
datasets:
- Short Question Answer Assessment Dataset
---
# DistilRoBERTa base model for Short Question Answer Assessment
## Model description
The pre-trained model is a distilled version of the [RoBERTa-base model](https://huggingface.co/roberta-base). It follows the same training procedure as [DistilBERT](https://huggingface.co/distilbert-base-uncased).
The code for the distillation process can be found [here](https://github.com/huggingface/transformers/tree/master/examples/distillation).
This model is case-sensitive: it makes a difference between english and English.
The model has 6 layers, 768 hidden dimensions and 12 heads, for a total of 82M parameters (compared to 125M parameters for RoBERTa-base).
On average, DistilRoBERTa is twice as fast as RoBERTa-base.
We encourage you to check the [RoBERTa-base model](https://huggingface.co/roberta-base) card to learn more about usage, limitations and potential biases.
This is a classification model for the Short Question Answer Assessment task, fine-tuned from the [pretrained DistilRoBERTa model](https://huggingface.co/distilroberta-base) on the
[Question Answer Assessment dataset](#).
## Intended uses & limitations
This model can only be used for questions and answers that are similar to the ones in the dataset of [Banjade et al.](https://aclanthology.org/W16-0520.pdf).
### How to use
You can use this model directly with a text-classification pipeline:
```python
>>> from transformers import pipeline
>>> classifier = pipeline("text-classification", model="Giyaseddin/distilroberta-base-finetuned-short-answer-assessment", return_all_scores=True)
>>> context = "To rescue a child who has fallen down a well, rescue workers fasten him to a rope, the other end of which is then reeled in by a machine. The rope pulls the child straight upward at steady speed."
>>> question = "How does the amount of tension in the rope compare to the downward force of gravity acting on the child?"
>>> ref_answer = "Since the child is being raised straight upward at a constant speed, the net force on the child is zero and all the forces balance. That means that the tension in the rope balances the downward force of gravity."
>>> student_answer = "The tension force is higher than the force of gravity."
>>>
>>> body = " [SEP] ".join([context, question, ref_answer, student_answer])
>>> raw_results = classifier([body])
>>> raw_results
[[{'label': 'LABEL_0', 'score': 0.0004029414849355817},
{'label': 'LABEL_1', 'score': 0.0005476847873069346},
{'label': 'LABEL_2', 'score': 0.998059093952179},
{'label': 'LABEL_3', 'score': 0.0009902542224153876}]]
>>> _LABELS_ID2NAME = {0: "correct", 1: "correct_but_incomplete", 2: "contradictory", 3: "incorrect"}
>>> results = []
>>> for result in raw_results:
for score in result:
results.append([
{_LABELS_ID2NAME[int(score["label"][-1:])]: "%.2f" % score["score"]}
])
>>> results
[[{'correct': '0.00'}],
[{'correct_but_incomplete': '0.00'}],
[{'contradictory': '1.00'}],
[{'incorrect': '0.00'}]]
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
This bias will also affect all fine-tuned versions of this model.
Another limitation of this model is input length: longer sequences can lead to wrong predictions, because in the pre-processing phase (after concatenating the input sequences) the important student answer might be truncated.
## Pre-training data
The RoBERTa model was pretrained on the reunion of five datasets:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books;
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers) ;
- [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 million English news
articles crawled between September 2016 and February 2019.
- [OpenWebText](https://github.com/jcpeterson/openwebtext), an opensource recreation of the WebText dataset used to
train GPT-2,
- [Stories](https://arxiv.org/abs/1806.02847) a dataset containing a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas.
Together these datasets weigh 160GB of text.
## Fine-tuning data
The annotated dataset consists of 900 students' short constructed answers and their correctness in the given context. Four qualitative levels of correctness are defined: correct, correct-but-incomplete, contradictory, and incorrect.
## Training procedure
### Preprocessing
In the preprocessing phase, the following parts are concatenated: _question context_, _question_, _reference_answer_, and _student_answer_ using the separator `[SEP]`.
The full input text then looks like:
```
[CLS] Context Sentence [SEP] Question Sentence [SEP] Reference Answer Sentence [SEP] Student Answer Sentence [CLS]
```
The data are split according to the following ratio:
- Training set 80%.
- Test set 20%.
Labels are mapped as: `{0: "correct", 1: "correct_but_incomplete", 2: "contradictory", 3: "incorrect"}`
### Fine-tuning
The model was fine-tuned on a GeForce GTX 960M for 20 minutes. The parameters are:
| Parameter | Value |
|:-------------------:|:-----:|
| Learning rate | 5e-5 |
| Weight decay | 0.01 |
| Training batch size | 8 |
| Epochs | 4 |
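A rough sketch of how these settings map onto a `Trainer` run (the model head, output directory and dataset objects are assumptions; only the hyperparameters come from the table above):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("distilroberta-base", num_labels=4)
tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")

args = TrainingArguments(
    output_dir="short-answer-assessment",  # placeholder
    learning_rate=5e-5,                    # from the table above
    weight_decay=0.01,                     # from the table above
    per_device_train_batch_size=8,         # from the table above
    num_train_epochs=4,                    # from the table above
    evaluation_strategy="epoch",
)

# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=train_ds, eval_dataset=eval_ds)  # tokenized 80/20 split
# trainer.train()
```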
Here are the scores during training:
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
|:----------:|:-------------:|:-----------------:|:----------:|:---------:|:----------:|:--------:|
| 1 | No log | 0.773334 | 0.713706 | 0.711398 | 0.746059 | 0.713706 |
| 2 | 1.069200 | 0.404932 | 0.885279 | 0.884592 | 0.886699 | 0.885279 |
| 3 | 0.473700 | 0.247099 | 0.931980 | 0.931675 | 0.933794 | 0.931980 |
| 4          | 0.228000      | 0.205577          | 0.954315   | 0.954210  | 0.955258   | 0.954315 |
## Evaluation results
When fine-tuned on the downstream task of Question Answer Assessment (4-class classification), this model achieved the following results
(scores are rounded to three decimal places):
| | precision | recall | f1-score | support |
|:------------------------:|:----------:|:-------:|:--------:|:-------:|
| _correct_ | 0.933 | 0.992 | 0.962 | 366 |
| _correct_but_incomplete_ | 0.976 | 0.934 | 0.954 | 257 |
| _contradictory_ | 0.938 | 0.929 | 0.933 | 113 |
| _incorrect_ | 0.975 | 0.932 | 0.953 | 249 |
| accuracy | - | - | 0.954 | 985 |
| macro avg | 0.955 | 0.947 | 0.950 | 985 |
| weighted avg | 0.955 | 0.954 | 0.954 | 985 |
Confusion matrix:
| Actual \ Predicted | _correct_ | _correct_but_incomplete_ | _contradictory_ | _incorrect_ |
|:------------------------:|:---------:|:------------------------:|:---------------:|:-----------:|
| _correct_ | 363 | 3 | 0 | 0 |
| _correct_but_incomplete_ | 14 | 240 | 0 | 3 |
| _contradictory_ | 5 | 0 | 105 | 3 |
| _incorrect_ | 7 | 3 | 7 | 232 |
The AUC scores are: micro = **0.9695** and macro = **0.9650**.
|
dccuchile/albert-xxlarge-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
language:
- en
datasets:
- cifar10
---
# How to run locally from GitHub
- [ ] ```git clone https://github.com/majauhar/UpsideDownDetector.git```
- [ ] ```cd UpsideDownDetector```
- [ ] ```pip install -r requirements.txt```
- [ ] ```python main.py --epochs=<Integer> --pretrained=[True/False]```
|
dccuchile/albert-xxlarge-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68 | null | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-efficient-base-finetuned-1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-efficient-base-finetuned-1.2
This model is a fine-tuned version of [google/t5-efficient-base](https://huggingface.co/google/t5-efficient-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5294
- Rouge1: 62.691
- Rouge2: 55.9731
- Rougel: 60.9097
- Rougelsum: 61.4393
## Model description
More information needed
## Intended uses & limitations
More information needed
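No usage example is documented in this card; a minimal inference sketch might look like the following (the repo id namespace is a placeholder and the generation parameters are arbitrary):
```python
from transformers import pipeline

# "<namespace>/t5-efficient-base-finetuned-1.2" is a placeholder for the actual repo id.
summarizer = pipeline("summarization", model="<namespace>/t5-efficient-base-finetuned-1.2")

text = "Long input document to be summarized ..."
print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```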
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4662
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.2424 | 1.0 | 1217 | 1.7042 | 34.2215 | 24.2754 | 31.7289 | 32.4237 |
| 1.7716 | 2.0 | 2434 | 1.6184 | 43.4774 | 34.0476 | 41.3691 | 41.9132 |
| 1.6324 | 3.0 | 3651 | 1.5811 | 49.1441 | 40.7935 | 47.0077 | 47.6388 |
| 1.5226 | 4.0 | 4868 | 1.5243 | 54.4769 | 46.3387 | 52.3289 | 52.9555 |
| 1.4121 | 5.0 | 6085 | 1.5040 | 56.8792 | 49.1963 | 54.7327 | 55.2805 |
| 1.331 | 6.0 | 7302 | 1.4930 | 58.6896 | 51.1683 | 56.7096 | 57.3605 |
| 1.2677 | 7.0 | 8519 | 1.4785 | 59.9285 | 52.4631 | 57.8575 | 58.4203 |
| 1.2175 | 8.0 | 9736 | 1.4839 | 60.0299 | 52.8806 | 58.0099 | 58.6348 |
| 1.1782 | 9.0 | 10953 | 1.4908 | 61.247 | 54.0887 | 59.2175 | 59.7658 |
| 1.1442 | 10.0 | 12170 | 1.4882 | 61.9895 | 54.9455 | 60.0728 | 60.5786 |
| 1.1118 | 11.0 | 13387 | 1.5061 | 62.1077 | 55.1276 | 60.2218 | 60.7475 |
| 1.081 | 12.0 | 14604 | 1.5078 | 61.6083 | 54.6805 | 59.7912 | 60.2489 |
| 1.0668 | 13.0 | 15821 | 1.5200 | 62.3075 | 55.5201 | 60.5192 | 60.9557 |
| 1.0488 | 14.0 | 17038 | 1.5344 | 62.5144 | 55.6332 | 60.6845 | 61.1715 |
| 1.0324 | 15.0 | 18255 | 1.5313 | 62.7697 | 56.0313 | 60.9298 | 61.4739 |
| 1.0302 | 16.0 | 19472 | 1.5294 | 62.691 | 55.9731 | 60.9097 | 61.4393 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
dccuchile/albert-xlarge-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 91 | 2022-04-11T10:27:35Z | ---
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9319354838709677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-finetuned-clinc
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5252
- Accuracy: 0.9319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 60 | 4.6555 | 0.1887 |
| No log | 2.0 | 120 | 3.8771 | 0.4784 |
| No log | 3.0 | 180 | 3.2507 | 0.7352 |
| 3.9668 | 4.0 | 240 | 2.7445 | 0.8365 |
| 3.9668 | 5.0 | 300 | 2.3475 | 0.8865 |
| 3.9668 | 6.0 | 360 | 2.0370 | 0.8926 |
| 3.9668 | 7.0 | 420 | 1.8099 | 0.9145 |
| 2.0924 | 8.0 | 480 | 1.6433 | 0.9190 |
| 2.0924 | 9.0 | 540 | 1.5563 | 0.9281 |
| 2.0924 | 10.0 | 600 | 1.5252 | 0.9319 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-mldoc | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | 2022-04-11T10:39:58Z | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain π€"
datasets:
- yogi/autotrain-data-amazon_text_sum
co2_eq_emissions: 2986.6520132805163
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 730222226
- CO2 Emissions (in grams): 2986.6520132805163
## Validation Metrics
- Loss: 2.682709217071533
- Rouge1: 19.6069
- Rouge2: 7.3367
- RougeL: 19.2706
- RougeLsum: 19.286
- Gen Len: 5.5731
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/yogi/autotrain-amazon_text_sum-730222226
``` |
dccuchile/bert-base-spanish-wwm-cased-finetuned-pawsx | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | # DistilBERT with word2vec token embeddings
This model has a word2vec token embedding matrix with 256k entries. The word2vec embeddings were trained on 100GB of data from C4, MSMARCO, News, Wikipedia, and S2ORC for 3 epochs.
The model was then trained on the same data with MLM for 1.37M steps (batch size 64). The token embeddings were NOT updated.
For the initial word2vec weights with Gensim see: [https://huggingface.co/vocab-transformers/distilbert-word2vec_256k-MLM_1M/tree/main/word2vec](https://huggingface.co/vocab-transformers/distilbert-word2vec_256k-MLM_1M/tree/main/word2vec)
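A minimal sketch of what keeping the token embeddings frozen corresponds to in code (illustrative only; the checkpoint id is taken from the link above and the actual training loop is omitted):
```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("vocab-transformers/distilbert-word2vec_256k-MLM_1M")

# Freeze the 256k-entry word2vec token embedding matrix so that MLM training
# only updates the transformer layers, not the embeddings.
model.distilbert.embeddings.word_embeddings.weight.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```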
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-pos | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2022-04-11T11:14:12Z | # DistilBERT with 256k token embeddings
This model was initialized with a word2vec token embedding matrix with 256k entries, but these token embeddings were updated during MLM. The word2vec embeddings were trained on 100GB of data from C4, MSMARCO, News, Wikipedia, and S2ORC for 3 epochs.
The model was then trained on the same data with MLM for 1.55M steps (batch size 64). The token embeddings were updated during MLM.
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-xnli | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | 2022-04-11T11:18:50Z | ---
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.94
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-distilled-clinc
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3479
- Accuracy: 0.94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 60 | 0.8171 | 0.2490 |
| No log | 2.0 | 120 | 0.7039 | 0.6568 |
| No log | 3.0 | 180 | 0.6067 | 0.7932 |
| 0.7269 | 4.0 | 240 | 0.5270 | 0.8674 |
| 0.7269 | 5.0 | 300 | 0.4659 | 0.9010 |
| 0.7269 | 6.0 | 360 | 0.4201 | 0.9194 |
| 0.7269 | 7.0 | 420 | 0.3867 | 0.9352 |
| 0.4426 | 8.0 | 480 | 0.3649 | 0.9352 |
| 0.4426 | 9.0 | 540 | 0.3520 | 0.9403 |
| 0.4426 | 10.0 | 600 | 0.3479 | 0.94 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pawsx | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-cifar10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9788888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-cifar10
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0690
- Accuracy: 0.9789
## Model description
More information needed
## Intended uses & limitations
More information needed
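For reference, a minimal inference sketch (the repo id namespace is a placeholder, and the label set depends on the `image_folder` data the model was trained on):
```python
from transformers import pipeline

# "<namespace>/swin-tiny-patch4-window7-224-finetuned-cifar10" is a placeholder for the actual repo id.
classifier = pipeline(
    "image-classification",
    model="<namespace>/swin-tiny-patch4-window7-224-finetuned-cifar10",
)

# Accepts a local path, a PIL.Image or a URL.
print(classifier("path/to/image.png"))
```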
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2446 | 1.0 | 190 | 0.1128 | 0.9659 |
| 0.1722 | 2.0 | 380 | 0.1034 | 0.9663 |
| 0.1355 | 3.0 | 570 | 0.0690 | 0.9789 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-qa-mlqa | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2022-04-11T12:42:06Z | ---
tags:
- espnet
- audio
- audio-to-audio
language:
datasets:
- chime4
license: cc-by-4.0
---
## ESPnet2 ENH model
### `espnet/Wangyou_Zhang_chime4_enh_train_enh_dc_crn_mapping_snr_raw`
This model was trained by Wangyou Zhang using chime4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/chime4/enh1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/Wangyou_Zhang_chime4_enh_train_enh_dc_crn_mapping_snr_raw
```
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_dc_crn_mapping_snr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_dc_crn_mapping_snr_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 43524
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- si_snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 32000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/tr05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dt05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/dt05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 1.0e-07
amsgrad: true
scheduler: steplr
scheduler_conf:
step_size: 2
gamma: 0.98
init: xavier_uniform
model_conf:
stft_consistency: false
loss_type: mask_mse
mask_type: null
criterions:
- name: snr
conf:
eps: 1.0e-07
wrapper: pit
wrapper_conf:
weight: 1.0
use_preprocessor: false
encoder: stft
encoder_conf:
n_fft: 256
hop_length: 128
separator: dc_crn
separator_conf:
num_spk: 1
input_channels:
- 10
- 16
- 32
- 64
- 128
- 256
enc_hid_channels: 8
enc_layers: 5
glstm_groups: 2
glstm_layers: 2
glstm_bidirectional: true
glstm_rearrange: false
mode: mapping
ref_channel: 3
decoder: stft
decoder_conf:
n_fft: 256
hop_length: 128
required:
- output_dir
version: 0.10.7a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
booktitle={Proc. IEEE Spoken Language Technology Workshop (SLT)},
pages={785--792},
year={2021},
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
year={2020},
eprint={2011.03706},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-xnli | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: van-base-finetuned-eurosat-imgaug
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9885185185185185
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# van-base-finetuned-eurosat-imgaug
This model is a fine-tuned version of [Visual-Attention-Network/van-base](https://huggingface.co/Visual-Attention-Network/van-base) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0379
- Accuracy: 0.9885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0887 | 1.0 | 190 | 0.0589 | 0.98 |
| 0.055 | 2.0 | 380 | 0.0390 | 0.9878 |
| 0.0223 | 3.0 | 570 | 0.0379 | 0.9885 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2022-04-11T13:02:18Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-mlm
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0870
- Accuracy: 0.7576
## Model description
More information needed
## Intended uses & limitations
More information needed
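Since the checkpoint is a masked-language model, a minimal fill-mask sketch would look like this (the repo id is a placeholder; `[MASK]` is the DeBERTa mask token):
```python
from transformers import pipeline

# "<namespace>/test-mlm" is a placeholder for the actual repo id.
fill_mask = pipeline("fill-mask", model="<namespace>/test-mlm")

for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```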
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dccuchile/distilbert-base-spanish-uncased-finetuned-xnli | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
tags:
- espnet
- audio
- audio-to-audio
language: en
datasets:
- wsj0_2mix
license: cc-by-4.0
---
## ESPnet2 ENH model
### `lichenda/Chenda_Li_wsj0_2mix_enh_dprnn_tasnet`
This model was trained by LiChenda using wsj0_2mix recipe in [espnet](https://github.com/espnet/espnet/).
Imported from [zenodo](https://zenodo.org/record/4688000).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 54919e2529d6f58f4550d4a72960f57b83f66dc9
pip install -e .
cd egs2/wsj0_2mix/enh1
./run.sh --skip_data_prep false --skip_train true --download_model lichenda/Chenda_Li_wsj0_2mix_enh_dprnn_tasnet
```
<!-- Generated by ./scripts/utils/show_enh_score.sh -->
# RESULTS
## Environments
- date: `Thu Apr 15 00:03:19 CST 2021`
- python version: `3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0]`
- espnet version: `espnet 0.9.8`
- pytorch version: `pytorch 1.5.0`
- Git hash: `2aa2f151b5929dc9ffa4df39a8d8c26ca4dbdb85`
- Commit date: `Tue Mar 30 09:08:27 2021 +0900`
## enh_train_enh_dprnn_tasnet_raw
config: conf/tuning/train_enh_dprnn_tasnet.yaml
|dataset|STOI|SAR|SDR|SIR|
|---|---|---|---|---|
|enhanced_cv_min_8k|0.960037|19.0476|18.5438|29.1591|
|enhanced_tt_min_8k|0.968376|18.8209|18.2925|28.929|
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_dprnn_tasnet.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_dprnn_tasnet_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45126
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 150
patience: 4
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- si_snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
detect_anomaly: false
pretrain_path: null
init_param: []
freeze_param: []
num_iters_per_epoch: null
batch_size: 4
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_8k/train/speech_mix_shape
- exp/enh_stats_8k/train/speech_ref1_shape
- exp/enh_stats_8k/train/speech_ref2_shape
valid_shape_file:
- exp/enh_stats_8k/valid/speech_mix_shape
- exp/enh_stats_8k/valid/speech_ref1_shape
- exp/enh_stats_8k/valid/speech_ref2_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 32000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_min_8k/wav.scp
- speech_mix
- sound
- - dump/raw/tr_min_8k/spk1.scp
- speech_ref1
- sound
- - dump/raw/tr_min_8k/spk2.scp
- speech_ref2
- sound
valid_data_path_and_name_and_type:
- - dump/raw/cv_min_8k/wav.scp
- speech_mix
- sound
- - dump/raw/cv_min_8k/spk1.scp
- speech_ref1
- sound
- - dump/raw/cv_min_8k/spk2.scp
- speech_ref2
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 0
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.7
patience: 1
init: xavier_uniform
model_conf:
loss_type: si_snr
use_preprocessor: false
encoder: conv
encoder_conf:
channel: 64
kernel_size: 2
stride: 1
separator: dprnn
separator_conf:
num_spk: 2
layer: 6
rnn_type: lstm
bidirectional: true
nonlinear: relu
unit: 128
segment_size: 250
dropout: 0.1
decoder: conv
decoder_conf:
channel: 64
kernel_size: 2
stride: 1
required:
- output_dir
version: 0.9.8
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{ESPnet-SE,
author = {Chenda Li and Jing Shi and Wangyou Zhang and Aswin Shanmugam Subramanian and Xuankai Chang and
Naoyuki Kamo and Moto Hira and Tomoki Hayashi and Christoph B{\"{o}}ddeker and Zhuo Chen and Shinji Watanabe},
title = {ESPnet-SE: End-To-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
booktitle = {{IEEE} Spoken Language Technology Workshop, {SLT} 2021, Shenzhen, China, January 19-22, 2021},
pages = {785--792},
publisher = {{IEEE}},
year = {2021},
url = {https://doi.org/10.1109/SLT48900.2021.9383615},
doi = {10.1109/SLT48900.2021.9383615},
timestamp = {Mon, 12 Apr 2021 17:08:59 +0200},
biburl = {https://dblp.org/rec/conf/slt/Li0ZSCKHHBC021.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
dccuchile/distilbert-base-spanish-uncased | [
"pytorch",
"distilbert",
"fill-mask",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 670 | 2022-04-11T13:17:43Z | ---
tags:
- espnet
- audio
- audio-to-audio
language:
datasets:
- chime4
license: cc-by-4.0
---
## ESPnet2 ENH model
### `espnet/Wangyou_Zhang_chime4_enh_train_enh_conv_tasnet_raw`
This model was trained by Wangyou Zhang using chime4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/chime4/enh1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/Wangyou_Zhang_chime4_enh_train_enh_conv_tasnet_raw
```
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_conv_tasnet.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_conv_tasnet_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 57680
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 4
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- si_snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
unused_parameters: false
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
pretrain_path: null
init_param: []
freeze_param: []
num_iters_per_epoch: null
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 32000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_simu_isolated_1ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/tr05_simu_isolated_1ch_track/spk1.scp
- speech_ref1
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dt05_simu_isolated_1ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/dt05_simu_isolated_1ch_track/spk1.scp
- speech_ref1
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 1.0e-05
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.5
patience: 3
init: xavier_uniform
model_conf:
loss_type: si_snr
use_preprocessor: false
encoder: conv
encoder_conf:
channel: 256
kernel_size: 20
stride: 10
separator: tcn
separator_conf:
num_spk: 1
layer: 8
stack: 4
bottleneck_dim: 256
hidden_dim: 512
kernel: 3
causal: false
norm_type: gLN
nonlinear: relu
decoder: conv
decoder_conf:
channel: 256
kernel_size: 20
stride: 10
required:
- output_dir
version: 0.9.7
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
booktitle={Proc. IEEE Spoken Language Technology Workshop (SLT)},
pages={785--792},
year={2021},
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
year={2020},
eprint={2011.03706},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
|
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate-1 | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2022-04-11T13:32:35Z | ---
license: mit
tags:
- nowcasting
- forecasting
- timeseries
- remote-sensing
---
# Nowcasting CNN
## Model description
A 3D convolutional model that takes in different data streams.
The architecture is roughly:
1. The satellite image time series goes into several 3D convolution layers.
2. The NWP (numerical weather prediction) time series goes into several 3D convolution layers.
3. The final convolutional layer feeds into a fully connected layer. This is joined by
other data inputs such as
- PV yield
- time variables
There are then ~4 fully connected layers which forecast the
PV yield / GSP output into the future; a rough sketch of this layout is given below.
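A minimal PyTorch sketch of this layout (all layer sizes, channel counts and input shapes are assumptions for illustration; the actual model is not reproduced here):
```python
import torch
import torch.nn as nn


class NowcastingCNN(nn.Module):
    """Illustrative sketch only: two 3D-conv encoders joined with auxiliary inputs."""

    def __init__(self, sat_channels=12, nwp_channels=10, aux_features=16, horizon=24):
        super().__init__()

        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv3d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )

        self.sat_encoder = encoder(sat_channels)   # satellite image time series
        self.nwp_encoder = encoder(nwp_channels)   # NWP time series
        self.head = nn.Sequential(                 # ~4 fully connected layers
            nn.Linear(64 + 64 + aux_features, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, horizon),                # forecast of PV yield / GSP output
        )

    def forward(self, sat, nwp, aux):
        # sat, nwp: (batch, channels, time, height, width); aux: (batch, aux_features)
        features = torch.cat([self.sat_encoder(sat), self.nwp_encoder(nwp), aux], dim=1)
        return self.head(features)


model = NowcastingCNN()
out = model(torch.randn(2, 12, 6, 24, 24), torch.randn(2, 10, 6, 24, 24), torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 24])
```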
## Intended uses & limitations
Forecasting short term PV power for different regions and nationally in the UK
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
Training data is EUMETSAT RSS imagery over the UK, on-the-ground PV data, and NWP predictions.
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-04-11T13:38:35Z | ---
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.94
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Neuron conversion
# MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-distilled-clinc
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9389999
## Deploy/use Model
If you want to use this model, check out the following notebook: [sagemaker/18_inferentia_inference](https://github.com/huggingface/notebooks/blob/main/sagemaker/18_inferentia_inference/sagemaker-notebook.ipynb)
```python
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
model_data=s3_model_uri, # path to your model and script
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.12", # transformers version used
pytorch_version="1.9", # pytorch version used
py_version='py37', # python version used
)
# Let SageMaker know that we've already compiled the model via neuron-cc
huggingface_model._is_compiled_model = True
# deploy the endpoint
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type="ml.inf1.xlarge" # AWS Inferentia Instance
)
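# Example request for the deployed text-classification endpoint
# (assumption: the default Hugging Face inference toolkit handler,
# which expects a JSON payload with an "inputs" key)
result = predictor.predict({"inputs": "how do I reset my online banking password"})
print(result)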
``` |
Chaewon/mmnt_decoder_en | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language: nl
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- robbert
datasets:
- clips/mqa
---
# jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base). It was fine-tuned on 1,000,000 rows of Dutch FAQ question-answer pairs from [clips/mqa](https://huggingface.co/datasets/clips/mqa).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned')
model = AutoModel.from_pretrained('jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 12500 with parameters:
```
{'batch_size': 80, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
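For reference, a minimal sketch of how the logged parameters above map onto the sentence-transformers `fit()` API (the dataloader construction and the example pair are assumptions; the real training data are the 1,000,000 Dutch FAQ pairs from clips/mqa):

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Start from the base Dutch RoBERTa model (mean pooling is added automatically)
model = SentenceTransformer('pdelobelle/robbert-v2-dutch-base')

# Placeholder question-answer pair; the actual data come from clips/mqa
train_examples = [
    InputExample(texts=["Hoe reset ik mijn wachtwoord?", "Klik op 'wachtwoord vergeten'."])
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=80)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    scheduler="WarmupLinear",
    warmup_steps=10000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```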
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Chaewon/mnmt_decoder_en | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-04-11T13:40:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- type: precision
value: 0.9227969559942649
name: Precision
- type: recall
value: 0.9360107394563151
name: Recall
- type: f1
value: 0.9293568810396535
name: F1
- type: accuracy
value: 0.9833034139831922
name: Accuracy
- task:
type: token-classification
name: Token Classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: test
metrics:
- type: accuracy
value: 0.973914094330502
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmJmZTE4OGY4MmNlZGJmMzJmZGYxMjQ5Nzc4MzEzODU2YjQwZWQ1ZDU1N2NmN2M2YjliZTQ3MmZhZjA2OGYwNCIsInZlcnNpb24iOjF9.w_Y03WPSKDkQnyC3FFw4qtffWqg4ZbjJ6zyIEl6dKTCf6rgrjbhJKIb3MsOIw34Ydb-M3TTpV2Ak43bsaXQ-DA
- type: precision
value: 0.9791360147483736
name: Precision
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODY2YjUwYTk5NGE1YWJlYjMyM2MyZGU4ZjE2MTM1ZGZiZDg4MTFjMGRkNzI5ODQ0ZTBlMmVkYzkyODIwYjgxMCIsInZlcnNpb24iOjF9.nChULEs9H0UFNtlM4m_kuBm9Ch981r7V4Axo1yvPIoPAPd6GyCopO615pyjd7bwXxYy4_nQpc1cBI5iY0OkHDA
- type: recall
value: 0.9793269742207723
name: Recall
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmI0MzRkZjY4M2Y5YWE1OTdjNDNlN2NmNDVhMmEwODI2MmM1ZTViNDc1NzllZDdkOWZiZWVkMjQxNGM0YTQyZCIsInZlcnNpb24iOjF9.jS1iBDeJK7_QB7kanNxyfAnZm0HdS_EqBPjBCVhYCPEMRLnuXeuztdz_G4MczcZV6F2RoDjLJzxJdbuzKN1eCw
- type: f1
value: 0.9792314851748437
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmQ1MjM1ZDU2YzlmY2JkYTU0MjU5MTIzNDc3MDZmNzJjZmNkNzI1ZDY0MWFmYjBhZjI5NTg3ZjY0NGFlYWZmOSIsInZlcnNpb24iOjF9.BtgL5tCizs8iH7LHOfl1aRfaW0Nxfx6kWldUmWbjDk_McZrK6BRxFnHDscVZ1wUa11rX1IjgC1_DOcMNBXq6BQ
- type: loss
value: 0.10710480064153671
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWU0MDY3OTAxZTUyNmNlMjA1MDdiNTg4ZmI4MTJmMDYyMTY4MjZjYzNkODFlMDY1M2RjMjMyNDkzNzBkMmQzNiIsInZlcnNpb24iOjF9.dU5jfYPYWXkiebzZ_c4HTxui6RoYYfAdShcSzXBY0v-pB9FEwm_-8vHOtT-rK_s_EwifpPobRfdpXL2Y7C33CA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Precision: 0.9228
- Recall: 0.9360
- F1: 0.9294
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
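A minimal usage sketch (assumptions: the standard `transformers` token-classification pipeline; the model id below is a placeholder for this repository):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="path-or-repo-id-of-this-model",  # placeholder, replace with this repository's id
    aggregation_strategy="simple",          # group word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```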
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2433 | 1.0 | 878 | 0.0732 | 0.9079 | 0.9190 | 0.9134 | 0.9795 |
| 0.0553 | 2.0 | 1756 | 0.0599 | 0.9170 | 0.9333 | 0.9251 | 0.9826 |
| 0.0305 | 3.0 | 2634 | 0.0614 | 0.9228 | 0.9360 | 0.9294 | 0.9833 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Ciruzzo/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-04-11T14:31:02Z | ---
license: mit
---
This Repository includes the files required to run the `Computer Science Named Entity Recognition (CS-NER)` ORKG-NLP service.
Please check [this article](https://orkg-nlp-pypi.readthedocs.io/en/latest/services/services.html) for more details about the service. |
Clarianliz30/Caitlyn | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-11T14:52:31Z | ---
library_name: keras
---
This model is a TensorFlow port of DINO [1] ViT B-16 [2]. The backbone of this model was pre-trained with the DINO pretext task; after that, its head layer was trained while keeping the backbone frozen. The ImageNet-1k dataset was used for training. You can refer to [this notebook](https://github.com/sayakpaul/probing-vits/blob/main/notebooks/load-dino-weights-vitb16.ipynb) to see how the porting was done.
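A minimal loading sketch (assumptions: the repository stores a Keras `SavedModel` that `huggingface_hub.from_pretrained_keras` can restore, a 224x224 RGB input size, and no extra preprocessing; the repository id below is a placeholder):

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("path-or-repo-id-of-this-model")  # placeholder id

# Dummy input just to exercise the forward pass; real images may need normalization
dummy = np.random.rand(1, 224, 224, 3).astype("float32")
logits = model.predict(dummy)
print(logits.shape)
```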
## References
[1] Emerging Properties in Self-Supervised Vision Transformers: https://arxiv.org/abs/2104.14294
[2] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929 |
ClaudeCOULOMBE/RickBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- generated_from_trainer
model-index:
- name: ls-timit-100percent-supervised-meta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ls-timit-100percent-supervised-meta
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0649
- Wer: 0.0253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
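The hyperparameters above map roughly onto the 🤗 `TrainingArguments` API as sketched below (assumptions: a single training device and the `Trainer` API; `output_dir` is illustrative):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ls-timit-100percent-supervised-meta",  # illustrative
    learning_rate=1e-4,
    per_device_train_batch_size=32,  # assuming one device
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
    fp16=True,  # "Native AMP"
)
```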
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0964 | 7.04 | 1000 | 0.0706 | 0.0342 |
| 0.0445 | 14.08 | 2000 | 0.0649 | 0.0253 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
Venkatakrishnan-Ramesh/Text_gen | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-11T15:53:07Z | ---
library_name: keras
---
This model is a TensorFlow port of ViT B-16 [1] trained with recipes from [2]. It was first pre-trained on ImageNet-21k and was then fine-tuned on the ImageNet-1k dataset. You can refer to [this notebook](https://github.com/sayakpaul/probing-vits/blob/main/notebooks/load-jax-weights-vitb16.ipynb) to know how the porting was done.
## References
[1] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929
[2] How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270 |
Cryptikdw/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
--- |
Cthyllax/DialoGPT-medium-PaladinDanse | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 27.9273
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2915
- Bleu: 27.9273
- Gen Len: 34.0935
## Model description
More information needed
## Intended uses & limitations
More information needed
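A minimal usage sketch (assumptions: the standard `transformers` translation pipeline; the model id below is a placeholder for this repository):

```python
from transformers import pipeline

translator = pipeline(
    "translation_en_to_ro",
    model="path-or-repo-id-of-this-model",  # placeholder, replace with this repository's id
)
print(translator("The weather is beautiful today.", max_length=64))
```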
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7448 | 1.0 | 38145 | 1.2915 | 27.9273 | 34.0935 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Culmenus/IceBERT-finetuned-ner | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ParulChaudhari/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ParulChaudhari/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the SQuAD dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3927
- Validation Loss: 1.1305
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 177048, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.3927 | 1.1305 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.5.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
Culmenus/opus-mt-de-is-finetuned-de-to-is | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-wikihow_3epoch_b8_lr3e-5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 25.9411
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikihow_3epoch_b8_lr3e-5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4836
- Rouge1: 25.9411
- Rouge2: 9.226
- Rougel: 21.9087
- Rougelsum: 25.2863
- Gen Len: 18.4076
## Model description
More information needed
## Intended uses & limitations
More information needed
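A minimal usage sketch (assumptions: the standard `transformers` summarization pipeline, which applies T5's `summarize:` prefix via the model config; the model id below is a placeholder for this repository):

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="path-or-repo-id-of-this-model",  # placeholder, replace with this repository's id
)
article = "Long how-to article text goes here ..."
print(summarizer(article, max_length=64, min_length=10, do_sample=False))
```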
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.912 | 0.25 | 5000 | 2.6285 | 23.6659 | 7.8535 | 19.9837 | 22.9884 | 18.3867 |
| 2.8115 | 0.51 | 10000 | 2.5820 | 24.7979 | 8.4888 | 20.8719 | 24.1321 | 18.3292 |
| 2.767 | 0.76 | 15000 | 2.5555 | 25.0857 | 8.6437 | 21.149 | 24.4256 | 18.2981 |
| 2.742 | 1.02 | 20000 | 2.5330 | 25.3431 | 8.8393 | 21.425 | 24.7032 | 18.3749 |
| 2.7092 | 1.27 | 25000 | 2.5203 | 25.5338 | 8.9281 | 21.5378 | 24.9045 | 18.3399 |
| 2.6989 | 1.53 | 30000 | 2.5065 | 25.4792 | 8.9745 | 21.4941 | 24.8458 | 18.4565 |
| 2.6894 | 1.78 | 35000 | 2.5018 | 25.6815 | 9.1218 | 21.6958 | 25.0557 | 18.406 |
| 2.6897 | 2.03 | 40000 | 2.4944 | 25.8241 | 9.2127 | 21.8205 | 25.1801 | 18.4228 |
| 2.6664 | 2.29 | 45000 | 2.4891 | 25.8241 | 9.1662 | 21.7807 | 25.1615 | 18.4258 |
| 2.6677 | 2.54 | 50000 | 2.4855 | 25.7435 | 9.145 | 21.765 | 25.0858 | 18.4329 |
| 2.6631 | 2.8 | 55000 | 2.4836 | 25.9411 | 9.226 | 21.9087 | 25.2863 | 18.4076 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc_2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
model-index:
- name: ft-pt-br-local
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-pt-br-local
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-portuguese](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-portuguese) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
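A minimal usage sketch (assumptions: the standard `transformers` CTC speech-recognition pipeline and 16 kHz input audio; the model id below is a placeholder for this repository):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="path-or-repo-id-of-this-model",  # placeholder, replace with this repository's id
)
# Path to a local audio file sampled at 16 kHz (decoding via ffmpeg)
print(asr("sample_16khz.wav"))
```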
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_nr2-finetuned-de-to-is_nr2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9327
- Mean Iou: 0.0763
- Mean Accuracy: 0.1260
- Overall Accuracy: 0.5923
- Per Category Iou: [nan, 0.15598158400203022, 0.6233750625153907, 0.0037560777123078824, 0.026995519273962765, 0.027599075064035524, 0.0, 0.0010671752114502803, 0.0, 0.0, 0.503652156236298, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.42226922942999406, 0.0, 0.0005751844669974061, 0.0, 0.0, 0.0, 0.015053303500921295, 0.0, 0.0, 0.0, 0.5380260834627074, 0.2004924888392474, 0.07113330974397604, 7.792680075848753e-05, 0.000328515111695138, 0.0025085129486024, 0.0]
- Per Category Accuracy: [nan, 0.17282441039529764, 0.9228726118961177, 0.00408103876916878, 0.028255152590055656, 0.029544523907019265, nan, 0.0010791707371488259, 0.0, 0.0, 0.8681646650418041, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7122996003019028, 0.0, 0.0005801259615003622, 0.0, 0.0, nan, 0.02304960072549563, 0.0, 0.0, 0.0, 0.9348363685365858, 0.2596289024956107, 0.07122958643730157, 8.48216389425569e-05, 0.0005356047133214773, 0.0026059641588056346, 0.0]
## Model description
More information needed
## Intended uses & limitations
More information needed
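A minimal inference sketch (assumptions: the `SegformerFeatureExtractor` / `SegformerForSemanticSegmentation` classes from `transformers`; the model id and image path below are placeholders):

```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

repo_id = "path-or-repo-id-of-this-model"  # placeholder, replace with this repository's id
feature_extractor = SegformerFeatureExtractor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)

image = Image.open("sidewalk_scene.jpg")  # placeholder input image
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # shape (1, num_labels, H/4, W/4)
predicted = logits.argmax(dim=1)      # per-pixel class ids
print(predicted.shape)
```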
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.05
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 3.0624 | 0.03 | 10 | 3.1628 | 0.0726 | 0.1219 | 0.5758 | [nan, 0.0878087898079964, 0.611982872765419, 0.0001999765816897758, 0.006930751650791711, 0.0208104329339671, 0.0, 0.0010631316774049914, 0.0, 0.0, 0.4839157481183621, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.39292052415275885, 0.0, 0.0003268797082673576, 0.0011424188270622699, 0.0, 0.0, 0.004317032040472175, 3.142508260307427e-05, 0.0, 0.0, 0.5537894233680722, 0.28184052017073197, 0.015966383939961543, 0.0002995587926924772, 0.0005713078253519804, 0.0035316933149879015, 0.0] | [nan, 0.09656561651317118, 0.9239613003877697, 0.00021265611687132485, 0.007163978434475801, 0.0222089828684614, nan, 0.0010774805715464, 0.0, 0.0, 0.8583517795809614, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.705533848895072, 0.0, 0.00033222625115695, 0.0011495555325644448, 0.0, nan, 0.008061062548807214, 3.244014792707455e-05, 0.0, 0.0, 0.8715627360179777, 0.3828074002074446, 0.01597238073499201, 0.0003298619292210546, 0.0011388100215281895, 0.003805890022240969, 0.0] |
| 2.6259 | 0.05 | 20 | 2.9327 | 0.0763 | 0.1260 | 0.5923 | [nan, 0.15598158400203022, 0.6233750625153907, 0.0037560777123078824, 0.026995519273962765, 0.027599075064035524, 0.0, 0.0010671752114502803, 0.0, 0.0, 0.503652156236298, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.42226922942999406, 0.0, 0.0005751844669974061, 0.0, 0.0, 0.0, 0.015053303500921295, 0.0, 0.0, 0.0, 0.5380260834627074, 0.2004924888392474, 0.07113330974397604, 7.792680075848753e-05, 0.000328515111695138, 0.0025085129486024, 0.0] | [nan, 0.17282441039529764, 0.9228726118961177, 0.00408103876916878, 0.028255152590055656, 0.029544523907019265, nan, 0.0010791707371488259, 0.0, 0.0, 0.8681646650418041, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7122996003019028, 0.0, 0.0005801259615003622, 0.0, 0.0, nan, 0.02304960072549563, 0.0, 0.0, 0.0, 0.9348363685365858, 0.2596289024956107, 0.07122958643730157, 8.48216389425569e-05, 0.0005356047133214773, 0.0026059641588056346, 0.0] |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CurtisBowser/DialoGPT-medium-sora-two | [
"pytorch",
"conversational"
] | conversational | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-11T18:39:10Z | ---
license: gpl-3.0
tags:
- fastai
---
# Amazing!
Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (template below and [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using the 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join our fastai community on the Hugging Face Discord!
Greetings fellow fastlearner 🤝!
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
CurtisBowser/DialoGPT-medium-sora | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3953
## Model description
More information needed
## Intended uses & limitations
More information needed
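A minimal sketch using the tokenizer and model directly (assumptions: T5's `summarize:` prefix and the generation settings shown; the model id below is a placeholder for this repository):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "path-or-repo-id-of-this-model"  # placeholder, replace with this repository's id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

text = "summarize: " + "Long news article text goes here ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```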
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8641 | 0.04 | 500 | 2.6202 |
| 2.7466 | 0.08 | 1000 | 2.5660 |
| 2.8767 | 0.12 | 1500 | 2.5319 |
| 2.7099 | 0.16 | 2000 | 2.5107 |
| 2.7752 | 0.2 | 2500 | 2.4922 |
| 2.6037 | 0.24 | 3000 | 2.4800 |
| 2.8236 | 0.27 | 3500 | 2.4677 |
| 2.7089 | 0.31 | 4000 | 2.4581 |
| 2.7299 | 0.35 | 4500 | 2.4498 |
| 2.7498 | 0.39 | 5000 | 2.4420 |
| 2.6186 | 0.43 | 5500 | 2.4346 |
| 2.7817 | 0.47 | 6000 | 2.4288 |
| 2.5559 | 0.51 | 6500 | 2.4239 |
| 2.6725 | 0.55 | 7000 | 2.4186 |
| 2.6316 | 0.59 | 7500 | 2.4149 |
| 2.5561 | 0.63 | 8000 | 2.4115 |
| 2.5708 | 0.67 | 8500 | 2.4097 |
| 2.5861 | 0.71 | 9000 | 2.4052 |
| 2.6363 | 0.74 | 9500 | 2.4024 |
| 2.7435 | 0.78 | 10000 | 2.4003 |
| 2.7258 | 0.82 | 10500 | 2.3992 |
| 2.6113 | 0.86 | 11000 | 2.3983 |
| 2.6006 | 0.9 | 11500 | 2.3972 |
| 2.5684 | 0.94 | 12000 | 2.3960 |
| 2.6181 | 0.98 | 12500 | 2.3953 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Czapla/Rick | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- fastai
---
# Amazing!
Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (template below and [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using the 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join our fastai community on the Hugging Face Discord!
Greetings fellow fastlearner 🤝!
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
D-Keqi/espnet_asr_train_asr_streaming_transformer_raw_en_bpe500_sp_valid.acc.ave | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: other
---
UFAL English to French Machine Translation Model based on the MarianMT model. |
D3vil/DialoGPT-smaall-harrypottery | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: wtfpl
---
MarianMT trained on the UFAL dataset: English to Spanish Machine Translation model. |
D3xter1922/electra-base-discriminator-finetuned-cola | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | {
"architectures": [
"ElectraForSequenceClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68 | null | ---
license: wtfpl
---
UFAL English to Romanian Machine Translation Model based on the MarianMT model. |
D4RL1NG/yes | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: "tr"
tags:
- sentiment
- twitter
- turkish
---
This Turkish Sentiment Analysis model is a fine-tuned checkpoint of pretrained [BERTurk model 128k uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) with [BounTi dataset](https://ieeexplore.ieee.org/document/9477814).
## Usage in Hugging Face Pipeline
```
from transformers import pipeline
bounti = pipeline("sentiment-analysis",model="akoksal/bounti")
print(bounti("Bu yemeği pek sevmedim"))
>> [{'label': 'negative', 'score': 0.8012508153915405}]
```
## Results
The scores of the finetuned model with BERTurk:
||Accuracy|Precision|Recall|F1|
|-------------|:---------:|:---------:|:------:|:-----:|
|Validation|0.745|0.706|0.730|0.715|
|Test|0.723|0.692|0.729|0.701|
## Dataset
You can find the dataset in [our Github repo](https://github.com/boun-tabi/BounTi-Turkish-Sentiment-Analysis) with the training, validation, and test splits.
Due to Twitter copyright, we cannot release the full text of the tweets. We share the tweet IDs, and the full text can be downloaded through official Twitter API.
| | Training | Validation | Test |
|----------|:--------:|:----------:|:----:|
| Positive | 1691 | 188 | 469 |
| Neutral | 3034 | 338 | 843 |
| Negative | 1008 | 113 | 280 |
| Total | 5733 | 639 | 1592 |
## Citation
You can cite the following paper if you use our work:
```
@INPROCEEDINGS{BounTi,
author={Köksal, Abdullatif and Özgür, Arzucan},
booktitle={2021 29th Signal Processing and Communications Applications Conference (SIU)},
title={Twitter Dataset and Evaluation of Transformers for Turkish Sentiment Analysis},
year={2021},
volume={},
number={}
}
```
---
|
DARKVIP3R/DialoGPT-medium-Anakin | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
model-index:
- name: ft-pt-br-local-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-pt-br-local-2
This model is a fine-tuned version of [tonyalves/output](https://huggingface.co/tonyalves/output) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
DCU-NLP/bert-base-irish-cased-v1 | [
"pytorch",
"tf",
"bert",
"fill-mask",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,244 | null | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/MediumInformalToFormalLincoln3")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/MediumInformalToFormalLincoln3")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln35")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln35")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers that prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
(combines two sentences into one sentence) (probably will not work all that well)
```
initial: phone books used to be everywhere. they have been replaced by the internet.
combined: once ubiquitous, phone books have been supplanted by the internet.
***
initial:
```
```
what are the drawbacks of living near an airbnb?
β‘ noise
β‘ parking
β‘ traffic
β‘ security
β‘ strangers
***
```
Keywords to sentences or sentence. |
DCU-NLP/electra-base-irish-cased-generator-v1 | [
"pytorch",
"electra",
"fill-mask",
"ga",
"transformers",
"irish",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"ElectraForMaskedLM"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
DHBaek/xlm-roberta-large-korquad-mask | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- generated_from_trainer
model-index:
- name: ls-timit-wsj0-100percent-supervised-meta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ls-timit-wsj0-100percent-supervised-meta
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0531
- Wer: 0.0214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1618 | 4.57 | 1000 | 0.0500 | 0.0432 |
| 0.0489 | 9.13 | 2000 | 0.0535 | 0.0291 |
| 0.0306 | 13.7 | 3000 | 0.0478 | 0.0275 |
| 0.0231 | 18.26 | 4000 | 0.0531 | 0.0214 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
DJSammy/bert-base-danish-uncased_BotXO-ai | [
"pytorch",
"jax",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"transformers",
"bert",
"masked-lm",
"license:cc-by-4.0",
"fill-mask"
] | fill-mask | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/angrymemorys-oldandtoothless-sadboi666_-witheredstrings/1649717075201/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506323689456947207/xBvvxyQr_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1511852580216967169/b1Aiv2t3_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/378800000610482331/8808c2f408b97fe3646f2dca86441506_400x400.jpeg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">makeouthill & VacuumF & Jason Hendricks & Angry Memories</div>
<div style="text-align: center; font-size: 14px;">@angrymemorys-oldandtoothless-sadboi666_-witheredstrings</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from makeouthill & VacuumF & Jason Hendricks & Angry Memories.
| Data | makeouthill | VacuumF | Jason Hendricks | Angry Memories |
| --- | --- | --- | --- | --- |
| Tweets downloaded | 321 | 425 | 3250 | 3199 |
| Retweets | 34 | 0 | 0 | 941 |
| Short tweets | 49 | 31 | 0 | 1110 |
| Tweets kept | 238 | 394 | 3250 | 1148 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2nh2rd94/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @angrymemorys-oldandtoothless-sadboi666_-witheredstrings's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/me7rzksi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/me7rzksi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/angrymemorys-oldandtoothless-sadboi666_-witheredstrings')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DJStomp/TestingSalvoNET | [
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: bsd-3-clause
---
# CodeGen (CodeGen-NL 2B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-NL 2B** in the paper, where "NL" means it is pre-trained on the Pile and "2B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-NL 2B) was pre-trained on [the Pile](https://github.com/EleutherAI/the-pile), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai/). Parts of the dataset include code data.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-nl")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-nl")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
DKpro000/DialoGPT-medium-harrypotter | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: bsd-3-clause
---
# CodeGen (CodeGen-Multi 2B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Multi 2B** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 2B* and further pre-trained on a dataset of multiple programming languages, and "2B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Multi 2B) was firstly initialized with *CodeGen-NL 2B*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-multi")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
DKpro000/DialoGPT-small-harrypotter | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: bsd-3-clause
---
# CodeGen (CodeGen-Mono 2B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Mono 2B** in the paper, where "Mono" means the model is initialized with *CodeGen-Multi 2B* and further pre-trained on a Python programming language dataset, and "2B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Mono 2B) was firstly initialized with *CodeGen-Multi 2B*, and then pre-trained on BigPython dataset. The data consists of 71.7B tokens of Python programming language. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-mono")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
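Since the card notes that prompts work best as comment strings, a small follow-up sketch of that prompting style is shown below; it reuses the tokenizer and model loaded above, and the plain `generate` call mirrors the snippet in the card.
```python
# Hedged example of a comment-string prompt, per the "Intended Use and Limitations" section.
prompt = "# write a function that returns the n-th Fibonacci number\ndef fibonacci(n):"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```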
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
DSI/TweetBasedSA | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: cc-by-4.0
widget:
- text: This house was let out in tiny tenements and was inhabited by working people of all kinds--tailors, locksmiths, cooks, Germans of sorts, girls picking up a living as best they could, petty clerks, etc.
example_title: "Crime and Punishment"
- text: Quixote having got on his back and the duke mounted a fine horse, they placed the duchess in the middle and set out for the castle.
example_title: "Don Quixote"
- text: The noble carriage of this gentleman, for whom he believed himself to be engaged, had won Planchet - that was the name of the Picard.
example_title: "The Three Musketeers"
---
### Description
A `roberta-base` model that has been fine-tuned for token classification on the [LitBank](https://github.com/dbamman/litbank) dataset.
### Intended Use
This model is ready to be used for entity recognition. It is capable of tagging the 6 entity types from [ACE 2005](https://www.ldc.upenn.edu/sites/www.ldc.upenn.edu/files/english-entities-guidelines-v6.6.pdf)
- Person (PER)
- Organization (ORG)
- Geo-political entity (GPE)
- Location (LOC)
- Vehicle (VEH)
- Facility (FAC)
Due to the fine-tuning domain, it is expected to work best with literary sentences.
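The card stops short of a usage snippet; the sketch below shows how a token-classification checkpoint of this kind could be called through the `transformers` pipeline. The repository name is a hypothetical placeholder, since the card does not state one.
```python
from transformers import pipeline

# "your-username/litbank-roberta-base-ner" is a hypothetical placeholder;
# substitute the actual repository name of this checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/litbank-roberta-base-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

sentence = ("Quixote having got on his back and the duke mounted a fine horse, "
            "they placed the duchess in the middle and set out for the castle.")
for entity in ner(sentence):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```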
|
DSI/human-directed-sentiment | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
tags:
- fastai
---
# Amazing!
Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (template below and [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join our fastai community on the Hugging Face Discord!
Greetings fellow fastlearner 🤝!
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
DTAI-KULeuven/mbert-corona-tweets-belgium-topics | [
"pytorch",
"jax",
"bert",
"text-classification",
"multilingual",
"nl",
"fr",
"en",
"arxiv:2104.09947",
"transformers",
"Dutch",
"French",
"English",
"Tweets",
"Topic classification"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 167 | null | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: ernie-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.9156566905763047
- name: F1
type: f1
value: 0.8860522622468757
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ernie-finetuned-qqp
This model is a fine-tuned version of [nghuyong/ernie-2.0-en](https://huggingface.co/nghuyong/ernie-2.0-en) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4381
- Accuracy: 0.9157
- F1: 0.8861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|
| 0.2522 | 1.0 | 22741 | 0.2505 | 0.8997 | 0.8633 |
| 0.1903 | 2.0 | 45482 | 0.2645 | 0.9071 | 0.8761 |
| 0.1599 | 3.0 | 68223 | 0.2986 | 0.9115 | 0.8816 |
| 0.1214 | 4.0 | 90964 | 0.3640 | 0.9133 | 0.8828 |
| 0.0809 | 5.0 | 113705 | 0.4381 | 0.9157 | 0.8861 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
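The auto-generated card above omits a usage example; the sketch below shows how a duplicate-question (QQP) classifier fine-tuned this way might be queried. The checkpoint name is inferred from the card title and should be treated as an assumption, as is the convention that index 1 means "duplicate".
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Checkpoint name inferred from the card title; adjust to the actual repository id.
checkpoint = "ernie-finetuned-qqp"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
# For GLUE QQP, index 1 conventionally corresponds to "duplicate".
print(probs)
```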
|
DTAI-KULeuven/robbertje-1-gb-non-shuffled | [
"pytorch",
"roberta",
"fill-mask",
"nl",
"dataset:oscar",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"license:mit",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 53 | null | ---
tags:
- conversational
---
# Han Solo DialoGPT Model |
alexandrainst/da-binary-emotion-classification-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,066 | null | **Model Description:**
This model is a ResNet18 trained in PyTorch to classify human face orientation (flipped or not flipped).
**Dataset:**
The model is pretrained on ImageNet and then fine-tuned on the LFWPeople dataset, a dataset of human faces. The dataset is labelled as follows:
* Flipped image -> label = 1
* Not flipped -> label = 0
**Accuracy:**
The validation accuracy of the model is 98%
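A minimal PyTorch sketch of how such a checkpoint could be rebuilt and queried is given below; the weights filename and the ImageNet-style preprocessing are assumptions, since the card does not specify them.
```python
import torch
from torchvision import models, transforms
from PIL import Image

# Rebuild the architecture: ResNet18 with a 2-way head (0 = not flipped, 1 = flipped).
model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, 2)
# "face_flip_resnet18.pth" is a hypothetical filename for the released weights.
model.load_state_dict(torch.load("face_flip_resnet18.pth", map_location="cpu"))
model.eval()

# ImageNet-style preprocessing is assumed here, matching the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("face.jpg")).unsqueeze(0)
with torch.no_grad():
    prediction = model(image).argmax(dim=1).item()
print("flipped" if prediction == 1 else "not flipped")
```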
|
alexandrainst/da-hatespeech-classification-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 866 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: mi-modelo-bacan-test
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8825396825396825
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mi-modelo-bacan-test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3318
- Accuracy: 0.8767
- F1: 0.8825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
alexandrainst/da-ner-base | [
"pytorch",
"tf",
"bert",
"token-classification",
"da",
"dataset:dane",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 78 | 2022-04-12T02:41:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9
- name: F1
type: f1
value: 0.8980758869010411
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3298
- Accuracy: 0.9
- F1: 0.8981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.2761 | 1.0 | 250 | 0.6036 | 0.814 | 0.7881 |
| 0.4081 | 2.0 | 500 | 0.3298 | 0.9 | 0.8981 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
DaisyMak/bert-finetuned-squad-accelerate-10epoch_transformerfrozen | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,907 | null | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPT2Neo1.3BPoints2")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPT2Neo1.3BPoints2")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers that prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence. |
DaisyMak/bert-finetuned-squad-transformerfrozen-testtoken | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-large-100h-lv60
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Librispeech (clean)
type: librispeech_asr
args: en
metrics:
- name: Test WER
type: wer
value: None
---
# Wav2Vec2-Large-100h-Lv60 + Self-Training
# This is a direct state_dict transfer from fairseq to Hugging Face; the weights are identical
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The large model was pretrained on Libri-Light (LV-60k) and fine-tuned on 100 hours of Librispeech, all 16kHz sampled speech audio. The model was trained with a [Self-Training objective](https://arxiv.org/abs/2010.11430). When using the model, make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
They show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("Splend1dchan/wav2vec2-large-100h-lv60-self")
model = Wav2Vec2ForCTC.from_pretrained("Splend1dchan/wav2vec2-large-100h-lv60-self")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **Splend1dchan/wav2vec2-large-100h-lv60-self** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("Splend1dchan/wav2vec2-large-100h-lv60-self").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("Splend1dchan/wav2vec2-large-100h-lv60-self")
def map_to_pred(batch):
    # the feature extractor expects 16kHz audio; sampling_rate is passed explicitly
    inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest")
    input_values = inputs.input_values.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # non-batched map: decode to a single transcription string per example
    batch["transcription"] = processor.batch_decode(predicted_ids)[0]
    return batch

result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
<!-- *Result (WER)*:
| "clean" | "other" |
|---|---|
| untested | untested | --> |
Daltcamalea01/Camaleaodalt | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-12T05:35:31Z | # PHS-BERT
We present and release [PHS-BERT](https://arxiv.org/abs/2204.04521), a transformer-based pretrained language model (PLM), to identify tasks related to public health surveillance (PHS) on social media. Compared with existing PLMs that are mainly evaluated on limited tasks, PHS-BERT achieved state-of-the-art performance on 25 tested datasets, showing that our PLM is robust and generalizable in common PHS tasks.
## Usage
Load the model via [Hugging Face's Transformers library](https://github.com/huggingface/transformers):
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("publichealthsurveillance/PHS-BERT")
model = AutoModel.from_pretrained("publichealthsurveillance/PHS-BERT")
```
## Training Procedure
### Pretraining
We followed the standard pretraining protocols of BERT and, instead of training from scratch, initialized PHS-BERT with weights from the uncased version of BERT.
PHS-BERT is trained on a corpus of health-related tweets that were crawled via the Twitter API. Focusing on the tasks related to PHS, keywords used to collect pretraining corpus are set to disease, symptom, vaccine, and mental health-related words in English. Retweet tags were deleted from the raw corpus, and URLs and usernames were replaced with HTTP-URL and @USER, respectively. All emoticons were replaced with their associated meanings.
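A rough sketch of this preprocessing is shown below; it approximates the described steps and is not the authors' exact script (the third-party `emoji` package is an assumption, used to map emoticons to their textual meanings).
```
import re
import emoji  # third-party package, used here only to illustrate demojizing

def preprocess_tweet(text: str) -> str:
    """Approximate the preprocessing described above (a sketch, not the exact pipeline)."""
    text = re.sub(r"\bRT\b", "", text)                  # drop retweet tags
    text = re.sub(r"https?://\S+", "HTTP-URL", text)    # mask URLs
    text = re.sub(r"@\w+", "@USER", text)               # mask usernames
    text = emoji.demojize(text, delimiters=(" ", " "))  # emojis -> textual meaning
    return " ".join(text.split())

print(preprocess_tweet("RT @someone: feeling anxious 😷 https://t.co/xyz"))
```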
Each input sequence is tokenized with a vocabulary of 50,265 tokens. Twitter posts are restricted to 200 characters, and during the training and evaluation phase, we used a batch size of 8. Distributed training was performed on a TPU v3-8.
### Fine-tuning
We used the embedding of the special [CLS] token from the last hidden layer as the final feature of the input text. We adopted a multilayer perceptron (MLP) head with the hyperbolic tangent activation function and used the Adam optimizer. The models are trained with a one-cycle policy at a maximum learning rate of 2e-05, with momentum cycled between 0.85 and 0.95.
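For concreteness, a hedged sketch of the classification head described here is given below; the MLP width and the number of labels are illustrative assumptions, as the card does not restate the exact dimensions.
```
import torch.nn as nn

class PHSClassificationHead(nn.Module):
    """[CLS] embedding of the last hidden layer -> tanh MLP -> task logits (sketch)."""
    def __init__(self, hidden_size=768, mlp_size=128, num_labels=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, mlp_size),
            nn.Tanh(),
            nn.Linear(mlp_size, num_labels),
        )

    def forward(self, last_hidden_state):
        cls_embedding = last_hidden_state[:, 0]  # [CLS] token
        return self.mlp(cls_embedding)

# Illustrative use with the encoder loaded in the Usage section:
# outputs = model(**tokenizer("I feel anxious about the outbreak", return_tensors="pt"))
# logits = PHSClassificationHead()(outputs.last_hidden_state)
```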
## Societal Impact
We train and release a PLM to accelerate the automatic identification of tasks related to PHS on social media. Our work aims to develop a new computational method for screening users in need of early intervention and is not intended to be used in clinical settings or as a diagnostic tool.
## BibTex entry and citation info
For more details, refer to the paper [Benchmarking for Public Health Surveillance tasks on Social Media with a Domain-Specific Pretrained Language Model](https://arxiv.org/abs/2204.04521).
```
@inproceedings{naseem-etal-2022-benchmarking,
title = "Benchmarking for Public Health Surveillance tasks on Social Media with a Domain-Specific Pretrained Language Model",
author = "Naseem, Usman and
Lee, Byoung Chan and
Khushi, Matloob and
Kim, Jinman and
Dunn, Adam",
booktitle = "Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.nlppower-1.3",
doi = "10.18653/v1/2022.nlppower-1.3",
pages = "22--31",
abstract = "A user-generated text on social media enables health workers to keep track of information, identify possible outbreaks, forecast disease trends, monitor emergency cases, and ascertain disease awareness and response to official health correspondence. This exchange of health information on social media has been regarded as an attempt to enhance public health surveillance (PHS). Despite its potential, the technology is still in its early stages and is not ready for widespread application. Advancements in pretrained language models (PLMs) have facilitated the development of several domain-specific PLMs and a variety of downstream applications. However, there are no PLMs for social media tasks involving PHS. We present and release PHS-BERT, a transformer-based PLM, to identify tasks related to public health surveillance on social media. We compared and benchmarked the performance of PHS-BERT on 25 datasets from different social medial platforms related to 7 different PHS tasks. Compared with existing PLMs that are mainly evaluated on limited tasks, PHS-BERT achieved state-of-the-art performance on all 25 tested datasets, showing that our PLM is robust and generalizable in the common PHS tasks. By making PHS-BERT available, we aim to facilitate the community to reduce the computational cost and introduce new baselines for future works across various PHS-related tasks.",
}
```
|
DanBot/TCRsynth | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2270
- Accuracy: 0.9245
- F1: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
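The training script itself is not part of this card; as a hedged sketch, the values above could be expressed with `TrainingArguments` roughly as follows (the output directory is a placeholder, and anything not listed is left at its default):

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```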
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8398 | 1.0 | 250 | 0.3276 | 0.9005 | 0.8966 |
| 0.2541 | 2.0 | 500 | 0.2270 | 0.9245 | 0.9249 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
|
DanL/scientific-challenges-and-directions | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:DanL/scientific-challenges-and-directions-dataset",
"arxiv:2108.13751",
"transformers",
"generated_from_trainer"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 134 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-500sample-gpt-neo-2ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-500sample-gpt-neo-2ep
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
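The training script is not included in this card; a hedged sketch of how these settings could be expressed with `TrainingArguments` is shown below (the output directory is a placeholder, and anything not listed is left at its default):

```python
from transformers import TrainingArguments

# Hedged reconstruction of the setup listed above: effective batch size
# 32 x 8 = 256 via gradient accumulation, cosine decay after 1000 warmup
# steps, and native AMP mixed precision.
training_args = TrainingArguments(
    output_dir="codeparrot-ds-500sample-gpt-neo-2ep",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=8,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed precision
)
```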
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.5248 | 0.19 | 1000 | 2.9757 |
| 2.5422 | 0.37 | 2000 | 2.4397 |
| 2.1642 | 0.56 | 3000 | 2.1880 |
| 1.9135 | 0.74 | 4000 | 1.9884 |
| 1.7236 | 0.93 | 5000 | 1.8470 |
| 1.5459 | 1.11 | 6000 | 1.7501 |
| 1.4363 | 1.3 | 7000 | 1.6761 |
| 1.3639 | 1.49 | 8000 | 1.6105 |
| 1.3046 | 1.67 | 9000 | 1.5667 |
| 1.273 | 1.86 | 10000 | 1.5483 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Danbi/distilgpt2-finetuned-wikitext2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlnet-base-cased-IUChatbot-ontologyDts-12April2022
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-IUChatbot-ontologyDts-12April2022
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 294 | 0.7861 |
| 1.2483 | 2.0 | 588 | 0.6727 |
| 1.2483 | 3.0 | 882 | 0.6500 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Danih1502/t5-small-finetuned-en-to-de | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-large-10min-lv60
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Librispeech (clean)
type: librispeech_asr
args: en
metrics:
- name: Test WER
type: wer
value: None
---
# Wav2Vec2-Large-10min-Lv60 + Self-Training
# This is a direct state_dict transfer from fairseq to Hugging Face; the weights are identical
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
This is the large model, pretrained on 53k hours of unlabeled audio (Libri-Light) and fine-tuned on a 10-minute labeled subset of Libri-Light/Librispeech, operating on 16kHz sampled speech audio. The model was trained with the [Self-Training objective](https://arxiv.org/abs/2010.11430). When using the model, make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
They show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files, the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("Splend1dchan/wav2vec2-large-10min-lv60-self")
model = Wav2Vec2ForCTC.from_pretrained("Splend1dchan/wav2vec2-large-10min-lv60-self")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
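The model expects 16kHz input. If your audio is stored at a different sampling rate, one way to resample it with the `datasets` library (a hedged sketch, not part of the original card) is:

```python
from datasets import load_dataset, Audio

# Hedged sketch: casting the audio column makes decoding resample to 16 kHz on the fly.
# (The dummy set below is already 16 kHz; the cast is shown for illustration.)
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
print(ds[0]["audio"]["sampling_rate"])  # 16000
```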
## Evaluation
This code snippet shows how to evaluate **Splend1dchan/wav2vec2-large-10min-lv60-self** on LibriSpeech's "clean" test data (the "other" split can be evaluated by swapping the configuration name).
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("Splend1dchan/wav2vec2-large-10min-lv60-self").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("Splend1dchan/wav2vec2-large-10min-lv60-self")
def map_to_pred(batch):
inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest")
input_values = inputs.input_values.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
<!-- *Result (WER)*:
| "clean" | "other" |
|---|---|
| untested | untested | --> |