modelId (string, 4-81 chars) | tags (sequence) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars) |
---|---|---|---|---|---|---|
Declan/CNN_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-04-21T03:26:00Z | ---
tags:
- generated_from_trainer
model-index:
- name: focus_sum_mT5_minshi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# focus_sum_mT5_minshi
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
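These settings map roughly onto `Seq2SeqTrainingArguments`; a minimal sketch, with the output directory as a placeholder (the Adam betas and epsilon listed above are the library defaults):
```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed above; "output_dir" is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="focus_sum_mT5_minshi",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,  # effective train batch size: 1 x 16 = 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=100,
)
```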
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.268 | 83.33 | 500 | 0.0930 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Declan/ChicagoTribune_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- conversational
---
# Technoblade DialoGPT Model |
Declan/ChicagoTribune_model_v7 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- en
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- neural-compressor
- PostTrainingStatic
datasets:
- glue
metrics:
- f1
model-index:
- name: roberta-base-mrpc-int8-static
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: F1
type: f1
value: 0.924693520140105
---
# INT8 roberta-base-mrpc
## Post-training static quantization
### PyTorch
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [roberta-base-mrpc](https://huggingface.co/Intel/roberta-base-mrpc).
The calibration dataloader is the train dataloader. The default calibration sampling size 100 isn't divisible exactly by batch size 8, so the real sampling size is 104.
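The figure follows from rounding the sampling size up to a whole number of batches; a quick check:
```python
import math

calibration_samples, batch_size = 100, 8
real_sampling_size = math.ceil(calibration_samples / batch_size) * batch_size
print(real_sampling_size)  # 13 batches * 8 = 104
```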
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9177|0.9138|
| **Model size (MB)** |127|499|
#### Load with Intel® Neural Compressor:
```python
from optimum.intel.neural_compressor import IncQuantizedModelForSequenceClassification
model_id = "Intel/roberta-base-mrpc-int8-static"
int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(model_id)
```
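A brief usage sketch, assuming the tokenizer of the fp32 checkpoint [Intel/roberta-base-mrpc](https://huggingface.co/Intel/roberta-base-mrpc) and the usual `transformers` forward signature; the sentence pair below is only an illustration:
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Intel/roberta-base-mrpc")

# MRPC is a sentence-pair paraphrase task: label 1 = equivalent, 0 = not equivalent.
inputs = tokenizer(
    "The company said it expects higher sales this year.",
    "The firm stated that it anticipates higher sales this year.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = int8_model(**inputs).logits
print(logits.argmax(dim=-1).item())
```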
### ONNX
This is an INT8 ONNX model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [roberta-base-mrpc](https://huggingface.co/Intel/roberta-base-mrpc).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9073|0.9138|
| **Model size (MB)** |243|476|
#### Load ONNX model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained('Intel/roberta-base-mrpc-int8-static')
``` |
Declan/Politico_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-04-21T07:30:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1079
- Precision: 0.8408
- Recall: 0.8686
- F1: 0.8545
- Accuracy: 0.9638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 453 | 0.1322 | 0.7759 | 0.8370 | 0.8053 | 0.9498 |
| 0.246 | 2.0 | 906 | 0.1115 | 0.8284 | 0.8616 | 0.8446 | 0.9611 |
| 0.1012 | 3.0 | 1359 | 0.1079 | 0.8408 | 0.8686 | 0.8545 | 0.9638 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Declan/Politico_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-04-21T08:04:24Z | ---
language: en
---
# CTRL44 Classification model
This is a pretrained version of the 4-class simplification operation classifier presented in the NAACL 2022 paper "Controllable Sentence Simplification via Operation Classification". It was trained on the IRSD classification dataset.
Predictions from this model can be used for input into the [simplification model](https://huggingface.co/liamcripwell/ctrl44-simp) to reproduce pipeline results seen in the paper.
## How to use
Here is how to use this model in PyTorch:
```python
import torch
from transformers import RobertaForSequenceClassification, AutoTokenizer
model = RobertaForSequenceClassification.from_pretrained("liamcripwell/ctrl44-clf")
tokenizer = AutoTokenizer.from_pretrained("liamcripwell/ctrl44-clf")
text = "Barack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
predicted_class_name = model.config.id2label[predicted_class_id]
``` |
Declan/Reuters_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: en
---
# CTRL44 Simplification model
This is a pretrained version of the controllable simplification model presented in the NAACL 2022 paper "Controllable Sentence Simplification via Operation Classification". It was trained on the IRSD simplification dataset.
A control token is expected at the start of input sequences to dictate which simplification operation should be performed. This can either be done manually or with an operation classifier like [this one](https://huggingface.co/liamcripwell/ctrl44-clf).
Possible control tokens are: "\<ident\>", "\<para\>", "\<ssplit\>", and "\<dsplit\>".
## How to use
Here is how to use this model in PyTorch:
```python
from transformers import BartForConditionalGeneration, AutoTokenizer
model = BartForConditionalGeneration.from_pretrained("liamcripwell/ctrl44-simp")
tokenizer = AutoTokenizer.from_pretrained("liamcripwell/ctrl44-simp")
text = "<para> Barack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=10, max_length=128)
# Decode the generated simplification back into text.
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
``` |
Declan/WallStreetJournal_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-gl-jupyter9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-gl-jupyter9
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0970
- Wer: 0.0624
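A minimal inference sketch for a fine-tuned XLS-R CTC checkpoint, assuming 16 kHz mono audio and a placeholder repo id:
```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

repo_id = "<namespace>/wav2vec2-large-xls-r-300m-gl-jupyter9"  # placeholder repo id
processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)

# Replace this silent dummy signal with real 16 kHz mono audio samples.
speech = np.zeros(16_000, dtype=np.float32)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```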
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6977 | 3.36 | 400 | 0.4273 | 0.4574 |
| 0.2282 | 6.72 | 800 | 0.1492 | 0.1723 |
| 0.0884 | 10.08 | 1200 | 0.1344 | 0.1336 |
| 0.0594 | 13.44 | 1600 | 0.1329 | 0.1238 |
| 0.0437 | 16.8 | 2000 | 0.1137 | 0.1153 |
| 0.0384 | 20.17 | 2400 | 0.1197 | 0.1033 |
| 0.0332 | 23.53 | 2800 | 0.1147 | 0.0980 |
| 0.0282 | 26.89 | 3200 | 0.1079 | 0.0917 |
| 0.0236 | 30.25 | 3600 | 0.1144 | 0.0922 |
| 0.0237 | 33.61 | 4000 | 0.1130 | 0.0880 |
| 0.019 | 36.97 | 4400 | 0.1035 | 0.0818 |
| 0.0164 | 40.33 | 4800 | 0.1045 | 0.0813 |
| 0.0146 | 43.69 | 5200 | 0.1037 | 0.0735 |
| 0.0111 | 47.06 | 5600 | 0.1085 | 0.0701 |
| 0.0093 | 50.42 | 6000 | 0.1039 | 0.0659 |
| 0.0084 | 53.78 | 6400 | 0.0970 | 0.0636 |
| 0.0073 | 57.14 | 6800 | 0.0970 | 0.0624 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
DeepChem/ChemBERTa-10M-MLM | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 90 | null | ---
language: ca
tags:
- summarization
widget:
- text: "La Universitat Politècnica de València (UPV), a través del projecte Atenea “plataforma de dones, art i tecnologia” i en col·laboració amb les companyies tecnològiques Metric Salad i Zetalab, ha digitalitzat i modelat en 3D per a la 35a edició del Festival Dansa València, que se celebra del 2 al 10 d'abril, la primera peça de dansa en un metaverso específic. La peça No és amor, dirigida per Lara Misó, forma part de la programació d'aquesta edició del Festival Dansa València i explora la figura geomètrica del cercle des de totes les seues perspectives: espacial, corporal i compositiva. No és amor està inspirada en el treball de l'artista japonesa Yayoi Kusama i mira de prop les diferents facetes d'una obsessió. Així dona cabuda a la insistència, la repetició, el trastorn, la hipnosi i l'alliberament. El procés de digitalització, materialitzat per Metric Salad i ZetaLab, ha sigut complex respecte a uns altres ja realitzats a causa de l'enorme desafiament que comporta el modelatge en 3D de cossos en moviment al ritme de la composició de l'obra. L'objectiu era generar una experiència el més realista possible i fidedigna de l'original perquè el resultat final fora un procés absolutament immersiu.Així, el metaverso està compost per figures modelades en 3D al costat de quatre projeccions digitalitzades en pantalles flotants amb les quals l'usuari podrà interactuar segons es vaja acostant, bé mitjançant els comandaments de l'ordinador, bé a través d'ulleres de realitat virtual. L'objectiu és que quan l'usuari s'acoste a cadascuna de les projeccions tinga la sensació d'una immersió quasi completa en fondre's amb el contingut audiovisual que li genere una experiència intimista i molt real."
---
# mT5 (base model), fine-tuned on the *Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA)* dataset for Catalan
The mT5 model was presented in [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua and Colin Raffel. The base version of the mT5 model is pre-trained on 101 languages, including English, Spanish, Italian and Catalan, among others.
# Model description
The mT5-base model has been fine-tuned for abstractive text summarization for Catalan.
# Training data
The mT5-base model has been fine-tuned on the *Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA)*, specifically on its Catalan articles. The Catalan subset contains 636,596 document-summary pairs of Catalan news articles.
The DACSA dataset can be requested at the following address: https://xarrador.dsic.upv.es/resources/dacsa
# Intended uses & limitations
The model can be used for text summarization, especially in news articles.
# How to use
You can use the summarization model with the [pipeline API](https://huggingface.co/transformers/main_classes/pipelines.html):
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="ELiRF/mt5-base-dacsa-ca")
ARTICLE = """La Universitat Politècnica de València (UPV), a través del
projecte Atenea “plataforma de dones, art i tecnologia” i en col·laboració amb
les companyies tecnològiques Metric Salad i Zetalab, ha digitalitzat i modelat
en 3D per a la 35a edició del Festival Dansa València, que se celebra del 2 al
10 d'abril, la primera peça de dansa en un metaverso específic. La peça No és
amor, dirigida per Lara Misó, forma part de la programació d'aquesta edició del
Festival Dansa València i explora la figura geomètrica del cercle des de totes
les seues perspectives: espacial, corporal i compositiva. No és amor està
inspirada en el treball de l'artista japonesa Yayoi Kusama i mira de prop les
diferents facetes d'una obsessió. Així dona cabuda a la insistència, la
repetició, el trastorn, la hipnosi i l'alliberament. El procés de
digitalització, materialitzat per Metric Salad i ZetaLab, ha sigut complex
respecte a uns altres ja realitzats a causa de l'enorme desafiament que
comporta el modelatge en 3D de cossos en moviment al ritme de la composició de
l'obra. L'objectiu era generar una experiència el més realista possible i
fidedigna de l'original perquè el resultat final fora un procés absolutament
immersiu.Així, el metaverso està compost per figures modelades en 3D al costat
de quatre projeccions digitalitzades en pantalles flotants amb les quals
l'usuari podrà interactuar segons es vaja acostant, bé mitjançant els
comandaments de l'ordinador, bé a través d'ulleres de realitat virtual.
L'objectiu és que quan l'usuari s'acoste a cadascuna de les projeccions tinga
la sensació d'una immersió quasi completa en fondre's amb el contingut
audiovisual que li genere una experiència intimista i molt real.
"""
print(summarizer(ARTICLE, truncation=True))
>>>[{'summary_text': "La Universitat Politècnica de València ha digitalitzat i modelat en 3D la primera peça de dansa en un metaverso específic."}]
```
### BibTeX entry
```bibtex
@inproceedings{segarra-soriano-etal-2022-dacsa,
title = "{DACSA}: A large-scale Dataset for Automatic summarization of {C}atalan and {S}panish newspaper Articles",
author = "Segarra Soriano, Encarnaci{\'o}n and
Ahuir, Vicent and
Hurtado, Llu{\'\i}s-F. and
Gonz{\'a}lez, Jos{\'e}",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.434",
pages = "5931--5943",
abstract = "The application of supervised methods to automatic summarization requires the availability of adequate corpora consisting of a set of document-summary pairs. As in most Natural Language Processing tasks, the great majority of available datasets for summarization are in English, making it difficult to develop automatic summarization models for other languages. Although Spanish is gradually forming part of some recent summarization corpora, it is not the same for minority languages such as Catalan.In this work, we describe the construction of a corpus of Catalan and Spanish newspapers, the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA) corpus. It is a high-quality large-scale corpus that can be used to train summarization models for Catalan and Spanish.We have carried out an analysis of the corpus, both in terms of the style of the summaries and the difficulty of the summarization task. In particular, we have used a set of well-known metrics in the summarization field in order to characterize the corpus. Additionally, for benchmarking purposes, we have evaluated the performances of some extractive and abstractive summarization systems on the DACSA corpus.",
}
``` |
DeepChem/ChemBERTa-5M-MLM | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
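A rough `tf.keras` equivalent of this optimizer configuration (the `decay: 0.0` entry matches the default behaviour and is omitted here):
```python
import tensorflow as tf

# Sketch of the logged optimizer settings.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-4,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```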
## Training Metrics
| Epochs | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy |
|--- |--- |--- |--- |--- |
| 1| 0.0| 0.901| 0.0| 0.9|
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
DeepPavlov/bert-base-multilingual-cased-sentence | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"multilingual",
"arxiv:1704.05426",
"arxiv:1809.05053",
"arxiv:1908.10084",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 140 | null | ---
language:
- lt
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-lithuanian-fairseq
results: []
---
# wav2vec2-common_voice-lithuanian-fairseq
|
DeepPavlov/distilrubert-base-cased-conversational | [
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"transformers"
] | null | {
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6,324 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-ar-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-ar-3
This model is a fine-tuned version of [MeshalAlamr/wav2vec2-xls-r-300m-ar-2](https://huggingface.co/MeshalAlamr/wav2vec2-xls-r-300m-ar-2) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5567
- Wer: 0.3115
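The WER figure can be recomputed with the `evaluate` library once transcriptions are available; a minimal sketch with hypothetical placeholder strings:
```python
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical placeholder strings; use model transcriptions vs. reference transcripts.
predictions = ["the quick brown fox"]
references = ["the quick brown fox jumps"]
print(wer_metric.compute(predictions=predictions, references=references))
```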
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1654 | 1.18 | 400 | 0.5815 | 0.4237 |
| 0.3412 | 2.35 | 800 | 0.5534 | 0.4479 |
| 0.4661 | 1.77 | 1200 | 0.6339 | 0.4915 |
| 0.441 | 2.36 | 1600 | 0.6435 | 0.5016 |
| 0.3273 | 5.88 | 2000 | 0.5338 | 0.4361 |
| 0.3099 | 7.06 | 2400 | 0.5570 | 0.4303 |
| 0.2833 | 8.24 | 2800 | 0.5731 | 0.4427 |
| 0.2714 | 9.41 | 3200 | 0.5551 | 0.4212 |
| 0.2598 | 10.59 | 3600 | 0.5757 | 0.4214 |
| 0.2458 | 11.76 | 4000 | 0.5269 | 0.4065 |
| 0.2316 | 12.94 | 4400 | 0.5469 | 0.4053 |
| 0.219 | 14.12 | 4800 | 0.5539 | 0.3912 |
| 0.2022 | 15.29 | 5200 | 0.5773 | 0.3887 |
| 0.1771 | 16.47 | 5600 | 0.5374 | 0.3623 |
| 0.176 | 17.65 | 6000 | 0.5545 | 0.3763 |
| 0.1645 | 18.82 | 6400 | 0.5332 | 0.3580 |
| 0.1501 | 20.0 | 6800 | 0.5496 | 0.3614 |
| 0.1372 | 21.18 | 7200 | 0.5716 | 0.3608 |
| 0.1325 | 22.35 | 7600 | 0.5476 | 0.3475 |
| 0.1233 | 23.53 | 8000 | 0.5657 | 0.3412 |
| 0.1148 | 24.71 | 8400 | 0.5399 | 0.3324 |
| 0.1058 | 25.88 | 8800 | 0.5678 | 0.3323 |
| 0.1004 | 27.06 | 9200 | 0.5648 | 0.3252 |
| 0.0953 | 28.24 | 9600 | 0.5594 | 0.3159 |
| 0.0875 | 29.41 | 10000 | 0.5567 | 0.3115 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.11.0
- Datasets 1.18.3
- Tokenizers 0.10.3
|
DeepPavlov/distilrubert-tiny-cased-conversational | [
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"transformers"
] | null | {
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5,993 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0917
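Since the evaluation loss is a mean cross-entropy over tokens, the corresponding perplexity is `exp(loss)`; a quick check:
```python
import math

eval_loss = 3.0917
print(round(math.exp(eval_loss), 2))  # ~22.0
```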
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 3.3324 |
| No log | 2.0 | 18 | 3.1066 |
| No log | 3.0 | 27 | 3.2930 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Deniskin/essays_small_2000 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-21T13:56:24Z | ---
language:
- en
license: apache-2.0
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingDynamic
- onnx
datasets:
- glue
metrics:
- f1
model-index:
- name: bart-large-mrpc-int8-dynamic
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: F1
type: f1
value: 0.9050847457627118
---
# INT8 bart-large-mrpc
## Post-training dynamic quantization
### PyTorch
This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the use of [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [bart-large-mrpc](https://huggingface.co/Intel/bart-large-mrpc).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9051|0.9120|
| **Model size (MB)** |547|1556.48|
#### Load with optimum:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
'Intel/bart-large-mrpc-int8-dynamic',
)
```
### ONNX
This is an INT8 ONNX model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [bart-large-mrpc](https://huggingface.co/Intel/bart-large-mrpc).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9134|0.9120|
| **Model size (MB)** |395|1555|
#### Load ONNX model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained('Intel/bart-large-mrpc-int8-dynamic')
``` |
Deniskin/essays_small_2000i | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-21T14:01:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9251264359849074
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.925
- F1: 0.9251
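A minimal inference sketch with the `pipeline` API, assuming the checkpoint is published under a placeholder repo id:
```python
from transformers import pipeline

# Placeholder repo id; substitute the actual location of this fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="<namespace>/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
```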
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8002 | 1.0 | 250 | 0.3094 | 0.9065 | 0.9038 |
| 0.2409 | 2.0 | 500 | 0.2183 | 0.925 | 0.9251 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Denny29/DialoGPT-medium-asunayuuki | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language:
- en
license: mit
tags:
- text-classfication
- int8
- neural-compressor
- Intel® Neural Compressor
- PostTrainingStatic
- onnx
datasets:
- glue
metrics:
- f1
model-index:
- name: xlnet-base-cased-mrpc-int8-static
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: F1
type: f1
value: 0.8892794376098417
---
# INT8 xlnet-base-cased-mrpc
## Post-training static quantization
### PyTorch
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [xlnet-base-cased-mrpc](https://huggingface.co/Intel/xlnet-base-cased-mrpc).
The calibration dataloader is the train dataloader. The default calibration sampling size 300 isn't divisible exactly by batch size 8, so the real sampling size is 304.
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.8893|0.8897|
| **Model size (MB)** |215|448|
#### Load with Intel® Neural Compressor:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
"Intel/xlnet-base-cased-mrpc-int8-static",
)
```
### ONNX
This is an INT8 ONNX model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [xlnet-base-cased-mrpc](https://huggingface.co/Intel/xlnet-base-cased-mrpc).
The calibration dataloader is the eval dataloader. The default calibration sampling size 100 isn't divisible exactly by batch size 8. So the real sampling size is 104.
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.8935|0.8986|
| **Model size (MB)** |286|448|
#### Load ONNX model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained('Intel/xlnet-base-cased-mrpc-int8-static')
```
|
DeskDown/MarianMixFT_en-fil | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: electra-small-discriminator-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8529411764705882
- name: F1
type: f1
value: 0.8983050847457628
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-discriminator-mrpc
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3909
- Accuracy: 0.8529
- F1: 0.8983
- Combined Score: 0.8756
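For reference, the Combined Score appears to be the unweighted mean of Accuracy and F1: (0.8529 + 0.8983) / 2 ≈ 0.8756.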
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
DeskDown/MarianMixFT_en-hi | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- en
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
- onnx
datasets:
- glue
metrics:
- f1
model-index:
- name: electra-small-discriminator-mrpc-int8-static
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: F1
type: f1
value: 0.900709219858156
---
# INT8 electra-small-discriminator-mrpc
## Post-training static quantization
### PyTorch
This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the use of [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [electra-small-discriminator-mrpc](https://huggingface.co/Intel/electra-small-discriminator-mrpc).
The calibration dataloader is the train dataloader. The default calibration sampling size 300 isn't divisible exactly by batch size 8, so
the real sampling size is 304.
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9007|0.8983|
| **Model size (MB)** |14|51.8|
#### Load with optimum:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
'Intel/electra-small-discriminator-mrpc-int8-static',
)
```
### ONNX
This is an INT8 ONNX model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [electra-small-discriminator-mrpc](https://huggingface.co/Intel/electra-small-discriminator-mrpc).
The calibration dataloader is the eval dataloader. The default calibration sampling size 100 isn't divisible exactly by batch size 8. So the real sampling size is 104.
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.8993|0.8983|
| **Model size (MB)** |32|52|
#### Load ONNX model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained('Intel/electra-small-discriminator-mrpc-int8-static')
```
|
DeskDown/MarianMixFT_en-ja | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | # LuxemBERT
LuxemBERT is a BERT model for the Luxembourgish language.
It was trained using 6.1 million Luxembourgish sentences from various sources including the Luxembourgish Wikipedia, the Leipzig Corpora Collection and rtl.lu.
In addition, as a means of data augmentation, we partially translated 6.1 million sentences from the German Wikipedia into Luxembourgish. This gave us a dataset of 12.2 million sentences, which we used to train our LuxemBERT model.
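A minimal fill-mask sketch, assuming a placeholder repo id for wherever the LuxemBERT checkpoint is hosted:
```python
from transformers import pipeline

# Placeholder repo id; replace with the published LuxemBERT checkpoint.
unmasker = pipeline("fill-mask", model="<namespace>/LuxemBERT")
masked_sentence = f"Lëtzebuerg ass e {unmasker.tokenizer.mask_token} an Europa."
print(unmasker(masked_sentence))
```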
If you would like to use our model, please cite our paper:
```
@inproceedings{lothritz-etal-2022-luxembert,
title = "{L}uxem{BERT}: Simple and Practical Data Augmentation in Language Model Pre-Training for {L}uxembourgish",
author = "Lothritz, Cedric and
Lebichot, Bertrand and
Allix, Kevin and
Veiber, Lisa and
Bissyande, Tegawende and
Klein, Jacques and
Boytsov, Andrey and
Lefebvre, Cl{\'e}ment and
Goujon, Anne",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.543",
pages = "5080--5089",
abstract = "Pre-trained Language Models such as BERT have become ubiquitous in NLP where they have achieved state-of-the-art performance in most NLP tasks. While these models are readily available for English and other widely spoken languages, they remain scarce for low-resource languages such as Luxembourgish. In this paper, we present LuxemBERT, a BERT model for the Luxembourgish language that we create using the following approach: we augment the pre-training dataset by considering text data from a closely related language that we partially translate using a simple and straightforward method. We are then able to produce the LuxemBERT model, which we show to be effective for various NLP tasks: it outperforms a simple baseline built with the available Luxembourgish text data as well the multilingual mBERT model, which is currently the only option for transformer-based language models in Luxembourgish. Furthermore, we present datasets for various downstream NLP tasks that we created for this study and will make available to researchers on request.",
}
``` |
DeskDown/MarianMixFT_en-ms | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- en
tags:
- update summarization
- longformer
- transformers
- BART
- PyTorch
- Tensorboard
- led
metrics:
- edit distance
- ROUGE
- BertScore
---
# Update Summarization with BART Large and Longformer Encoder Decoder
## Model description
This is a Transformer-based model for long-document generative sequence-to-sequence tasks.
It is based on [BART Large](https://huggingface.co/transformers/model_doc/bart.html) with the [Longformer Encoder Decoder](https://huggingface.co/transformers/model_doc/led.html) to allow for longer inputs.
The output is a single edit operation consisting of an action (deletion or insertion), the index at which the edit should happen (expressed in words, e.g. "zero"), and the actual text to delete or insert.
## Intended uses & limitations
#### How to use
Format your data so that each new article or piece of evidence to add has an `<EV>` token in front, with each title prefixed by `<t>` and each abstract prefixed by `<abs>`. The original summary should also be in the same format. The list of articles and the original summary can be concatenated in any order, as long as they carry the correct separator tokens.
```python
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration
tokenizer = LEDTokenizer.from_pretrained("hyesunyun/update-summarization-led-edit-at-a-time")
model = LEDForConditionalGeneration.from_pretrained("hyesunyun/update-summarization-led-edit-at-a-time")
input = "<EV> <t> Hypoglycemic effect of bitter melon compared with metformin in newly diagnosed type 2 diabetes patients. <abs> ETHNOPHARMACOLOGICAL RELEVANCE: Bitter melon (Momordica charantia L.) has been widely used as an traditional medicine treatment for diabetic patients in Asia. In vitro and animal studies suggested its hypoglycemic activity, but limited human studies are available to support its use. AIM OF STUDY: This study was conducted to assess the efficacy and safety of three doses of bitter melon compared with metformin. MATERIALS AND METHODS: This is a 4-week, multicenter, randomized, double-blind, active-control trial. Patients were randomized into 4 groups to receive bitter melon 500 mg/day, 1,000 mg/day, and 2,000 mg/day or metformin 1,000 mg/day. All patients were followed for 4 weeks. RESULTS: There was a significant decline in fructosamine at week 4 of the metformin group (-16.8; 95% CI, -31.2, -2.4 mumol/L) and the bitter melon 2,000 mg/day group (-10.2; 95% CI, -19.1, -1.3 mumol/L). Bitter melon 500 and 1,000 mg/day did not significantly decrease fructosamine levels (-3.5; 95% CI -11.7, 4.6 and -10.3; 95% CI -22.7, 2.2 mumol/L, respectively). CONCLUSIONS: Bitter melon had a modest hypoglycemic effect and significantly reduced fructosamine levels from baseline among patients with type 2 diabetes who received 2,000 mg/day. However, the hypoglycemic effect of bitter melon was less than metformin 1,000 mg/day. <EV> <t> Momordica charantia for type 2 diabetes mellitus. <abs> There is insufficient evidence to recommend momordica charantia for type 2 diabetes mellitus. Further studies are therefore required to address the issues of standardization and the quality control of preparations. For medical nutritional therapy, further observational trials evaluating the effects of momordica charantia are needed before RCTs are established to guide any recommendations in clinical practice."
inputs_dict = tokenizer(input, padding="max_length", max_length=10240, return_tensors="pt", truncation=True)
input_ids = inputs_dict.input_ids
attention_mask = inputs_dict.attention_mask
global_attention_mask = torch.zeros_like(attention_mask)
# put global attention on <s> token
global_attention_mask[:, 0] = 1
predicted_summary_ids = model.generate(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask)
print(tokenizer.batch_decode(predicted_summary_ids, skip_special_tokens=False))
```
The expected output should be something like `<s> insertion <edit_pad> zero <edit_pad> bla bla bla some text </s>`
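Assuming the three fields are simply separated by the `<edit_pad>` token, a small parser sketch:
```python
def parse_edit(generated: str):
    """Split '<s> ACTION <edit_pad> INDEX <edit_pad> TEXT </s>' into its three fields."""
    body = generated.replace("<s>", "").replace("</s>", "").strip()
    action, index, text = (part.strip() for part in body.split("<edit_pad>"))
    return action, index, text

print(parse_edit("<s> insertion <edit_pad> zero <edit_pad> bla bla bla some text </s>"))
# -> ('insertion', 'zero', 'bla bla bla some text')
```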
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
The pre-trained [LED model](https://huggingface.co/transformers/model_doc/led.html) was fine-tuned on the dataset found in [this GitHub repo](https://github.com/hyesunyun/update_summarization_data).
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2022}
}
``` |
DeskDown/MarianMixFT_en-my | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
Despin89/test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- summarization
- mT5
language:
- am
- ar
- az
- bn
- my
- zh
- en
- fr
- gu
- ha
- hi
- ig
- id
- ja
- rn
- ko
- ky
- mr
- ne
- om
- ps
- fa
- pcm
- pt
- pa
- ru
- gd
- sr
- si
- so
- es
- sw
- ta
- te
- th
- ti
- tr
- uk
- ur
- uz
- vi
- cy
- yo
licenses:
- cc-by-nc-sa-4.0
widget:
- text: "Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs \"spill over into misinformation about vaccines in general\". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. \"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO,\" the post said, referring to the World Health Organization."
---
# mT5-m2o-russian-CrossSum
This repository contains the many-to-one (m2o) mT5 checkpoint fine-tuned on all cross-lingual pairs of the [CrossSum](https://huggingface.co/datasets/csebuetnlp/CrossSum) dataset where the target summary was in **Russian**, i.e. this model tries to **summarize text written in any language in Russian**. For fine-tuning details and scripts, see the [paper](https://arxiv.org/abs/2112.08804) and the [official repository](https://github.com/csebuetnlp/CrossSum).
## Using this model in `transformers` (tested on 4.11.0.dev0)
```python
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip()))
article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization."""
model_name = "csebuetnlp/mT5_m2o_russian_crossSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
input_ids = tokenizer(
[WHITESPACE_HANDLER(article_text)],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=84,
no_repeat_ngram_size=2,
num_beams=4
)[0]
summary = tokenizer.decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(summary)
```
## Citation
If you use this model, please cite the following paper:
```
@article{hasan2021crosssum,
author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar},
title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs},
journal = {CoRR},
volume = {abs/2112.08804},
year = {2021},
url = {https://arxiv.org/abs/2112.08804},
eprinttype = {arXiv},
eprint = {2112.08804}
}
``` |
Dilmk2/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | 2022-04-21T17:42:08Z |
---
tags:
- spacy
- token-classification
language:
- en
widget:
- text: "Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the cause of the coronavirus disease-19 (COVID-19) pandemic, was identified in late 2019 and caused >5 million deaths by February 2022. To date, targeted antiviral interventions against COVID-19 are limited. The spectrum of SARS-CoV-2 infection ranges from asymptomatic to fatal disease. However, the reasons for varying outcomes to SARS-CoV-2 infection are yet to be elucidated. Here we show that an endogenously activated interferon lambda (IFNλ1) pathway leads to resistance against SARS-CoV-2 infection."
- text: "The NHS is offering antibody and antiviral treatments to people with coronavirus (COVID-19) who are at highest risk of becoming seriously ill."
model-index:
- name: en_covid19_ner
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9139786332
- name: NER Recall
type: recall
value: 0.9362022309
- name: NER F Score
type: f_score
value: 0.9249569618
---
# COVID-19 Bio Annotations
The dataset was taken from https://github.com/davidcampos/covid19-corpus
### Dataset
The dataset was then split into several datasets, each one representing one entity: namely Disorder, Species, Chemical or Drug, Gene and Protein, Enzyme, Anatomy, Biological Process, Molecular Function, Cellular Component, Pathway and microRNA. Moreover, another dataset was also created with all of the aforementioned entities that are non-overlapping in nature.
### Other Dataset Formats
The datasets are available in two formats: IOB and SpaCy's JSONL format.
- IOB: https://github.com/tsantosh7/COVID-19-Named-Entity-Recognition/tree/master/Datasets/BIO
- SpaCy JSONL: https://github.com/tsantosh7/COVID-19-Named-Entity-Recognition/tree/master/Datasets/SpaCy
| Feature | Description |
| --- | --- |
| **Name** | `en_covid19_ner` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.2.4,<3.3.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [Santosh Tirunagai]() |
### Label Scheme
<details>
<summary>View label scheme (10 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `ANAT`, `CHED`, `COMP`, `DISO`, `ENZY`, `FUNC`, `PATH`, `PRGE`, `PROC`, `SPEC` |
</details>
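### Example Usage
The packaged pipeline can be loaded like any other spaCy model. A minimal sketch, assuming the `en_covid19_ner` package has already been installed in the environment:
```python
import spacy

# Load the packaged pipeline (transformer + ner components)
nlp = spacy.load("en_covid19_ner")

text = "The NHS is offering antibody and antiviral treatments to people with coronavirus (COVID-19)."
doc = nlp(text)

# Print the recognised biomedical entities with their labels (e.g. DISO, CHED, SPEC)
for ent in doc.ents:
    print(ent.text, ent.label_)
```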
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 92.50 |
| `ENTS_P` | 91.40 |
| `ENTS_R` | 93.62 |
| `TRANSFORMER_LOSS` | 311768.03 |
| `NER_LOSS` | 371171.50 | |
Waynehillsdev/Waynehills_summary_tensorflow | [
"tf",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9432258064516129
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1004
- Accuracy: 0.9432
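For a quick sanity check of the distilled student, the checkpoint can be loaded for sequence classification. The sketch below is illustrative; the repository id is a placeholder rather than the actual location of this model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "<user>/distilbert-base-uncased-distilled-clinc"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

inputs = tokenizer("how would i go about renting a car in rome", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its intent label
# (falls back to LABEL_<i> if id2label was not saved with the checkpoint)
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```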
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9044 | 1.0 | 318 | 0.5748 | 0.7390 |
| 0.4491 | 2.0 | 636 | 0.2876 | 0.88 |
| 0.2538 | 3.0 | 954 | 0.1813 | 0.9229 |
| 0.1765 | 4.0 | 1272 | 0.1388 | 0.9294 |
| 0.1422 | 5.0 | 1590 | 0.1214 | 0.9345 |
| 0.1243 | 6.0 | 1908 | 0.1114 | 0.9406 |
| 0.1138 | 7.0 | 2226 | 0.1066 | 0.94 |
| 0.1076 | 8.0 | 2544 | 0.1030 | 0.9423 |
| 0.104 | 9.0 | 2862 | 0.1010 | 0.9419 |
| 0.1019 | 10.0 | 3180 | 0.1004 | 0.9432 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
DoyyingFace/bert-asian-hate-tweets-asian-clean-with-unclean-valid | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | 2022-04-21T20:09:15Z | ---
tags:
- image-generation
---
# Neural Cellular Automata (Based on https://distill.pub/2020/growing-ca/) implemented in Jax (Flax)
## Installation
from source
```bash
git clone [email protected]:shyamsn97/jax-nca.git
cd jax-nca
python setup.py install
```
from PYPI
```bash
pip install jax-nca
```
## How do NCAs work?
For more information, view the awesome article https://distill.pub/2020/growing-ca/ -- Mordvintsev, et al., "Growing Neural Cellular Automata", Distill, 2020
Image below describes a single update step: https://github.com/distillpub/post--growing-ca/blob/master/public/figures/model.svg
## Why Jax?
<b> Note: This project served as a nice introduction to jax, so its performance can probably be improved </b>
NCAs are autoregressive models like RNNs, where new states are calculated from previous ones. With jax, we can make these operations a lot more performant with `jax.lax.scan` and `jax.jit` (https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.scan.html)
Instead of writing the nca growth process as
```python
def multi_step(params, nca, current_state, num_steps):
# params: parameters for NCA
# nca: Flax Module describing NCA
# current_state: Current NCA state
# num_steps: number of steps to run
for i in range(num_steps):
current_state = nca.apply(params, current_state)
return current_state
```
We can write this with `jax.lax.scan`
```python
def multi_step(params, nca, current_state, num_steps):
# params: parameters for NCA
# nca: Flax Module describing NCA
# current_state: Current NCA state
# num_steps: number of steps to run
def forward(carry, inp):
carry = nca.apply({"params": params}, carry)
return carry, carry
final_state, nca_states = jax.lax.scan(forward, current_state, None, length=num_steps)
return final_state
```
The actual multi_step implementation can be found here: https://github.com/shyamsn97/jax-nca/blob/main/jax_nca/nca.py#L103
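Because the whole rollout is now a single traced computation, it can also be wrapped in `jax.jit`. A small sketch, reusing the names from the usage example further down (the step count of 64 is an arbitrary choice):
```python
import jax

@jax.jit
def fast_multi_step(params, current_state):
    # nca and the step count are baked into the compiled function;
    # only params and the current state are traced
    return multi_step(params, nca, current_state, 64)

final_state = fast_multi_step(params, nca_seed)
```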
## Usage
See [notebooks/Gecko.ipynb](notebooks/Gecko.ipynb) for a full example
<b> Currently there's a bug with the stochastic update, so only `cell_fire_rate = 1.0` works at the moment </b>
Creating and using NCA
```python
class NCA(nn.Module):
num_hidden_channels: int
num_target_channels: int = 3
alpha_living_threshold: float = 0.1
cell_fire_rate: float = 1.0
trainable_perception: bool = False
alpha: float = 1.0
"""
num_hidden_channels: Number of hidden channels for each cell to use
num_target_channels: Number of target channels to be used
alpha_living_threshold: threshold to determine whether a cell lives or dies
cell_fire_rate: probability that a cell receives an update per step
trainable_perception: if true, instead of using sobel filters use a trainable conv net
alpha: scalar value to be multiplied to updates
"""
...
from jax_nca.nca import NCA
# usage
nca = NCA(
num_hidden_channels = 16,
num_target_channels = 3,
trainable_perception = False,
cell_fire_rate = 1.0,
alpha_living_threshold = 0.1
)
nca_seed = nca.create_seed(
nca.num_hidden_channels, nca.num_target_channels, shape=(64,64), batch_size=1
)
rng = jax.random.PRNGKey(0)
params = nca.init(rng, nca_seed, rng)["params"]
update = nca.apply({"params":params}, nca_seed, jax.random.PRNGKey(10))
# multi step
final_state, nca_states = nca.multi_step(params, nca_seed, jax.random.PRNGKey(10), num_steps=32)
```
To train the NCA
```python
from jax_nca.dataset import ImageDataset
from jax_nca.trainer import EmojiTrainer
dataset = ImageDataset(emoji='🦎', img_size=64)
nca = NCA(
num_hidden_channels = 16,
num_target_channels = 3,
trainable_perception = False,
cell_fire_rate = 1.0,
alpha_living_threshold = 0.1
)
trainer = EmojiTrainer(dataset, nca, n_damage=0)
trainer.train(100000, batch_size=8, seed=10, lr=2e-4, min_steps=64, max_steps=96)
# to access train state:
state = trainer.state
# save
nca.save(state.params, "saved_params")
# load params
loaded_params = nca.load("saved_params")
``` |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-12 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | 2022-04-21T20:10:15Z | This model contains the pre-trained ResNet18 R3M model from the paper "R3M: A Universal Visual Representation for Robot Manipulation" (Nair et al.). The model is trained on the Ego4D dataset using time-contrastive learning, video-language alignment, and sparsity objectives. It is used for efficient downstream robotic learning.
|
bert-base-chinese | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,377,486 | null | Access to model Sarim24/distilbert-base-uncased-finetuned-emotion is restricted and you are not in the authorized list. Visit https://huggingface.co/Sarim24/distilbert-base-uncased-finetuned-emotion to ask for access. |
bert-large-cased-whole-word-masking | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,316 | 2022-04-21T22:07:59Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.862669465085938
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1374
- F1: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2596 | 1.0 | 525 | 0.1571 | 0.8302 |
| 0.1292 | 2.0 | 1050 | 0.1416 | 0.8455 |
| 0.0809 | 3.0 | 1575 | 0.1374 | 0.8627 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
distilbert-base-cased | [
"pytorch",
"tf",
"onnx",
"distilbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0",
"has_space"
] | null | {
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 574,859 | 2022-04-21T23:46:06Z | ---
license: apache-2.0
---
# GPT-NEO-Model for Lean Tactics
In this project, we used a Hugging Face GPT-Neo small model and fine-tuned it on the tactic dataset. The input should be of the form
```
<GOAL> Goal <PROOFSTEP>
```
The model can easily be accessed using the following code.
```
from transformers import GPT2Tokenizer, GPTNeoForCausalLM
import torch
tokenizer = GPT2Tokenizer.from_pretrained("Saisam/gpt-neo-math-small")
model = GPTNeoForCausalLM.from_pretrained("Saisam/gpt-neo-math-small")
```
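A proof-step suggestion can then be generated with a plain `generate` call. The sketch below is illustrative only; the goal string is a made-up example of the `<GOAL> ... <PROOFSTEP>` format, not taken from the training data.
```
# Hypothetical Lean goal, used only to illustrate the prompt format
prompt = "<GOAL> n : ℕ ⊢ n + 0 = n <PROOFSTEP>"

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```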
More information can be found at https://github.com/saisurbehera/mathProof.
The current model beats GPT-F on the miniF2F benchmark.
This work was done along with Xihao Xhang and Moya Zhu.
|
AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1 | [
"pytorch",
"xlm-roberta",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers"
] | sentence-similarity | {
"architectures": [
"XLMRobertaModel"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 73 | 2022-04-22T16:00:55Z | ---
language: en
thumbnail: http://www.huggingtweets.com/plsnobullywaaa/1650660437516/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1511292594214551557/4T_znkpc_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">clementine</div>
<div style="text-align: center; font-size: 14px;">@plsnobullywaaa</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from clementine.
| Data | clementine |
| --- | --- |
| Tweets downloaded | 774 |
| Retweets | 32 |
| Short tweets | 258 |
| Tweets kept | 484 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/125ldexx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @plsnobullywaaa's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2whc68l3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2whc68l3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/plsnobullywaaa')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AdapterHub/roberta-base-pf-cq | [
"roberta",
"en",
"arxiv:2104.08247",
"adapter-transformers",
"question-answering",
"adapterhub:qa/cq"
] | question-answering | {
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0176
- Precision: 0.8418
- Recall: 0.8095
- F1: 0.8253
- Accuracy: 0.9937
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 48 | 0.0268 | 0.7280 | 0.7829 | 0.7544 | 0.9908 |
| No log | 2.0 | 96 | 0.0194 | 0.8295 | 0.8050 | 0.8171 | 0.9934 |
| No log | 3.0 | 144 | 0.0176 | 0.8418 | 0.8095 | 0.8253 | 0.9937 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AdapterHub/roberta-base-pf-wikihop | [
"roberta",
"en",
"arxiv:2104.08247",
"adapter-transformers",
"question-answering",
"adapterhub:qa/wikihop"
] | question-answering | {
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln40")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln40")
```
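To actually run one of the prompt templates below, a plain `generate` call works. This is a minimal sketch with illustrative sampling settings, not the author's recommended ones.
```
prompt = (
    "informal english: corn fields are all across illinois, visible once you leave chicago.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.92, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```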
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
``` |
Aidan8756/stephenKingModel | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9217262923032896
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2208
- Accuracy: 0.9215
- F1: 0.9217
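Once trained, the checkpoint can be queried through the standard text-classification pipeline. A brief sketch; the repository id is a placeholder, and `return_all_scores=True` simply exposes the score of every emotion class.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="<user>/distilbert-base-uncased-finetuned-emotion",  # placeholder repo id
    return_all_scores=True,
)

print(classifier("I'm thrilled the fine-tuning finally converged!"))
```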
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8381 | 1.0 | 250 | 0.3167 | 0.8995 | 0.8960 |
| 0.2493 | 2.0 | 500 | 0.2208 | 0.9215 | 0.9217 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AimB/mT5-en-kr-opus | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ## NeZha-Pytorch
A PyTorch version of NEZHA, adapted to work with transformers.
### Installation
> pip install git+https://github.com/yanqiangmiffy/Nezha-Pytorch.git
### Pretrained weights download
https://github.com/lonePatient/NeZha_Chinese_PyTorch
### PyTorch usage example
```
import torch
from transformers import BertTokenizer
from nezha import NeZhaModel, NeZhaConfig
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = BertTokenizer.from_pretrained(
"quincyqiang/nezha-cn-base"
)
model = NeZhaModel.from_pretrained(
"quincyqiang/nezha-cn-base"
)
config = NeZhaConfig.from_pretrained(
"quincyqiang/nezha-cn-base"
)
model.eval()
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
```
|
Akash7897/bert-base-cased-wikitext2 | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language:
- en
license: apache-2.0
tags:
- bart
- biobart
- biomedical
inference: true
widget:
- text: "Influenza is a <mask> disease."
- types: "text-generation"
---
# Yuyuan-Bart-400M
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
生物医疗领域的生成语言模型,英文的BioBART-large。
A generative language model for biomedicine, BioBART-large in English.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 特殊 Special | 领域 Domain | 余元 Yuyuan | BioBART | 400M | 英文 English |
## 模型信息 Model Information
Paper: [BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model](https://arxiv.org/pdf/2204.03905.pdf)
Yuyuan-Bart-400M是由清华大学和IDEA研究院一起提供的生物医疗领域的生成语言模型。我们使用PubMed上的生物医学研究论文摘要(约41G)作为预训练语料。使用开源框架DeepSpeed的情况下,我们在2个带有16个40GB A100 GPU的DGX结点上对BioBART-large(400M参数)进行了约168小时的训练。
The Yuyuan-Bart-400M is a biomedical generative language model jointly produced by Tsinghua University and the International Digital Economy Academy (IDEA). We use biomedical research paper abstracts on PubMed (about 41GB) as the pretraining corpora. We trained the large version of BioBART (400M parameters) on 2 DGX nodes with 16 40GB A100 GPUs for about 168 hours with the help of the open-source framework DeepSpeed.
## 使用 Usage
```python
from transformers import BartForConditionalGeneration, BartTokenizer
tokenizer = BartTokenizer.from_pretrained('IDEA-CCNL/Yuyuan-Bart-400M')
model = BartForConditionalGeneration.from_pretrained('IDEA-CCNL/Yuyuan-Bart-400M')
text = 'Influenza is a <mask> disease.'
input_ids = tokenizer([text], return_tensors="pt")['input_ids']
model.eval()
generated_ids = model.generate(
input_ids=input_ids,
)
preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]
print(preds)
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的对该模型的论文:
If you are using the resource for your work, please cite the our paper for this model:
```
@misc{BioBART,
title={BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model},
author={Hongyi Yuan and Zheng Yuan and Ruyi Gan and Jiaxing Zhang and Yutao Xie and Sheng Yu},
year={2022},
eprint={2204.03905},
archivePrefix={arXiv}
}
```
如果您在您的工作中使用了我们的模型,也可以引用我们的[总论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [overview paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
Akash7897/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: nbme-electra-large-generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nbme-electra-large-generator
This model is a fine-tuned version of [google/electra-large-generator](https://huggingface.co/google/electra-large-generator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0122
- Accuracy: 0.9977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 195 | 0.1125 | 0.9789 |
| No log | 2.0 | 390 | 0.0141 | 0.9973 |
| 0.6233 | 3.0 | 585 | 0.0122 | 0.9977 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Akash7897/fill_mask_model | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: nbme-gpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nbme-gpt2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3684
- Accuracy: 0.5070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.99 | 101 | 2.5636 | 0.4809 |
| No log | 1.99 | 202 | 2.4075 | 0.5018 |
| No log | 2.99 | 303 | 2.3684 | 0.5070 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Akash7897/test-clm | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec-speech-project
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-speech-project
This model is a fine-tuned version of [kingabzpro/wav2vec2-large-xls-r-300m-Urdu](https://huggingface.co/kingabzpro/wav2vec2-large-xls-r-300m-Urdu) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Akashpb13/Central_kurdish_xlsr | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ckb",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: apache-2.0
---
# Fake News Recognition
## Overview
This model is trained on over 40,000 news articles from different media outlets, based on 'roberta-base'. It can classify a piece of news simply from its text, up to 500 words (the excess will be truncated automatically).
LABEL_0: Fake news
LABEL_1: Real news
## Quick Tutorial
### Download The Model
```python
from transformers import pipeline
MODEL = "jy46604790/Fake-News-Bert-Detect"
clf = pipeline("text-classification", model=MODEL, tokenizer=MODEL)
```
### Feed Data
```python
text = "Indonesian police have recaptured a U.S. citizen who escaped a week ago from an overcrowded prison on the holiday island of Bali, the jail s second breakout of foreign inmates this year. Cristian Beasley from California was rearrested on Sunday, Badung Police chief Yudith Satria Hananta said, without providing further details. Beasley was a suspect in crimes related to narcotics but had not been sentenced when he escaped from Kerobokan prison in Bali last week. The 32-year-old is believed to have cut through bars in the ceiling of his cell before scaling a perimeter wall of the prison in an area being refurbished. The Kerobokan prison, about 10 km (six miles) from the main tourist beaches in the Kuta area, often holds foreigners facing drug-related charges. Representatives of Beasley could not immediately be reached for comment. In June, an Australian, a Bulgarian, an Indian and a Malaysian tunneled to freedom about 12 meters (13 yards) under Kerobokan prison s walls. The Indian and the Bulgarian were caught soon after in neighboring East Timor, but Australian Shaun Edward Davidson and Malaysian Tee Kok King remain at large. Davidson has taunted authorities by saying he was enjoying life in various parts of the world, in purported posts on Facebook. Kerobokan has housed a number of well-known foreign drug convicts, including Australian Schappelle Corby, whose 12-1/2-year sentence for marijuana smuggling got huge media attention."
```
### Result
```python
result = clf(text)
result
```
output:[{'label': 'LABEL_1', 'score': 0.9994995594024658}] |
Akbarariza/Anjar | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: token-classification
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- few_nerd
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: few_nerd
type: few_nerd
args: supervised
metrics:
- name: Precision
type: precision
value: 0.6424480067658478
- name: Recall
type: recall
value: 0.6854236732015421
- name: F1
type: f1
value: 0.6632404008334158
- name: Accuracy
type: accuracy
value: 0.9075199647113962
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the few_nerd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3136
- Precision: 0.6424
- Recall: 0.6854
- F1: 0.6632
- Accuracy: 0.9075
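For inference, the fine-tuned checkpoint can be used through the token-classification pipeline. A short sketch; the repository id is a placeholder, and `aggregation_strategy="simple"` merges word pieces back into whole entity spans.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="<user>/distilbert-base-uncased-finetuned-ner",  # placeholder repo id
    aggregation_strategy="simple",
)

print(ner("Hugging Face was founded in New York City."))
```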
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.328 | 1.0 | 8236 | 0.3197 | 0.6274 | 0.6720 | 0.6489 | 0.9041 |
| 0.2776 | 2.0 | 16472 | 0.3111 | 0.6433 | 0.6759 | 0.6592 | 0.9069 |
| 0.241 | 3.0 | 24708 | 0.3136 | 0.6424 | 0.6854 | 0.6632 | 0.9075 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Akjder/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.975925925925926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0800
- Accuracy: 0.9759
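For inference, the fine-tuned checkpoint can be used through the image-classification pipeline. A short sketch; both the repository id and the image path are placeholders.
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="<user>/swin-tiny-patch4-window7-224-finetuned-eurosat",  # placeholder repo id
)

# Any local image file or URL works here; this path is hypothetical
print(classifier("satellite_tile.png"))
```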
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2442 | 1.0 | 190 | 0.1605 | 0.9481 |
| 0.1529 | 2.0 | 380 | 0.0800 | 0.9759 |
| 0.151 | 3.0 | 570 | 0.0681 | 0.9759 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Aklily/Lilys | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
tags:
- conversational
co2_eq_emissions:
emissions: "370"
source: "mlco2.github.io"
training_type: "fine-tuning"
geographical_location: "West Java, Indonesia"
hardware_used: "1 Tesla P100"
license: cc-by-nc-sa-4.0
widget:
- text: "Hey kekbot! What's up?"
example_title: "Asking what's up"
- text: "Hey kekbot! How r u?"
example_title: "Asking how he is"
---
> THIS MODEL IS IN PUBLIC BETA, PLEASE DO NOT EXPECT ANY FORM OF STABILITY IN ITS CURRENT STATE.
# Art Union server chatbot
Based on a DialoGPT-medium model, fine-tuned on a small subset (roughly 52k messages) of Art Union's general-chat channel.
### Current issues
(These will hopefully be fixed in future iterations.) They include, but are not limited to:
- Limited turns: after ~11 turns, the output may break for no apparent reason.
- Inconsistent variance: the model occasionally behaves as if overfitted, for no apparent reason. |
AkshatSurolia/DeiT-FaceMask-Finetuned | [
"pytorch",
"deit",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | image-classification | {
"architectures": [
"DeiTForImageClassification"
],
"model_type": "deit",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 46 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased-ner-conll2003
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9438052359513089
- name: Recall
type: recall
value: 0.9525412319084483
- name: F1
type: f1
value: 0.9481531116508919
- name: Accuracy
type: accuracy
value: 0.9910634321093416
- task:
type: token-classification
name: Token Classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9116307653519484
verified: true
- name: Precision
type: precision
value: 0.9366103911345081
verified: true
- name: Recall
type: recall
value: 0.9262526113340186
verified: true
- name: F1
type: f1
value: 0.9314027058794109
verified: true
- name: loss
type: loss
value: 0.4366346299648285
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-ner-conll2003
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0355
- Precision: 0.9438
- Recall: 0.9525
- F1: 0.9482
- Accuracy: 0.9911
## Model description
More information needed
## Intended uses & limitations
More information needed
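As a hedged usage sketch (the model id below is a placeholder; substitute the actual Hub id of this checkpoint), predictions can be read directly from the token-classification head:
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer
# Placeholder id: replace with the real Hub id or local directory of this fine-tuned checkpoint.
model_id = "path/to/bert-base-cased-ner-conll2003"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)
text = "George Washington lived in Virginia."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Map each token to its highest-scoring CoNLL-2003 tag (O, B-PER, I-LOC, ...).
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```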
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AkshatSurolia/ViT-FaceMask-Finetuned | [
"pytorch",
"safetensors",
"vit",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | image-classification | {
"architectures": [
"ViTForImageClassification"
],
"model_type": "vit",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 40 | null | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multiturn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
* Multi-turn generation examples from an interactive environment:
|Role | Response |
|---------|--------|
|User | Does money buy happiness? |
| Bot | Depends how much money you spend on it .|
|User | What is the best way to buy happiness ? |
| Bot | You just have to be a millionaire by your early 20s, then you can be happy . |
|User |This is so difficult ! |
| Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
AkshaySg/langid | [
"multilingual",
"dataset:VoxLingua107",
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"license:apache-2.0"
] | audio-classification | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9343150231634679
- name: Recall
type: recall
value: 0.9503534163581285
- name: F1
type: f1
value: 0.9422659769731353
- name: Accuracy
type: accuracy
value: 0.9865926885265203
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0595
- Precision: 0.9343
- Recall: 0.9504
- F1: 0.9423
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0834 | 1.0 | 1756 | 0.0621 | 0.9148 | 0.9381 | 0.9263 | 0.9833 |
| 0.0321 | 2.0 | 3512 | 0.0615 | 0.9265 | 0.9482 | 0.9372 | 0.9851 |
| 0.0218 | 3.0 | 5268 | 0.0595 | 0.9343 | 0.9504 | 0.9423 | 0.9866 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Al/mymodel | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
language:
- en
license: cc-by-4.0
tags:
- conversational
- transformers
datasets:
- multi_woz_v22 and PIE-English idioms corpus
metrics:
- perplexity
widget:
- text: "Does that mean Jane is off the hook?"
---
## DialoGPT_MWOZ_Idioms
This is a fine-tuned model of DialoGPT (medium)-MultiWOZ on the PIE-English idioms corpus. It is intended to be used as an idiom-aware conversational system.
The dataset it's trained on is limited in scope, as it covers only 10 classes of idioms (metaphor, simile, euphemism, parallelism, personification, oxymoron, paradox, hyperbole, irony, and literal).
The perplexities achieved on the test and validation sets are 160.9 and 160.46, respectively. More information about the original pre-trained model can be found [here](https://huggingface.co/tosin/dialogpt_mwoz)
* Generation example from an interactive environment:
|Role | Response |
|---------|------------|
|User | remind her to take it off the hook. |
|Bot | the bookie took it on the nod |
|User | A lengthy break is in order ; so make sure you keep your nose clean |
|Bot | they are under a cloud because of fears about US President's wife Hillary Clinton |
|User | Some families breed like rabbits and send us all their male heirs |
|Bot | he was out for the count |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_mwoz_idioms")
model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_mwoz_idioms")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print the last output tokens from the bot
    print("DialoGPT_MWOZ_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
Alaeddin/convbert-base-turkish-ner-cased | [
"pytorch",
"convbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"ConvBertForTokenClassification"
],
"model_type": "convbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
language:
- en
license: cc-by-4.0
tags:
- conversational
- transformers
datasets:
- multi_woz_v22 and AfriWOZ
metrics:
- perplexity
widget:
- text: "dedet li rek la soxla. jerejef. ba benen yoon."
---
## DialoGPT_AfriWOZ
This is a fine-tuned model of DialoGPT (small) on the AfriWOZ dataset. It is intended to be used as a conversational system in the Wolof language.
The dataset it's trained on is limited in scope, as it covers only certain domains such as restaurants, hotels, taxis, and booking.
The perplexities achieved on the test and validation sets are 20.74 and 11.66, respectively.
* Generation example from an interactive environment:
|Role | Response |
|---------|------------|
|User | sowwu bi la bëg |
|Bot | waw, ndax mën ngan ma jox ay leeral si bërëb buñu feetee ak lekkukaay bi |
|User | soxla na dem gawu. ngir leeral rek bëg na joggee peterborough di dem cambridge |
|Bot | soxlu na dem ginaw jurrom ñenti waxtu ak ñaar fukki simili ak jurrom ben. |
|User | waw jiss sal ma ben bu seer si njëg si toluway |
|Bot | jënd bi baaxna. jerejef. ba benen yoon. |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
The paper for this work can be found on arXiv: [https://arxiv.org/pdf/2204.08083.pdf](https://arxiv.org/pdf/2204.08083.pdf)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_afriwoz_wolof")
model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_afriwoz_wolof")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print the last output tokens from the bot
    print("DialoGPT_wolof_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
AlanDev/DallEMiniButBetter | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: adtabora/distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# adtabora/distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8581
- Validation Loss: 3.6738
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
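Assuming the fine-tuned checkpoint was pushed to the Hub under the name in the card title, a short TensorFlow generation sketch could look like this:
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM
# Assumption: the checkpoint is available on the Hub under this name.
model_id = "adtabora/distilgpt2-finetuned-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("The history of natural language processing", return_tensors="tf")
# Sample a short continuation from the fine-tuned language model.
outputs = model.generate(**inputs, max_length=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```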
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.8581 | 3.6738 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AlanDev/test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-24T16:51:41Z | ## Taglish-Electra
Our Taglish-Electra model was pretrained on two Filipino training datasets and one English dataset to improve performance on Filipino text mixed with English, where speakers may code-switch between the two languages.
1) Openwebtext (English)
2) WikiText-TL-39 (Filipino)
3) [TLUnified Large Scale Corpus](https://www.blaisecruz.com/resources/) (Filipino)
This is the discriminator model, the main Transformer used for fine-tuning on downstream tasks. For generation, mask-filling, and retraining, refer to the Generator models. |
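A minimal fine-tuning-oriented sketch, with heavy caveats: the checkpoint id, task, and label count below are all hypothetical placeholders, shown only to illustrate how the discriminator would typically be loaded for a downstream task.
```python
from transformers import AutoTokenizer, ElectraForSequenceClassification
# Hypothetical id: substitute the actual Hub id of the Taglish-Electra discriminator checkpoint.
model_id = "path/to/taglish-electra-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The discriminator body is what gets fine-tuned on downstream tasks,
# e.g. a two-class sentiment classifier over code-switched Taglish text.
model = ElectraForSequenceClassification.from_pretrained(model_id, num_labels=2)
inputs = tokenizer("Ang ganda ng performance, super worth it!", return_tensors="pt")
print(model(**inputs).logits)
```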
Alberto15Romero/GptNeo | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9365
- name: F1
type: f1
value: 0.9366968648795959
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1707
- Accuracy: 0.9365
- F1: 0.9367
## Model description
More information needed
## Intended uses & limitations
More information needed
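As a hedged usage sketch (placeholder model id; substitute the actual Hub id of this checkpoint), class probabilities over the six emotion labels can be read straight from the classification head:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# Placeholder id: replace with the real Hub id or local directory of this fine-tuned checkpoint.
model_id = "path/to/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
inputs = tokenizer("I can't believe how great this day turned out!", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
# id2label comes from the fine-tuned config (the emotion dataset has six classes).
for label_id, p in enumerate(probs):
    print(model.config.id2label[label_id], round(p.item(), 3))
```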
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0746 | 1.0 | 250 | 0.1932 | 0.9335 | 0.9330 |
| 0.0565 | 2.0 | 500 | 0.1774 | 0.939 | 0.9391 |
| 0.0539 | 3.0 | 750 | 0.1707 | 0.9365 | 0.9367 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AmanPriyanshu/DistilBert-Sentiment-Analysis | [
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: en
tags:
- AMRBART
license: mit
---
## AMRBART-large-finetuned-AMR3.0-AMR2Text
This model is a fine-tuned version of [AMRBART-large](https://huggingface.co/xfbai/AMRBART-large) on the AMR3.0 dataset. It achieves a SacreBLEU score of 45.0 on the evaluation set. More details are given in the paper [Graph Pre-training for AMR Parsing and Generation](https://arxiv.org/pdf/2203.07836.pdf) by Bai et al., ACL 2022.
## Model description
Same with AMRBART.
## Training data
The model is fine-tuned on [AMR3.0](https://catalog.ldc.upenn.edu/LDC2020T02), a dataset consisting of 55,635
training instances, 1,722 validation instances, and 1,898 test instances.
## Intended uses & limitations
You can use the model for AMR-to-text generation, but it's mostly intended to be used in the domain of News.
## How to use
Here is how to initialize this model in PyTorch:
```python
from transformers import BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR3.0-AMR2Text")
```
Please refer to [this repository](https://github.com/muyeby/AMRBART) for tokenizer initialization and data preprocessing.
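For illustration only, a rough AMR-to-text generation sketch is shown below; it uses a plain BART tokenizer as a stand-in, whereas the proper tokenizer and AMR linearization must come from the repository linked above.
```python
from transformers import BartForConditionalGeneration, BartTokenizer
# Stand-in tokenizer: the real one requires the AMRBART preprocessing from the linked repository.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR3.0-AMR2Text")
# Illustrative linearized AMR graph for "The boy wants to go."
graph = "( w / want-01 :ARG0 ( b / boy ) :ARG1 ( g / go-02 :ARG0 b ) )"
inputs = tokenizer(graph, return_tensors="pt")
generated_ids = model.generate(**inputs, num_beams=5, max_length=64)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```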
## BibTeX entry and citation info
Please cite this paper if you find this model helpful
```bibtex
@inproceedings{bai-etal-2022-graph,
title = "Graph Pre-training for {AMR} Parsing and Generation",
author = "Bai, Xuefeng and
Chen, Yulong and
Zhang, Yue",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "todo",
doi = "todo",
pages = "todo"
}
``` |
Andrey1989/bert-multilingual-finetuned-ner | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_infopankki
model-index:
- name: t5-opus_infopankki-en-zh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-opus_infopankki-en-zh
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_infopankki dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8797
## Model description
More information needed
## Intended uses & limitations
More information needed
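As a hedged translation sketch (the model id is a placeholder, and the task prefix is an assumption carried over from the standard t5-small prefixes rather than something stated in this card):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
# Placeholder id: replace with the real Hub id or local directory of this fine-tuned checkpoint.
model_id = "path/to/t5-opus_infopankki-en-zh"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)
# Assumed prefix, analogous to t5-small's "translate English to German:" convention.
text = "translate English to Chinese: Where is the nearest health centre?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```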
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2786 | 1.0 | 1496 | 2.8797 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Andrey1989/mt5-small-finetuned-mlsum-fr | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cakiki/distilbert-base-uncased-finetuned-tweet-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cakiki/distilbert-base-uncased-finetuned-tweet-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1025
- Train Sparse Categorical Accuracy: 0.9511
- Validation Loss: 0.1455
- Validation Sparse Categorical Accuracy: 0.9365
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.5409 | 0.8158 | 0.2115 | 0.9265 | 0 |
| 0.1442 | 0.9373 | 0.1411 | 0.9380 | 1 |
| 0.1025 | 0.9511 | 0.1455 | 0.9365 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.9.0-rc0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Andrey78/my_model_nlp | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: new-test-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new-test-model2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1040
- Precision: 0.9722
- Recall: 0.9757
- F1: 0.9739
- Accuracy: 0.9808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 151 | 0.1819 | 0.9360 | 0.9405 | 0.9382 | 0.9540 |
| No log | 2.0 | 302 | 0.1196 | 0.9637 | 0.9639 | 0.9638 | 0.9703 |
| No log | 3.0 | 453 | 0.1322 | 0.9614 | 0.9682 | 0.9648 | 0.9711 |
| 0.2764 | 4.0 | 604 | 0.1071 | 0.9677 | 0.9725 | 0.9701 | 0.9763 |
| 0.2764 | 5.0 | 755 | 0.1084 | 0.9709 | 0.9766 | 0.9737 | 0.9790 |
| 0.2764 | 6.0 | 906 | 0.1015 | 0.9717 | 0.9739 | 0.9728 | 0.9791 |
| 0.0342 | 7.0 | 1057 | 0.1208 | 0.9686 | 0.9727 | 0.9706 | 0.9785 |
| 0.0342 | 8.0 | 1208 | 0.1068 | 0.9680 | 0.9752 | 0.9716 | 0.9798 |
| 0.0342 | 9.0 | 1359 | 0.1028 | 0.9719 | 0.9743 | 0.9731 | 0.9807 |
| 0.0129 | 10.0 | 1510 | 0.1040 | 0.9722 | 0.9757 | 0.9739 | 0.9808 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Andrey78/my_nlp_test_model | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
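In the absence of reported metrics, here is a minimal inference sketch (placeholder model id; substitute the actual Hub id of this checkpoint):
```python
from transformers import pipeline
# Placeholder id: replace with the real Hub id or local directory of this fine-tuned checkpoint.
sentiment = pipeline("sentiment-analysis", model="path/to/finetuning-sentiment-model-3000-samples")
print(sentiment(["This movie was a masterpiece.", "I want those two hours of my life back."]))
```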
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AnonymousSub/AR_cline | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.5085
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6620
- Rouge1: 24.5085
- Rouge2: 11.7925
- Rougel: 20.2631
- Rougelsum: 23.1253
- Gen Len: 18.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
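As a hedged summarization sketch (placeholder model id, and the `summarize:` prefix is assumed from the standard t5-small convention rather than confirmed by this card):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Placeholder id: replace with the real Hub id or local directory of this fine-tuned checkpoint.
model_id = "path/to/t5-small-finetuned-cnn-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
# Assumed task prefix; the article text here is a stand-in for a real cnn_dailymail document.
article = "summarize: " + "(CNN) -- A long news article would go here ..."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```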
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8435 | 1.0 | 35890 | 1.6753 | 24.5387 | 11.7851 | 20.2792 | 23.1595 | 18.999 |
| 1.8143 | 2.0 | 71780 | 1.6660 | 24.5268 | 11.7976 | 20.2699 | 23.1384 | 18.9996 |
| 1.816 | 3.0 | 107670 | 1.6620 | 24.5085 | 11.7925 | 20.2631 | 23.1253 | 18.9996 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
AnonymousSub/AR_declutr | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- nl
- en
datasets:
- yhavinga/mc4_nl_cleaned
tags:
- t5
- seq2seq
inference: false
license: apache-2.0
---
# t5-eff-xl-8l-dutch-english-cased
A [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) sequence to sequence model
pre-trained from scratch on [cleaned Dutch 🇳🇱🇧🇪 mC4 and cleaned English 🇬🇧 C4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned).
This **t5 eff** model has **1240M** parameters.
It was pre-trained with masked language modeling (denoise token span corruption) objective on the dataset
`mc4_nl_cleaned` config `large_en_nl` for **1** epoch(s) and a duration of **4d 19h**,
with a sequence length of **512**, batch size **64** and **538k/1703705** total steps (**18B** tokens).
Pre-training evaluation loss and accuracy are **1,3019** and **0,71**.
* Pre-trained T5 models need to be finetuned before they can be used for downstream tasks, therefore the inference widget on the right has been turned off.
* For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for
the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application!
Please refer to the original T5 papers and Scale Efficiently papers for more information about the T5 architecture
and configs, though it must be noted that this model (t5-eff-xl-8l-dutch-english-cased) is unrelated to these projects and not an 'official' checkpoint.
* **[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)** by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*.
* **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
## Tokenizer
The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers
and has 32003 tokens.
It was trained on Dutch and English with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling).
See [./raw/main/tokenizer.json](tokenizer.json) for details.
## Dataset(s)
All models listed below are pre-trained on
[cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned),
which is the original mC4, except
* Documents that contained words from a selection of the Dutch and English [List of Dirty, Naughty, Obscene, and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with less than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with less than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies",
"use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.
The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4.
The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix).
## Dutch T5 Models
Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models).
`t5-base-dutch` is the only model with an original T5 config.
The other model types t5-v1.1 and t5-eff have `gated-gelu` instead of `relu` as activation function,
and were trained with a dropout of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`).
The T5-eff models differ in their number of layers. The table below lists
the relevant dimensions of these models. Not all t5-eff models are efficient; the clearest example is the inefficient
`t5-xl-4L-dutch-english-cased`.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) |
|:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------|
| *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff |
| *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 |
| *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 |
| *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 |
| *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 |
| *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 |
| *num parameters* | 223M | 248M | 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M |
| *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu |
| *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl |
| *tr. seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 |
| *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 |
| *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 |
| *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 |
| *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h |
| *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor |
| *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 |
| *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 |
| *eval loss* | 1,38 | 1,20 | 0,96 | 1,07 | 1,11 | 1,13 | 1,18 | 1,27 | 1,05 | 1,3019 | 1,15 |
| *eval acc* | 0,70 | 0,73 | 0,78 | 0,76 | 0,75 | 0,74 | 0,74 | 0,72 | 0,76 | 0,71 | 0,74 |
## Evaluation
Most models from the list above have been fine-tuned for summarization and translation.
The figure below shows the evaluation scores, where the x-axis shows the translation Bleu score (higher is better)
and the y-axis the summarization Rouge1 score (higher is better).
Point size is proportional to the model size. Models with faster inference speed are plotted in green, models with slower
inference speed in blue.

Evaluation was run on fine-tuned models trained with the following settings:
| | Summarization | Translation |
|---------------:|------------------|-------------------|
| Dataset | CNN Dailymail NL | CCMatrix en -> nl |
| #train samples | 50K | 50K |
| Optimizer | Adam | Adam |
| learning rate | 0.001 | 0.0005 |
| source length | 1024 | 128 |
| target length | 142 | 128 |
|label smoothing | 0.05 | 0.1 |
| #eval samples | 1000 | 1000 |
Note that the amount of training data is limited to a fraction of the total dataset sizes, therefore the scores
below can only be used to compare the 'transfer-learning' strength. The fine-tuned checkpoints for this evaluation
are not saved, since they were trained for comparison of pre-trained models only.
The numbers for summarization are the Rouge scores on 1000 documents from the test split.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 |
| *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 |
| *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 |
| *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | **32.12** | 31.12 | 30.15 |
| *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 |
The models below have been evaluated for English to Dutch translation.
Note that the first four models are pre-trained on Dutch only. That they still perform adequately is probably because
the translation direction is English to Dutch.
The numbers reported are the Bleu scores on 1000 documents from the test split.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 |
| *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 |
| *precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 |
| *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 |
| *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 |
| *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 |
| *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 |
## Translation models
The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language
directions on the first 25M samples from CCMatrix, giving a total of 50M training samples.
Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books.
The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the bleu score
averaged over all three evaluation datasets. The best scores are displayed in bold for both translation directions.
| | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) |
|:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------|
| *source_lang* | en | nl | en | nl |
| *target_lang* | nl | en | nl | en |
| *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: |
| *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** |
| *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 |
| *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 |
| *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 |
| *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 |
| *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 |
| *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 |
| *max_source_length* | 128 | 128 | 128 | 128 |
| *max_target_length* | 128 | 128 | 128 | 128 |
| *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 |
| *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 |
| *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 |
| *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 |
| *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 |
| *train_batch_size* | 128 | 128 | 128 | 128 |
| *warmup_steps* | 2000 | 2000 | 2000 | 2000 |
| *total steps* | 390625 | 390625 | 390625 | 390625 |
| *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h |
| *num parameters* | 729M | 729M | 250M | 250M |
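To make the task prefixes in the table above concrete, here is a short usage sketch for the base-size multi-directional checkpoint (a sketch only, not an official example from the model authors):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_id = "yhavinga/t5-base-36L-ccmatrix-multi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
# The prefix selects the translation direction, as listed in the table above.
text = "translate English to Dutch: The weather in Amsterdam is lovely today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```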
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts
of the training. Weights & Biases made it possible to keep track of many training sessions
and orchestrate hyper-parameter sweeps with insightful visualizations.
The following repositories were helpful in setting up the TPU-VM,
and for getting an idea of what sensible hyper-parameters are for training gpt2 from scratch:
* [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp)
* [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch)
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
AnonymousSub/AR_rule_based_hier_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model_index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metric:
name: F1
type: f1
value: 0.9327347950817506
model-index:
- name: jsoutherland/distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- type: accuracy
value: 0.925
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjczZjBkOGFlNWRiZTg3NTA1MDc2NDhhZmExNzlhYmJjYjRlODgxNTUwZDI1NDUzMjViYTU1NTE1YjY0NzhhMCIsInZlcnNpb24iOjF9.V2oCF0Y-F41cLXjFU1GpuBjI3F4D8tL5H1iizFIxn6AYC5n-3jUOYo80QBs4EmQZP449nQugdHk5-iFSWF4iAQ
- type: precision
value: 0.8954208010579672
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2U1YjZlNmNlZGRmYThjMWE5MGM3ZDcwNjVmZjlmMWU1MjAzOGUyMzgzNmVmYzYyMmUxNmFkYzdlYWE5ZGU5NyIsInZlcnNpb24iOjF9.LoFXbY2rcvQ92c5qpmxW_EfIDlDKh3I6eSrRBVWw0oZxjKzsapxeIGMdkOA6ZinQKtlWP0fQcA56jEhIjiJICA
- type: precision
value: 0.925
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjk5ZDUzNDFmMzBmZDNmNjk5N2Y1YTQzNTE2MjM3MGQ5YWM1Zjk0MmU3YzU3MmU0ODJlNjZlMjg1MWE5NjAwNyIsInZlcnNpb24iOjF9.PFWMLKFjyJSeQqKEU_HbGbI0An9bXUV13v5gNFldpmO8q08Jg_T-x5YK-NmEuQ4G56JdpKgTAAmivlhYWxRKDw
- type: precision
value: 0.9256567173431012
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGY2MTE5OGUyZGM1YWJkOGYwZjExYmE1OGUyODI3NzQyZTU1ZGM4ZDgyMTBiZWYyZTFhNzJkOWM1MjI2ZGQwYyIsInZlcnNpb24iOjF9.nNG5SP3x7xXkpP9j8H58i58CQhkObYnR6I3N65L-GJGypY86dIukYWN_5JIEeBBTb6sxGq5hed6lNYqFNd2RBw
- type: recall
value: 0.8711059962680445
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTI0ZGE1MTZmOWU1MWE3ZjI5NGFiMTg3Y2Q2YzY4MjMwOTBjODFmMTllNWI5ZGVjNTViY2NiNmNiZGQ4NjhhYiIsInZlcnNpb24iOjF9.jVQbGWGGFAwz4HXOXEd5LMhd5ayC4lORtnfexVXW-WyDpsCzHrVMHShwX4g8RiRzMpReYB_0nGniv0p9egNGDA
- type: recall
value: 0.925
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWYxMGY3MDIzMTdiNzU0MzBhNThiYjA5OTkxNjFjYjIzZmVlZjA0YTRlZDhiYzhmMDAwMTQ5NjhmYWNjY2ZjNSIsInZlcnNpb24iOjF9.Pzn0p18B43AaofOnT9ZPkTG8qwwhxUYhG9xPi35tJR7oHUsfzUtGVsWSd0BQd49W45CzUU5kUdaeZZFmSgqfAg
- type: recall
value: 0.925
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzA3Y2I0NzBjYTExODBiZjc3NDZhOTA5MzI4NjAyYWE0ZjA3NTc5ZjNiNmEzNzdmNDljZDJiOTcwNThlNTE4MiIsInZlcnNpb24iOjF9.gPDirWMiSMD6FMc_Ruz5Td7gBm4dY0EFRjcEPuzryQEMc7FTviQ5QCrnpQDJRAanlrASYqTtiUEfHtUfYURHAg
- type: f1
value: 0.8794773714607985
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTM5NjZmNjI5NjE0MTdlZTQ0YjkxMGQ4Y2QzZTBiYzc0NTg1YzYxM2UxYTI0NTBlMWU3OTE0ZjEyNWI1NmJiMyIsInZlcnNpb24iOjF9.vyL-0gLHzckauTA_xJB_2YplAJEsHnrxft6jQthjRRf3o_-jgHMe6bH2vOGfYR__XwdVUJOPadNRQVkq--b0AQ
- type: f1
value: 0.925
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmU5NWRjMGU4NTdhOTZiMTgyMjMyMTQxNDliNGNkNDUxNDJmMjZkNTgyYWFkY2ZiMDBkOWY2ODE5Zjg4YzgyNSIsInZlcnNpb24iOjF9.xX1RzQgLPX2oSwbqklGpxM7I0NZi1B7TdFfo7KJ0KmGiyOxc9zOjgm7PlBUFq0_lOWyJa4BQUXLiXmFTUc-CDQ
- type: f1
value: 0.9244781949774824
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWVhMTE3ZWFhYTJmYzQ0OGY4YmY3MzZmNzk0YmU5NGE0Zjg2ZWJhYWE2MTAzMDk3MWRlMWM0NGFhNzg3MGU4YiIsInZlcnNpb24iOjF9.NZIQRvbFt0SujrtCG9-saaLiuyO90ZFIR_uKuSzxAmfsV8eP2SBl7FHW5_L3BS2OGj5JCVjqlZVRg1OjBOm5DQ
- type: loss
value: 0.17752596735954285
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzEwMzc3MWViYTVlNjg3MmMyYjIzNDUzNjJkNGYyYTZmYzM4MDMyYzhkZDNiMDg4YzJlMjcwNmY4ZmJhMDcwYiIsInZlcnNpb24iOjF9.st-K9FBxyhOPF_u2fNcooLyT7R8IPZaalo85UurBB98fRSe2k_RzvhS9YuwxqflclJ9l66pZgcAv-hkANv10DA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1649
- Accuracy: 0.9325
- F1: 0.9327
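As a quick illustration of how this emotion classifier can be queried (a minimal sketch; the repository id `jsoutherland/distilbert-base-uncased-finetuned-emotion` is the one listed in the model index above):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline(
    "text-classification",
    model="jsoutherland/distilbert-base-uncased-finetuned-emotion",
)

# Returns the predicted emotion label with its confidence score
print(classifier("I can't wait to see you again!"))
```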
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
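For reference, a configuration like the one above would roughly correspond to the following `TrainingArguments` (a sketch only, not the exact training script used):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; everything else is left at its default
# (the default optimizer is Adam/AdamW with betas=(0.9, 0.999) and epsilon=1e-08).
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumption: evaluation ran once per epoch, as the results table below suggests
)
```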
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.2838 | 0.9065 | 0.9036 |
| No log | 2.0 | 500 | 0.1795 | 0.9255 | 0.9255 |
| No log | 3.0 | 750 | 0.1649 | 0.9325 | 0.9327 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 2.1.0
- Tokenizers 0.10.3
|
AnonymousSub/AR_rule_based_hier_triplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/gerardoalone/1650943909493/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1513716426795855876/jWAK0lo4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">gay wedding technology</div>
<div style="text-align: center; font-size: 14px;">@gerardoalone</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from gay wedding technology.
| Data | gay wedding technology |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 406 |
| Short tweets | 737 |
| Tweets kept | 2096 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1p260sem/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gerardoalone's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3p1683gy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3p1683gy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gerardoalone')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AnonymousSub/AR_rule_based_only_classfn_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
language: protein
tags:
- protein
datasets:
- uniref-100
---
# RITA-S
RITA is a family of autoregressive protein models, developed by a collaboration of [Lighton](https://lighton.ai/), the [OATML group](https://oatml.cs.ox.ac.uk/) at Oxford, and the [Debbie Marks Lab](https://www.deboramarkslab.com/) at Harvard.
Model | #Params | d_model | layers | lm loss uniref-100
--- | --- | --- | --- | --- |
[**Small**](https://huggingface.co/lightonai/RITA_s) | 85M | 768 | 12 | 2.31
[Medium](https://huggingface.co/lightonai/RITA_m) | 300M | 1024 | 24 | 2.01
[Large](https://huggingface.co/lightonai/RITA_l)| 680M | 1536 | 24 | 1.82
[XLarge](https://huggingface.co/lightonai/RITA_xl)| 1.2B | 2048 | 24 | 1.70
For full results see our preprint: https://arxiv.org/abs/2205.05789
## Usage
Instantiate a model like so:
``` python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lightonai/RITA_s", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("lightonai/RITA_s")
```
For generation, we support pipelines:
``` python
from transformers import pipeline
rita_gen = pipeline('text-generation', model=model, tokenizer=tokenizer)
sequences = rita_gen("MAB", max_length=20, do_sample=True, top_k=950, repetition_penalty=1.2,
num_return_sequences=2, eos_token_id=2)
for seq in sequences:
print(f"seq: {seq['generated_text'].replace(' ', '')}")
```
## How to cite
```bibtex
@article{hesslow2022rita,
      title={RITA: a Study on Scaling Up Generative Protein Sequence Models},
      author={Hesslow, Daniel and Zanichelli, Niccol{\'o} and Notin, Pascal and Poli, Iacopo and Marks, Debora},
      journal={arXiv preprint arXiv:2205.05789},
      year={2022}
}
```
|
AnonymousSub/AR_rule_based_only_classfn_twostage_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language: protein
tags:
- protein
datasets:
- uniref-100
---
# RITA-M
RITA is a family of autoregressive protein models, developed by a collaboration of [Lighton](https://lighton.ai/), the [OATML group](https://oatml.cs.ox.ac.uk/) at Oxford, and the [Debbie Marks Lab](https://www.deboramarkslab.com/) at Harvard.
Model | #Params | d_model | layers | lm loss uniref-100
--- | --- | --- | --- | --- |
[Small](https://huggingface.co/lightonai/RITA_s) | 85M | 768 | 12 | 2.31
[**Medium**](https://huggingface.co/lightonai/RITA_m) | 300M | 1024 | 24 | 2.01
[Large](https://huggingface.co/lightonai/RITA_l)| 680M | 1536 | 24 | 1.82
[XLarge](https://huggingface.co/lightonai/RITA_xl)| 1.2B | 2048 | 24 | 1.70
For full results see our preprint: https://arxiv.org/abs/2205.05789
## Usage
Instantiate a model like so:
``` python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lightonai/RITA_m", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("lightonai/RITA_m")
```
For generation, we support pipelines:
``` python
from transformers import pipeline
rita_gen = pipeline('text-generation', model=model, tokenizer=tokenizer)
sequences = rita_gen("MAB", max_length=20, do_sample=True, top_k=950, repetition_penalty=1.2,
num_return_sequences=2, eos_token_id=2)
for seq in sequences:
print(f"seq: {seq['generated_text'].replace(' ', '')}")
```
## How to cite
```bibtex
@article{hesslow2022rita,
      title={RITA: a Study on Scaling Up Generative Protein Sequence Models},
      author={Hesslow, Daniel and Zanichelli, Niccol{\'o} and Notin, Pascal and Poli, Iacopo and Marks, Debora},
      journal={arXiv preprint arXiv:2205.05789},
      year={2022}
}
```
 |
AnonymousSub/AR_rule_based_roberta_bert_quadruplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
language: protein
tags:
- protein
datasets:
- uniref-100
---
# RITA-L
RITA is a family of autoregressive protein models, developed by a collaboration of [Lighton](https://lighton.ai/), the [OATML group](https://oatml.cs.ox.ac.uk/) at Oxford, and the [Debbie Marks Lab](https://www.deboramarkslab.com/) at Harvard.
Model | #Params | d_model | layers | lm loss uniref-100
--- | --- | --- | --- | --- |
[Small](https://huggingface.co/lightonai/RITA_s) | 85M | 768 | 12 | 2.31
[Medium](https://huggingface.co/lightonai/RITA_m) | 300M | 1024 | 24 | 2.01
[**Large**](https://huggingface.co/lightonai/RITA_l)| 680M | 1536 | 24 | 1.82
[XLarge](https://huggingface.co/lightonai/RITA_xl)| 1.2B | 2048 | 24 | 1.70
For full results see our preprint: https://arxiv.org/abs/2205.05789
## Usage
Instantiate a model like so:
``` python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lightonai/RITA_l", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("lightonai/RITA_l")
```
For generation, we support pipelines:
``` python
from transformers import pipeline
rita_gen = pipeline('text-generation', model=model, tokenizer=tokenizer)
sequences = rita_gen("MAB", max_length=20, do_sample=True, top_k=950, repetition_penalty=1.2,
num_return_sequences=2, eos_token_id=2)
for seq in sequences:
print(f"seq: {seq['generated_text'].replace(' ', '')}")
```
## How to cite
```bibtex
@article{hesslow2022rita,
      title={RITA: a Study on Scaling Up Generative Protein Sequence Models},
      author={Hesslow, Daniel and Zanichelli, Niccol{\'o} and Notin, Pascal and Poli, Iacopo and Marks, Debora},
      journal={arXiv preprint arXiv:2205.05789},
      year={2022}
}
```
|
AnonymousSub/AR_rule_based_roberta_bert_quadruplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language: protein
tags:
- protein
datasets:
- uniref-100
---
# RITA-XL
RITA is a family of autoregressive protein models, developed by a collaboration of [Lighton](https://lighton.ai/), the [OATML group](https://oatml.cs.ox.ac.uk/) at Oxford, and the [Debbie Marks Lab](https://www.deboramarkslab.com/) at Harvard.
Model | #Params | d_model | layers | lm loss uniref-100
--- | --- | --- | --- | --- |
[Small](https://huggingface.co/lightonai/RITA_s) | 85M | 768 | 12 | 2.31
[Medium](https://huggingface.co/lightonai/RITA_m) | 300M | 1024 | 24 | 2.01
[Large](https://huggingface.co/lightonai/RITA_l)| 680M | 1536 | 24 | 1.82
[**XLarge**](https://huggingface.co/lightonai/RITA_xl)| 1.2B | 2048 | 24 | 1.70
For full results see our preprint: https://arxiv.org/abs/2205.05789
## Usage
Instantiate a model like so:
``` python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lightonai/RITA_xl", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("lightonai/RITA_xl")
```
For generation, we support pipelines:
``` python
from transformers import pipeline
rita_gen = pipeline('text-generation', model=model, tokenizer=tokenizer)
sequences = rita_gen("MAB", max_length=20, do_sample=True, top_k=950, repetition_penalty=1.2,
num_return_sequences=2, eos_token_id=2)
for seq in sequences:
print(f"seq: {seq['generated_text'].replace(' ', '')}")
```
## How to cite
```bibtex
@article{hesslow2022rita,
      title={RITA: a Study on Scaling Up Generative Protein Sequence Models},
      author={Hesslow, Daniel and Zanichelli, Niccol{\'o} and Notin, Pascal and Poli, Iacopo and Marks, Debora},
      journal={arXiv preprint arXiv:2205.05789},
      year={2022}
}
```
|
AnonymousSub/AR_rule_based_roberta_hier_quadruplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/femboi_canis/1650932783971/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1479992104306843648/e2XQNywk_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🌻 Ole Grim | Femboi | Cane | It/Its | Hy/Hym 🔞</div>
<div style="text-align: center; font-size: 14px;">@femboi_canis</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🌻 Ole Grim | Femboi | Cane | It/Its | Hy/Hym 🔞.
| Data | 🌻 Ole Grim \| Femboi \| Cane \| It/Its \| Hy/Hym 🔞 |
| --- | --- |
| Tweets downloaded | 3207 |
| Retweets | 412 |
| Short tweets | 206 |
| Tweets kept | 2589 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27g3w5y2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @femboi_canis's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/jv8wsew4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/jv8wsew4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/femboi_canis')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AnonymousSub/AR_rule_based_roberta_only_classfn_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- conversational
---
# Family Guy DialoGPT Model v2
|
AnonymousSub/AR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- summarization
- mT5
language:
- zh
widget:
- text: "專家稱維康桑格研究所(Wellcome Sanger Institute)的上述研究發現「令人震驚」而且「發人深省」。基因變異指關於我們身體成長和管理的相關指令,也就是DNA當中發生的變化。長期以來,變異一直被當作癌症的根源,但是數十年來關於變異是否對衰老有重要影響一直存在爭論。桑格研究所的研究人員說他們得到了「第一個試驗性證據」,證明了兩者的關係。他們分析了預期壽命各異的物種基因變異的不同速度。研究人員分析了貓、黑白疣猴、狗、雪貂、長頸鹿、馬、人、獅子、裸鼴鼠、兔子、老鼠、環尾狐猴和老虎等十幾種動物的DNA。發表在《自然》雜誌上的研究顯示,老鼠在短暫的生命當中每年經歷了將近800次變異,老鼠的壽命一般不到4年。"
---
---
license: apache-2.0
tags:
- Summarization
metrics:
- rouge
model-index:
- name: best_model_test_0423_small
results: []
---
# best_model_test_0423_small
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6341
- Rouge1: 18.7681
- Rouge2: 6.3762
- Rougel: 18.6081
- Rougelsum: 18.6173
- Gen Len: 22.1086
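A minimal inference sketch for a fine-tuned mT5 summarization checkpoint like this one (the repository id below is a placeholder taken from the base model named above; substitute the actual location of this fine-tuned model):
```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

# Placeholder: the card names google/mt5-small as the base model; substitute the
# fine-tuned checkpoint's repository id once it is available.
model_id = "google/mt5-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MT5ForConditionalGeneration.from_pretrained(model_id)

text = "..."  # e.g. the Chinese news paragraph shown in the widget above
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```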
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
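For reference, the configuration above roughly corresponds to the following `Seq2SeqTrainingArguments` (a sketch only, not the exact training script used):
```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; other options are left at their defaults
args = Seq2SeqTrainingArguments(
    output_dir="best_model_test_0423_small",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    predict_with_generate=True,  # assumption: needed for the ROUGE / Gen Len metrics reported below
)
```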
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 5.8165 | 0.05 | 1000 | 3.6541 | 11.6734 | 3.9865 | 11.5734 | 11.5375 | 18.0056 |
| 4.306 | 0.1 | 2000 | 3.4291 | 12.0417 | 3.8419 | 11.9231 | 11.9223 | 16.8948 |
| 4.1091 | 0.16 | 3000 | 3.3643 | 13.661 | 4.5171 | 13.5123 | 13.5076 | 19.4016 |
| 3.9637 | 0.21 | 4000 | 3.2574 | 13.8443 | 4.1761 | 13.689 | 13.6927 | 18.4288 |
| 3.8205 | 0.26 | 5000 | 3.2434 | 13.5371 | 4.3639 | 13.3551 | 13.3552 | 21.5776 |
| 3.7262 | 0.31 | 6000 | 3.1690 | 14.3668 | 4.8048 | 14.2191 | 14.1906 | 21.5548 |
| 3.6887 | 0.36 | 7000 | 3.0657 | 14.3265 | 4.436 | 14.212 | 14.205 | 20.89 |
| 3.6337 | 0.42 | 8000 | 3.0318 | 14.6809 | 4.8345 | 14.5378 | 14.5331 | 20.3651 |
| 3.5443 | 0.47 | 9000 | 3.0554 | 15.3372 | 4.9163 | 15.1794 | 15.1781 | 21.7742 |
| 3.5203 | 0.52 | 10000 | 2.9793 | 14.9278 | 4.9656 | 14.7491 | 14.743 | 20.8113 |
| 3.4936 | 0.57 | 11000 | 3.0079 | 15.7705 | 5.1453 | 15.5582 | 15.5756 | 23.4274 |
| 3.4592 | 0.62 | 12000 | 2.9721 | 15.0201 | 5.1612 | 14.8508 | 14.8198 | 22.7007 |
| 3.377 | 0.67 | 13000 | 3.0112 | 15.9595 | 5.1133 | 15.78 | 15.7774 | 23.4427 |
| 3.4158 | 0.73 | 14000 | 2.9239 | 14.7984 | 5.051 | 14.6943 | 14.6581 | 21.6009 |
| 3.378 | 0.78 | 15000 | 2.8897 | 16.5128 | 5.1923 | 16.3523 | 16.3265 | 22.0828 |
| 3.3231 | 0.83 | 16000 | 2.9347 | 16.9997 | 5.5524 | 16.8534 | 16.8737 | 22.5807 |
| 3.3268 | 0.88 | 17000 | 2.9116 | 16.0261 | 5.4226 | 15.9234 | 15.914 | 23.6988 |
| 3.3127 | 0.93 | 18000 | 2.8610 | 16.6255 | 5.3554 | 16.4729 | 16.4569 | 22.9481 |
| 3.2664 | 0.99 | 19000 | 2.8606 | 17.7703 | 5.9475 | 17.6229 | 17.6259 | 23.4423 |
| 3.1718 | 1.04 | 20000 | 2.8764 | 17.301 | 5.6262 | 17.122 | 17.1104 | 23.0093 |
| 3.0987 | 1.09 | 21000 | 2.8282 | 16.4718 | 5.2077 | 16.3394 | 16.3401 | 20.9697 |
| 3.1486 | 1.14 | 22000 | 2.8235 | 18.5594 | 5.9469 | 18.3882 | 18.3799 | 22.7291 |
| 3.1435 | 1.19 | 23000 | 2.8261 | 18.111 | 6.0309 | 17.9593 | 17.9613 | 22.9612 |
| 3.1049 | 1.25 | 24000 | 2.8068 | 17.124 | 5.5675 | 16.9714 | 16.9876 | 22.5558 |
| 3.1357 | 1.3 | 25000 | 2.8014 | 17.3916 | 5.8671 | 17.2148 | 17.2502 | 23.0075 |
| 3.0904 | 1.35 | 26000 | 2.7790 | 17.419 | 5.6689 | 17.3125 | 17.3058 | 22.1492 |
| 3.0877 | 1.4 | 27000 | 2.7462 | 17.0605 | 5.4735 | 16.9414 | 16.9378 | 21.7522 |
| 3.0694 | 1.45 | 28000 | 2.7563 | 17.752 | 5.8889 | 17.5967 | 17.619 | 23.2005 |
| 3.0498 | 1.51 | 29000 | 2.7521 | 17.9056 | 5.7754 | 17.7624 | 17.7836 | 21.9369 |
| 3.0566 | 1.56 | 30000 | 2.7468 | 18.6531 | 6.0538 | 18.5397 | 18.5038 | 22.2358 |
| 3.0489 | 1.61 | 31000 | 2.7450 | 18.4869 | 5.9297 | 18.3139 | 18.3169 | 22.0108 |
| 3.0247 | 1.66 | 32000 | 2.7449 | 18.5192 | 5.9966 | 18.3721 | 18.3569 | 22.2071 |
| 2.9877 | 1.71 | 33000 | 2.7160 | 18.1655 | 5.9294 | 18.0304 | 18.0836 | 21.4595 |
| 3.0383 | 1.76 | 34000 | 2.7202 | 18.4959 | 6.2413 | 18.3363 | 18.3431 | 22.9732 |
| 3.041 | 1.82 | 35000 | 2.6948 | 17.5306 | 5.8119 | 17.4011 | 17.4149 | 21.9435 |
| 2.9285 | 1.87 | 36000 | 2.6957 | 18.6418 | 6.1394 | 18.514 | 18.4823 | 22.5174 |
| 3.0556 | 1.92 | 37000 | 2.7000 | 18.7387 | 6.0585 | 18.5761 | 18.574 | 22.9315 |
| 3.0033 | 1.97 | 38000 | 2.6974 | 17.9387 | 6.1387 | 17.8271 | 17.8111 | 22.4726 |
| 2.9207 | 2.02 | 39000 | 2.6998 | 18.6073 | 6.1906 | 18.3891 | 18.4103 | 23.0274 |
| 2.8922 | 2.08 | 40000 | 2.6798 | 18.4017 | 6.2244 | 18.2321 | 18.2296 | 22.0697 |
| 2.8938 | 2.13 | 41000 | 2.6666 | 18.8016 | 6.2066 | 18.6411 | 18.6353 | 21.7017 |
| 2.9124 | 2.18 | 42000 | 2.6606 | 18.7544 | 6.3533 | 18.5923 | 18.5739 | 21.4303 |
| 2.8597 | 2.23 | 43000 | 2.6947 | 18.8672 | 6.4526 | 18.7416 | 18.7482 | 22.3352 |
| 2.8435 | 2.28 | 44000 | 2.6738 | 18.9405 | 6.356 | 18.7791 | 18.7729 | 21.9081 |
| 2.8672 | 2.34 | 45000 | 2.6734 | 18.7509 | 6.3991 | 18.6175 | 18.5828 | 21.8869 |
| 2.899 | 2.39 | 46000 | 2.6575 | 18.5529 | 6.3489 | 18.4139 | 18.401 | 21.7694 |
| 2.8616 | 2.44 | 47000 | 2.6485 | 18.7563 | 6.268 | 18.6368 | 18.6253 | 21.5685 |
| 2.8937 | 2.49 | 48000 | 2.6486 | 18.6525 | 6.3426 | 18.5184 | 18.5129 | 22.3337 |
| 2.8446 | 2.54 | 49000 | 2.6572 | 18.6529 | 6.2655 | 18.4915 | 18.4764 | 22.3331 |
| 2.8676 | 2.59 | 50000 | 2.6608 | 19.0913 | 6.494 | 18.929 | 18.9233 | 22.132 |
| 2.8794 | 2.65 | 51000 | 2.6583 | 18.7648 | 6.459 | 18.6276 | 18.6125 | 22.2414 |
| 2.8836 | 2.7 | 52000 | 2.6512 | 18.7243 | 6.3865 | 18.5848 | 18.5763 | 22.2551 |
| 2.8174 | 2.75 | 53000 | 2.6409 | 18.9393 | 6.3914 | 18.7733 | 18.7715 | 22.1243 |
| 2.8494 | 2.8 | 54000 | 2.6396 | 18.6126 | 6.4389 | 18.4673 | 18.4516 | 21.7638 |
| 2.9025 | 2.85 | 55000 | 2.6341 | 18.7681 | 6.3762 | 18.6081 | 18.6173 | 22.1086 |
| 2.8754 | 2.91 | 56000 | 2.6388 | 19.0828 | 6.5203 | 18.9334 | 18.9285 | 22.3497 |
| 2.8489 | 2.96 | 57000 | 2.6375 | 18.9219 | 6.4922 | 18.763 | 18.7437 | 21.9321 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
AnonymousSub/AR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/spideythefifth/1650939169930/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1505089505757384712/M9ehrLtd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🏹🏳️⚧️🏳️🌈 Gandalf the Gay🏳️⚧️🏳️🌈♠️</div>
<div style="text-align: center; font-size: 14px;">@spideythefifth</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🏹🏳️⚧️🏳️🌈 Gandalf the Gay🏳️⚧️🏳️🌈♠️.
| Data | 🏹🏳️⚧️🏳️🌈 Gandalf the Gay🏳️⚧️🏳️🌈♠️ |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 289 |
| Short tweets | 1301 |
| Tweets kept | 1654 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/og5nwknk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @spideythefifth's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2trdlzgq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2trdlzgq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/spideythefifth')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AnonymousSub/AR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/lustfulliberal-pg13scottwatson/1661800282918/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1114620037300654082/KcWDPQsE_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1231999409916764162/mo9U0uNT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Loony Liberal - Tweets or GTFO & (18+ ONLY) - The Lustful Liberal - Scorny on Main</div>
<div style="text-align: center; font-size: 14px;">@lustfulliberal-pg13scottwatson</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Loony Liberal - Tweets or GTFO & (18+ ONLY) - The Lustful Liberal - Scorny on Main.
| Data | The Loony Liberal - Tweets or GTFO | (18+ ONLY) - The Lustful Liberal - Scorny on Main |
| --- | --- | --- |
| Tweets downloaded | 3234 | 3228 |
| Retweets | 1055 | 893 |
| Short tweets | 235 | 336 |
| Tweets kept | 1944 | 1999 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20f7h18q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lustfulliberal-pg13scottwatson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1y0wr0ip) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1y0wr0ip/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lustfulliberal-pg13scottwatson')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- crcb/autotrain-data-isear_bert
co2_eq_emissions: 0.026027055434994496
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 786224257
- CO2 Emissions (in grams): 0.026027055434994496
## Validation Metrics
- Loss: 0.8348872065544128
- Accuracy: 0.7272727272727273
- Macro F1: 0.7230931630686932
- Micro F1: 0.7272727272727273
- Weighted F1: 0.7236599456423468
- Macro Precision: 0.7328252157220334
- Micro Precision: 0.7272727272727273
- Weighted Precision: 0.7336599708829821
- Macro Recall: 0.7270448163292604
- Micro Recall: 0.7272727272727273
- Weighted Recall: 0.7272727272727273
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crcb/autotrain-isear_bert-786224257
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("crcb/autotrain-isear_bert-786224257", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("crcb/autotrain-isear_bert-786224257", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
AnonymousSub/AR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language:
- zh
license: apache-2.0
inference: true
widget:
- text: "桂林市是世界闻名<mask> ,它有悠久的<mask>"
---
# Randeng-BART-139M
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
善于处理NLT任务,中文版的BART-base。
Good at solving NLT tasks, Chinese BART-base.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言转换 NLT | 燃灯 Randeng | BART | 139M | 中文-Chinese |
## 模型信息 Model Information
参考论文:[BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf)
为了得到一个中文版的BART-base,我们用悟道语料库(180G版本)进行预训练。具体地,我们在预训练阶段中使用了[封神框架](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen)大概花费了8张A100约3天。
To get a Chinese BART-base, we use WuDao Corpora (180 GB version) for pre-training. Specifically, we use the [fengshen framework](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen) in the pre-training phase which cost about 3 days with 8 A100 GPUs.
## 使用 Usage
```python
from transformers import BartForConditionalGeneration, AutoTokenizer, Text2TextGenerationPipeline
import torch
tokenizer = AutoTokenizer.from_pretrained('IDEA-CCNL/Randeng-BART-139M', use_fast=False)
model = BartForConditionalGeneration.from_pretrained('IDEA-CCNL/Randeng-BART-139M')
text = '桂林市是世界闻名<mask> ,它有悠久的<mask>'
text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
print(text2text_generator(text, max_length=50, do_sample=False))
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
AnonymousSub/EManuals_BERT_copy_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | 2022-04-26T04:40:53Z | ---
language:
- ko
tags:
- simcse
---
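This card ships only evaluation numbers; below is a minimal sketch of how KorSTS-style cosine similarities can be computed, assuming the checkpoint loads as a standard SimCSE-style encoder (the repository id below is a placeholder):
```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder id; substitute this model's actual repository id on the Hub
model_id = "path/to/this-korean-simcse-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["한 남자가 음식을 먹는다.", "한 남자가 빵을 먹고 있다."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**batch).last_hidden_state[:, 0]  # [CLS] pooling, as is typical for SimCSE
score = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(score.item())
```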
# KorSTS-dev
```
"eval_cosine_pearson": 0.8461074829101562
"eval_cosine_spearman": 0.8447369732456155
"eval_euclidean_pearson": 0.8401166200637817
"eval_euclidean_spearman": 0.8441547920405729
"eval_manhattan_pearson": 0.8404706120491028
"eval_manhattan_spearman": 0.8449217524976507
"eval_dot_pearson": 0.8457739353179932
"eval_dot_spearman": 0.8440466726739222
```
# KorSTS-test
```
"eval_cosine_pearson": 0.7702209949493408
"eval_cosine_spearman": 0.7671020822573297
"eval_euclidean_pearson": 0.7617944478988647
"eval_euclidean_spearman": 0.7651634975965186
"eval_manhattan_pearson": 0.7639209032058716
"eval_manhattan_spearman": 0.7674607376361398
"eval_dot_pearson": 0.7696021795272827
"eval_dot_spearman": 0.7667385347139427
``` |
AnonymousSub/EManuals_RoBERTa_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | # Nissan Project
---
license: mit
---
## Overview
This model is based on the [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) and [roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) models. Bart-large-mnli is a pre-trained zero-shot classification model, so no additional training is needed: we simply provide the comments and the features we want to classify. Roberta-base-squad2 is a question-answering model that helps us filter which comments mention the feature.
## Text-image matching
### Model Input
```python
import pandas as pd
from transformers import pipeline
QA_input = {
'question': 'Does it mention dependable?',
'context': input("Enter your sentence:")
}
```
### Model Process
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
res = nlp(QA_input)
if res['score'] > 0.1:
sentence = QA_input['context']
classifier = pipeline("zero-shot-classification",
model="facebook/bart-large-mnli", device=0)
sequence_to_classify = sentence
candidate_labels = ['dependable', 'not dependable']
res_2 = classifier(sequence_to_classify, candidate_labels, multi_label=False)
score = res_2.get('scores')[0]*2-1
else:
score = 0
print(score)
```
## Result
If the score is zero, the comment does not mention the feature. Otherwise, the score reflects how strongly the comment expresses the selected feature.
### Demo code (Python Notebook)
https://github.com/vanderbilt-data-science/nissan/blob/main/30-ModelFilter/question-answering.ipynb
https://github.com/vanderbilt-data-science/nissan/blob/main/31-ModelWalkthrough/label_after_filtering.ipynb |
AnonymousSub/SR_EManuals-RoBERTa | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null |
The SBERT model was trained on the dataset of UNO sustainable development goals. The total dataset size is 20000 records. 16000 were used for training and 4000 for evaluation.
The similarity between records was calculated based on the class similarity:
- case 1 (no common classes): 0
- case 2: (number of common classes) / (number of all classes)
- case 3: (number of common classes) / (maximal number of record classes) + (number of common classes) / (number of all classes)
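As a worked illustration, the three cases can be computed from two records' class sets roughly as follows (a sketch; which of case 2 or case 3 applies to a given pair, and whether "all classes" means the union of the two sets, are interpretations not spelled out above):
```python
def class_similarity(classes_a: set, classes_b: set, case: int) -> float:
    """Class-overlap similarity between two records (cases 1-3 described above)."""
    common = len(classes_a & classes_b)
    total = len(classes_a | classes_b)  # interpreting "all classes" as the union of both sets
    if common == 0:          # case 1: no common classes
        return 0.0
    if case == 2:            # case 2
        return common / total
    # case 3
    return common / max(len(classes_a), len(classes_b)) + common / total

# Example: two records tagged with UN SDG classes
print(class_similarity({"SDG3", "SDG5"}, {"SDG5", "SDG13"}, case=2))  # 1/3 ≈ 0.33
```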
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 219 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 2,
"evaluation_steps": 5,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 0,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AnonymousSub/SR_bert-base-uncased | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: en
tags:
- AMRBART
license: mit
---
## AMRBART-large-finetuned-AMR3.0-AMRParsing
This model is a fine-tuned version of [AMRBART-large](https://huggingface.co/xfbai/AMRBART-large) on the AMR3.0 dataset. It achieves a Smatch score of 84.2 on the evaluation set. More details are given in the paper [Graph Pre-training for AMR Parsing and Generation](https://arxiv.org/pdf/2203.07836.pdf) by Bai et al., ACL 2022.
## Model description
Same with AMRBART.
## Training data
The model is finetuned on [AMR3.0](https://catalog.ldc.upenn.edu/LDC2020T02), a dataset consisting of 55,635
training instances, 1,722 validation instances, and 1,898 test instances.
## Intended uses & limitations
You can use the model for AMR parsing, but it's mostly intended to be used in the domain of News.
## How to use
Here is how to initialize this model in PyTorch:
```python
from transformers import BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR3.0-AMRParsing")
```
Please refer to [this repository](https://github.com/muyeby/AMRBART) for tokenizer initialization and data preprocessing.
## BibTeX entry and citation info
Please cite this paper if you find this model helpful
```bibtex
@inproceedings{bai-etal-2022-graph,
title = "Graph Pre-training for {AMR} Parsing and Generation",
author = "Bai, Xuefeng and
Chen, Yulong and
Zhang, Yue",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "todo",
doi = "todo",
pages = "todo"
}
``` |
AnonymousSub/SR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# YOLOS (tiny-sized) model
YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS).
Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN).
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
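As an illustration (this is not code from the YOLOS repository), the matching step can be sketched with SciPy's implementation of the Hungarian algorithm; the random cost matrix below is a stand-in for the weighted combination of class, L1 box, and generalized IoU costs described above:
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

num_queries, num_objects = 100, 4                       # 100 object queries, 4 ground-truth boxes
cost_matrix = np.random.rand(num_queries, num_objects)  # stand-in for class + L1 + GIoU matching costs

# Hungarian matching: each ground-truth box is paired with exactly one query
query_idx, object_idx = linear_sum_assignment(cost_matrix)
print(list(zip(query_idx, object_idx)))                 # the remaining 96 queries are supervised as "no object"
```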
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models.
### How to use
Here is how to use this model:
```python
from transformers import YolosImageProcessor, YolosForObjectDetection
from PIL import Image
import torch
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
model = YolosForObjectDetection.from_pretrained('hustvl/yolos-tiny')
image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-tiny")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
# print results
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
box = [round(i, 2) for i in box.tolist()]
print(
f"Detected {model.config.id2label[label.item()]} with confidence "
f"{round(score.item(), 3)} at location {box}"
)
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training
The model was pre-trained for 300 epochs on ImageNet-1k and fine-tuned for 300 epochs on COCO.
## Evaluation results
This model achieves an AP (average precision) of **28.7** on COCO 2017 validation. For more details regarding evaluation results, we refer to the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-00666,
author = {Yuxin Fang and
Bencheng Liao and
Xinggang Wang and
Jiemin Fang and
Jiyang Qi and
Rui Wu and
Jianwei Niu and
Wenyu Liu},
title = {You Only Look at One Sequence: Rethinking Transformer in Vision through
Object Detection},
journal = {CoRR},
volume = {abs/2106.00666},
year = {2021},
url = {https://arxiv.org/abs/2106.00666},
eprinttype = {arXiv},
eprint = {2106.00666},
timestamp = {Fri, 29 Apr 2022 19:49:16 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
AnonymousSub/bert_triplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2022-04-26T11:17:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: glue_sst_classifier
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: F1
type: f1
value: 0.9033707865168539
- name: Accuracy
type: accuracy
value: 0.9013761467889908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
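As a minimal usage sketch (not part of the auto-generated card), the checkpoint can be queried with a text-classification pipeline. The repository id below is a placeholder, and the label names depend on the checkpoint's config:
```python
from transformers import pipeline

# placeholder id; substitute the repository where this checkpoint is hosted
classifier = pipeline("text-classification", model="<this-repo-id>")
print(classifier("a gripping and beautifully shot film"))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}]; label names depend on the config
```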
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AnonymousSub/cline-emanuals-s10-AR | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: glue_sst_classifier
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: F1
type: f1
value: 0.9033707865168539
- name: Accuracy
type: accuracy
value: 0.9013761467889908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AnonymousSub/cline-emanuals-s10-SR | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-26T11:24:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: glue_sst_classifier_2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: F1
type: f1
value: 0.9033707865168539
- name: Accuracy
type: accuracy
value: 0.9013761467889908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier_2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AnonymousSub/consert-emanuals-s10-SR | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: reviews-generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reviews-generator
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4989
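As a hedged usage sketch (the card does not include inference code and the exact prompting format used during fine-tuning is not documented here), the checkpoint can be loaded with a text2text-generation pipeline. The repository id is a placeholder:
```python
from transformers import pipeline

# placeholder id; substitute the repository where this checkpoint is hosted
generator = pipeline("text2text-generation", model="<this-repo-id>")
print(generator("This phone case is", max_length=60, do_sample=True, top_p=0.95))
```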
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7955 | 0.08 | 500 | 3.5578 |
| 3.7486 | 0.16 | 1000 | 3.4989 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AnonymousSub/consert-s10-AR | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: electricidad-small-finetuned-restaurant-sentiment-analysis-usElectionTweets1Jul11Nov-spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-small-finetuned-restaurant-sentiment-analysis-usElectionTweets1Jul11Nov-spanish
This model is a fine-tuned version of [mrm8488/electricidad-small-finetuned-restaurant-sentiment-analysis](https://huggingface.co/mrm8488/electricidad-small-finetuned-restaurant-sentiment-analysis) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3534
- Accuracy: 0.7585
- F1: 0.7585
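A minimal inference sketch, assuming the checkpoint is used for Spanish tweet sentiment classification as the base model and the name suggest. The repository id is a placeholder:
```python
from transformers import pipeline

# placeholder id; substitute the repository where this checkpoint is hosted
classifier = pipeline("text-classification", model="<this-repo-id>")
print(classifier("No me gustó nada el debate de anoche."))
```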
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.8145 | 1.0 | 1222 | 0.7033 | 0.7168 | 0.7168 |
| 0.7016 | 2.0 | 2444 | 0.5936 | 0.7731 | 0.7731 |
| 0.6183 | 3.0 | 3666 | 0.5190 | 0.8046 | 0.8046 |
| 0.5516 | 4.0 | 4888 | 0.4678 | 0.8301 | 0.8301 |
| 0.4885 | 5.0 | 6110 | 0.3670 | 0.8713 | 0.8713 |
| 0.4353 | 6.0 | 7332 | 0.3119 | 0.8987 | 0.8987 |
| 0.3957 | 7.0 | 8554 | 0.2908 | 0.9084 | 0.9084 |
| 0.3386 | 8.0 | 9776 | 0.2108 | 0.9348 | 0.9348 |
| 0.2976 | 9.0 | 10998 | 0.1912 | 0.9422 | 0.9422 |
| 0.2828 | 10.0 | 12220 | 0.1496 | 0.9591 | 0.9591 |
| 0.243 | 11.0 | 13442 | 0.1326 | 0.9639 | 0.9639 |
| 0.2049 | 12.0 | 14664 | 0.1249 | 0.9693 | 0.9693 |
| 0.2041 | 13.0 | 15886 | 0.1049 | 0.9752 | 0.9752 |
| 0.1855 | 14.0 | 17108 | 0.0816 | 0.9798 | 0.9798 |
| 0.1637 | 15.0 | 18330 | 0.0733 | 0.9836 | 0.9836 |
| 0.1531 | 16.0 | 19552 | 0.0577 | 0.9880 | 0.9880 |
| 0.1221 | 17.0 | 20774 | 0.0581 | 0.9895 | 0.9895 |
| 0.1207 | 18.0 | 21996 | 0.0463 | 0.9903 | 0.9903 |
| 0.1152 | 19.0 | 23218 | 0.0472 | 0.9908 | 0.9908 |
| 0.1028 | 20.0 | 24440 | 0.0356 | 0.9936 | 0.9936 |
| 0.1027 | 21.0 | 25662 | 0.0278 | 0.9957 | 0.9957 |
| 0.0915 | 22.0 | 26884 | 0.0344 | 0.9946 | 0.9946 |
| 0.0887 | 23.0 | 28106 | 0.0243 | 0.9954 | 0.9954 |
| 0.0713 | 24.0 | 29328 | 0.0208 | 0.9969 | 0.9969 |
| 0.0749 | 25.0 | 30550 | 0.0198 | 0.9964 | 0.9964 |
| 0.0699 | 26.0 | 31772 | 0.0153 | 0.9969 | 0.9969 |
| 0.0567 | 27.0 | 32994 | 0.0144 | 0.9972 | 0.9972 |
| 0.0613 | 28.0 | 34216 | 0.0105 | 0.9982 | 0.9982 |
| 0.0567 | 29.0 | 35438 | 0.0117 | 0.9982 | 0.9982 |
| 0.0483 | 30.0 | 36660 | 0.0072 | 0.9985 | 0.9985 |
| 0.0469 | 31.0 | 37882 | 0.0063 | 0.9987 | 0.9987 |
| 0.0485 | 32.0 | 39104 | 0.0067 | 0.9985 | 0.9985 |
| 0.0464 | 33.0 | 40326 | 0.0020 | 0.9995 | 0.9995 |
| 0.0472 | 34.0 | 41548 | 0.0036 | 0.9995 | 0.9995 |
| 0.0388 | 35.0 | 42770 | 0.0016 | 0.9995 | 0.9995 |
| 0.0248 | 36.0 | 43992 | 0.0047 | 0.9990 | 0.9990 |
| 0.0396 | 37.0 | 45214 | 0.0004 | 0.9997 | 0.9997 |
| 0.0331 | 38.0 | 46436 | 0.0020 | 0.9995 | 0.9995 |
| 0.0292 | 39.0 | 47658 | 0.0000 | 1.0 | 1.0 |
| 0.0253 | 40.0 | 48880 | 0.0001 | 1.0 | 1.0 |
| 0.0285 | 41.0 | 50102 | 0.0000 | 1.0 | 1.0 |
| 0.0319 | 42.0 | 51324 | 0.0000 | 1.0 | 1.0 |
| 0.0244 | 43.0 | 52546 | 0.0000 | 1.0 | 1.0 |
| 0.0261 | 44.0 | 53768 | 0.0001 | 1.0 | 1.0 |
| 0.0256 | 45.0 | 54990 | 0.0000 | 1.0 | 1.0 |
| 0.0258 | 46.0 | 56212 | 0.0000 | 1.0 | 1.0 |
| 0.0173 | 47.0 | 57434 | 0.0000 | 1.0 | 1.0 |
| 0.0253 | 48.0 | 58656 | 0.0000 | 1.0 | 1.0 |
| 0.0241 | 49.0 | 59878 | 0.0000 | 1.0 | 1.0 |
| 0.019 | 50.0 | 61100 | 0.0000 | 1.0 | 1.0 |
| 0.0184 | 51.0 | 62322 | 0.0000 | 1.0 | 1.0 |
| 0.0139 | 52.0 | 63544 | 0.0000 | 1.0 | 1.0 |
| 0.0159 | 53.0 | 64766 | 0.0000 | 1.0 | 1.0 |
| 0.0119 | 54.0 | 65988 | 0.0000 | 1.0 | 1.0 |
| 0.0253 | 55.0 | 67210 | 0.0000 | 1.0 | 1.0 |
| 0.0166 | 56.0 | 68432 | 0.0000 | 1.0 | 1.0 |
| 0.0125 | 57.0 | 69654 | 0.0000 | 1.0 | 1.0 |
| 0.0155 | 58.0 | 70876 | 0.0000 | 1.0 | 1.0 |
| 0.0106 | 59.0 | 72098 | 0.0000 | 1.0 | 1.0 |
| 0.0083 | 60.0 | 73320 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AnonymousSub/dummy_1 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
language: en # <-- my language
widget:
- text: "Job hunting!"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Job Search (1), else (0)
- country: US
- language: English
- architecture: BERT base
## Model description
This model is a version of `DeepPavlov/bert-base-cased-conversational` fine-tuned to recognize English tweets where a user mentions that she is currently looking for a job. It was trained on English tweets from US-based users. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that a user is currently looking for a job (label=1)
- the negative class referring to all other tweets (label=0)
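A minimal inference sketch using the widget example from this card; the repository id is a placeholder for wherever this checkpoint is hosted:
```python
from transformers import pipeline

# placeholder id; substitute the repository where this checkpoint is hosted
classifier = pipeline("text-classification", model="<this-repo-id>")
print(classifier("Job hunting!"))  # label 1 means the user is currently looking for a job
```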
## Resources
The dataset of English tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
AnonymousSub/hier_triplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- flair
language: en
datasets:
- conll2003
license: afl-3.0
---
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("Saisam/Inquirer_ner")
# make example sentence
sentence = Sentence("George Washington went to Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
``` |
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-OTTO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-OTTO
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2745
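The card does not state the training objective. Assuming the fine-tuning kept the base model's masked-language-modelling objective, the checkpoint can be probed with a fill-mask pipeline; the repository id is a placeholder:
```python
from transformers import pipeline

# placeholder id; substitute the repository where this checkpoint is hosted
fill_mask = pipeline("fill-mask", model="<this-repo-id>")
print(fill_mask("The new collection will be [MASK] next week."))
```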
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7687 | 1.0 | 17 | 3.3507 |
| 3.5069 | 2.0 | 34 | 3.2786 |
| 3.4126 | 3.0 | 51 | 3.2575 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language: es
tags:
- GPT-2
- Spanish
- review
- fake
datasets:
- amazon_reviews_multi
widget:
- text: "Me ha gustado su"
example_title: "Positive review"
- text: "No quiero"
example_title: "Negative review"
license: mit
---
# GPT-2 - reviewspanish
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of text data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
In our case, we created a fine-tuned model by combining [Spanish GPT-2](https://huggingface.co/DeepESP/gpt2-spanish) with
the Spanish Amazon reviews from the Hugging Face dataset [Amazon-reviews-multi](https://huggingface.co/datasets/amazon_reviews_multi).
With this strategy, we obtain a text-generation model able to create realistic product reviews, which is useful for bot detection in
fake reviews.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation',
model='Amloii/gpt2-reviewspanish',
tokenizer='Amloii/gpt2-reviewspanish')
set_seed(42)
generator("Me ha gustado su", max_length=30, num_return_sequences=5)
[{'generated_text': 'Me ha gustado su tamaño y la flexibilidad de las correas, al ser de plastico las hebillas que lleva para sujetar las cadenas me han quitado el'},
{'generated_text': 'Me ha gustado su color y calidad. Lo peor de todo, es que las gafas no se pegan nada. La parte de fuera es finita'},
{'generated_text': 'Me ha gustado su rapidez y los ajustes de la correa, lo único que para mí, es poco manejable. Además en el bolso tiene una goma'},
{'generated_text': 'Me ha gustado su diseño y las dimensiones, pero el material es demasiado duro. Se nota bastante el uso pero me parece un poco caro para lo que'},
{'generated_text': 'Me ha gustado su aspecto aunque para lo que yo lo quería no me ha impresionado mucho. Las hojas tienen un tacto muy agradable que hace que puedas'}]
```
|
AnonymousSub/rule_based_only_classfn_twostage_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- conversational
---
# Connor DialoGPT Model
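A minimal chat-loop sketch in the style of other DialoGPT cards. The base checkpoint id below is a stand-in; replace it with this repository's id to chat with the fine-tuned Connor model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# stand-in id; replace with this repository's id to use the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

chat_history_ids = None
for step in range(3):
    new_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_input_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_input_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
``` |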
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_wikiqa_copy | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distil-bert-finetuned-log-parser-winlogbeat
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distil-bert-finetuned-log-parser-winlogbeat
This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: mit
language:
- en
tags:
- biology
- medical
- language models
- BioNLP
---
In-BoXBART
=============
An instruction-based unified model for performing various biomedical tasks.
You may want to check out
* Our paper (NAACL 2022 Findings): [In-BoXBART: Get Instructions into Biomedical Multi-Task Learning](https://aclanthology.org/2022.findings-naacl.10/)
* GitHub: [Click Here](https://github.com/Mihir3009/In-BoXBART)
This work explores the impact of instructional prompts on biomedical Multi-Task Learning. We introduce the BoX, a collection of 32 instruction tasks for Biomedical NLP across (X) various categories. Using this meta-dataset, we propose a unified model termed In-BoXBART, that can jointly learn all tasks of the BoX without any task-specific modules. To the best of our knowledge, this is the first attempt to
propose a unified model in the biomedical domain and use instructions to achieve generalization across several biomedical tasks.
Direct Use
=============
It is possible to use this model to generate text, which is useful for experimentation and understanding its capabilities. **It should not be directly used for production or work that may directly impact people.**
How to Use
=============
You can easily load the models with Transformers instead of downloading them manually. The BART-base model is the backbone of our model. Here is how to use the model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("cogint/in-boxbart")
model = AutoModelForSeq2SeqLM.from_pretrained("cogint/in-boxbart")
```
Or just clone the model repo
```
git lfs install
git clone https://huggingface.co/cogint/in-boxbart
```
Inference Example
=============
Here, we provide an example for the "Document Classification" (HoC dataset) task. Once you load the model from Hugging Face for inference, you can prepend the instruction given in `./templates` for that particular dataset to the input instance. Below is an example of one instance.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("cogint/in-boxbart")
model = AutoModelForSeq2SeqLM.from_pretrained("cogint/in-boxbart")
# The input shows how the instruction from our HoC template file is prepended to the instance.
input = "Instruction: Definition: In this task, you are given a medical text related to cancer. Your job is to classify into zero or more classes from (1) Sustaining proliferative signaling, (2) Resisting cell death, (3) Genomic instability and mutation, (4) Activating invasion and metastasis, (5) Tumor promoting inflammation, (6) Evading growth suppressors, (7) Inducing angiogenesis (8) Enabling replicative immortality, (9) Avoiding immune destruction and (10) Cellular energetics., Positive Examples: [[input: Studies of cell-cycle progression showed that the anti-proliferative effect of Fan was associated with an increase in the G1/S phase of PC3 cells ., output: Evading growth suppressors, Sustaining proliferative signaling, explanation: Given text is classified into two categories, hence, generated label is 'Evading growth suppressors, Sustaining proliferative signaling'.] ]; Instance: input: Similar to previous studies utilizing IGF-1 , pretreatment with Roscovitine leads to a significant up-regulation of p21 expression and a significant decrease in the number of PCNA positive cells ., output: ?"
tokenized_input = tokenizer(input, return_tensors="pt")
# The ideal output for this input is 'Sustaining proliferative signaling'
output_ids = model.generate(**tokenized_input)
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
```
BibTeX Entry and Citation Info
===============
If you are using our model, please cite our paper:
```bibtex
@inproceedings{parmar-etal-2022-boxbart,
title = "In-{B}o{XBART}: Get Instructions into Biomedical Multi-Task Learning",
author = "Parmar, Mihir and
Mishra, Swaroop and
Purohit, Mirali and
Luo, Man and
Mohammad, Murad and
Baral, Chitta",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-naacl.10",
doi = "10.18653/v1/2022.findings-naacl.10",
pages = "112--128",
}
``` |
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.5495
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6633
- Rouge1: 24.5495
- Rouge2: 11.8286
- Rougel: 20.2968
- Rougelsum: 23.1682
- Gen Len: 18.9993
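A minimal generation sketch, assuming the checkpoint keeps T5's usual "summarize: " prefix; the repository id is a placeholder:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# placeholder id; substitute the repository where this checkpoint is hosted
tokenizer = AutoTokenizer.from_pretrained("<this-repo-id>")
model = AutoModelForSeq2SeqLM.from_pretrained("<this-repo-id>")

article = "The city council approved the new transit plan on Monday after months of debate."
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```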
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.7951 | 1.0 | 35890 | 1.6633 | 24.5495 | 11.8286 | 20.2968 | 23.1682 | 18.9993 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1_wikiqa | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | null | ---
title: Neural Style Transfer
emoji: 📊
colorFrom: green
colorTo: green
sdk: gradio
app_file: app.py
pinned: false
---
# Configuration
`title`: _string_
Display title for the Space
`emoji`: _string_
Space emoji (emoji-only character allowed)
`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`sdk`: _string_
Can be either `gradio` or `streamlit`
`sdk_version` : _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code).
Path is relative to the root of the repository.
`pinned`: _boolean_
Whether the Space stays on top of your list.
|
AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1_wikiqa | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | 2022-04-27T03:18:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: opus-mt-ko-en-finetuned-en-to-ko
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-ko
metrics:
- name: Bleu
type: bleu
value: 17.4129
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ko-en-finetuned-en-to-ko
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1606
- Bleu: 17.4129
- Gen Len: 10.8989
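A minimal translation sketch; the repository id is a placeholder. Note that the base checkpoint translates Korean to English while the model name and dataset configuration suggest the fine-tuned direction is English to Korean, so verify the direction on a few examples first:
```python
from transformers import pipeline

# placeholder id; substitute the repository where this checkpoint is hosted
translator = pipeline("translation", model="<this-repo-id>")
print(translator("Open the file manager.", max_length=40))
```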
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 2.3645 | 1.0 | 3596 | 2.1606 | 17.4129 | 10.8989 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2022-04-27T03:37:39Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
language: ko
widget:
- text: "코딩을 🐶🍾👟같이 하니까 맨날 장애나잖아 이 🧑🦽아"
datasets:
- jason9693/APEACH
--- |
AnonymousSub/unsup-consert-base_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
language:
- en
tags:
- part-of-speech
- finetuned
license: cc-by-nc-3.0
---
# BERT-base-multilingual-cased finetuned for Part-of-Speech tagging
This is a multilingual BERT model fine-tuned for English part-of-speech tagging. It was trained on the Penn Treebank (Marcus et al., 1993) and achieves an F1-score of 96.69.
## Usage
A *transformers* pipeline can be used to run the model:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, TokenClassificationPipeline
model_name = "QCRI/bert-base-multilingual-cased-pos-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
pipeline = TokenClassificationPipeline(model=model, tokenizer=tokenizer)
outputs = pipeline("A test example")
print(outputs)
```
## Citation
This model was used for all the part-of-speech tagging based results in *Analyzing Encoded Concepts in Transformer Language Models*, published at NAACL'22. If you find this model useful for your own work, please use the following citation:
```bib
@inproceedings{sajjad-NAACL,
title={Analyzing Encoded Concepts in Transformer Language Models},
author={Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Firoj Alam, Abdul Rafae Khan and Jia Xu},
booktitle={North American Chapter of the Association of Computational Linguistics: Human Language Technologies (NAACL)},
series={NAACL~'22},
year={2022},
address={Seattle}
}
``` |
AnonymousSub/unsup-consert-papers-bert | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language: ja
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: apache-2.0
tags:
- feature-extraction
- ja
- japanese
- clip
- cloob
- vision
---
# rinna/japanese-cloob-vit-b-16

This is a Japanese [CLOOB (Contrastive Leave One Out Boost)](https://arxiv.org/abs/2110.11316) model trained by [rinna Co., Ltd.](https://corp.rinna.co.jp/).
Please see [japanese-clip](https://github.com/rinnakk/japanese-clip) for the other available models.
# How to use the model
1. Install package
```shell
$ pip install git+https://github.com/rinnakk/japanese-clip.git
```
2. Run
```python
import io
import requests
from PIL import Image
import torch
import japanese_clip as ja_clip
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = ja_clip.load("rinna/japanese-cloob-vit-b-16", device=device)
tokenizer = ja_clip.load_tokenizer()
img = Image.open(io.BytesIO(requests.get('https://images.pexels.com/photos/2253275/pexels-photo-2253275.jpeg?auto=compress&cs=tinysrgb&dpr=3&h=750&w=1260').content))
image = preprocess(img).unsqueeze(0).to(device)
encodings = ja_clip.tokenize(
texts=["犬", "猫", "象"],
max_seq_len=77,
device=device,
tokenizer=tokenizer, # this is optional. if you don't pass, load tokenizer each time
)
with torch.no_grad():
image_features = model.get_image_features(image)
text_features = model.get_text_features(**encodings)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print("Label probs:", text_probs) # prints: [[1.0, 0.0, 0.0]]
```
# Model architecture
The model uses a ViT-B/16 Transformer as its image encoder and a 12-layer BERT as its text encoder. The image encoder was initialized from the [AugReg `vit-base-patch16-224` model](https://github.com/google-research/vision_transformer).
# Training
The model was trained on [CC12M](https://github.com/google-research-datasets/conceptual-12m) with the captions translated to Japanese.
# License
[The Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0) |
AnonymousSubmission/pretrained-model-1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: pt # <-- my language
widget:
- text: "Primeiro dia do novo emprego!"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Is Hired (1), else (0)
- country: BR
- language: Portuguese
- architecture: BERT base
## Model description
This model is a version of `neuralmind/bert-base-portuguese-cased` fine-tuned to recognize Portuguese tweets where a user mentions that she was hired in the past month. It was trained on Portuguese tweets from users based in Brazil. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that a user was recently hired (label=1)
- the negative class referring to all other tweets (label=0)
## Resources
The dataset of Portuguese tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
Anthos23/sentiment-roberta-large-english-finetuned-sentiment-analysis | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-27T09:08:17Z | ---
tags:
- summarization
- mT5
language:
- zh
widget:
- text: "專家稱維康桑格研究所(Wellcome Sanger Institute)的上述研究發現「令人震驚」而且「發人深省」。基因變異指關於我們身體成長和管理的相關指令,也就是DNA當中發生的變化。長期以來,變異一直被當作癌症的根源,但是數十年來關於變異是否對衰老有重要影響一直存在爭論。桑格研究所的研究人員說他們得到了「第一個試驗性證據」,證明了兩者的關係。他們分析了預期壽命各異的物種基因變異的不同速度。研究人員分析了貓、黑白疣猴、狗、雪貂、長頸鹿、馬、人、獅子、裸鼴鼠、兔子、老鼠、環尾狐猴和老虎等十幾種動物的DNA。發表在《自然》雜誌上的研究顯示,老鼠在短暫的生命當中每年經歷了將近800次變異,老鼠的壽命一般不到4年。"
inference:
parameters:
max_length: 120
--- |
Anthos23/test_trainer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- faisalahmad/autotrain-data-nsut-nlp-project-textsummarization
co2_eq_emissions: 1119.6398037843474
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 791824374
- CO2 Emissions (in grams): 1119.6398037843474
## Validation Metrics
- Loss: 1.6432833671569824
- Rouge1: 38.5315
- Rouge2: 18.0869
- RougeL: 32.3742
- RougeLsum: 32.3801
- Gen Len: 19.846
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/faisalahmad/autotrain-nsut-nlp-project-textsummarization-791824374
``` |
gaurishhs/API | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-27T09:17:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
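As a minimal illustration of intended use, a SQuAD-style checkpoint such as this one can typically be queried through the question-answering pipeline. The checkpoint name below is a placeholder rather than the model's confirmed Hub id, and the question/context pair is an arbitrary example.
```python
from transformers import pipeline

# Placeholder checkpoint name: substitute the model's actual Hub id.
qa = pipeline("question-answering", model="bert-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
# The pipeline returns the extracted answer span and a confidence score.
print(result["answer"], result["score"])
```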
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
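Purely as an illustration, the configuration above roughly maps onto a `TrainingArguments` object like the sketch below. The `output_dir` is a placeholder, `fp16=True` stands in for the Native AMP entry, and the surrounding dataset preparation and `Trainer` call are omitted.
```python
from transformers import TrainingArguments

# Sketch of the reported configuration; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="bert-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    fp16=True,  # corresponds to "mixed_precision_training: Native AMP"
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```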
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.2
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Apisate/DialoGPT-small-jordan | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2022-04-27T09:34:19Z | ---
language: en
thumbnail: http://www.huggingtweets.com/ai_curio_bot/1666644371831/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1562047623887986688/YRlTGiIf_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ai_curio_bot</div>
<div style="text-align: center; font-size: 14px;">@ai_curio_bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ai_curio_bot.
| Data | ai_curio_bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 1 |
| Short tweets | 606 |
| Tweets kept | 2643 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2mpm4hdo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ai_curio_bot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/eoc5mtd8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/eoc5mtd8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ai_curio_bot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Aplinxy9plin/toxic-detection-rus | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: pt
widget:
- text: "VAGA - Assistente Comercial - São Paulo; Interessados mandar currículo"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Job Offer (1), else (0)
- country: BR
- language: Portuguese
- architecture: BERT base
## Model description
This model is a version of `neuralmind/bert-base-portuguese-cased` fine-tuned to recognize Portuguese tweets containing a job offer. It was trained on Portuguese tweets from users based in Brazil. The task is framed as a binary classification problem with the following classes (a usage sketch follows the list):
- the positive class referring to tweets containing a job offer (label=1)
- the negative class referring to all other tweets (label=0)
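A minimal inference sketch for this classifier: the checkpoint name `job-offer-bert-pt` is a placeholder, since the card does not state the published Hub id, and the example tweet is the widget text above. The label names depend on the model's config.
```python
from transformers import pipeline

# Placeholder checkpoint id: replace with the model's actual Hub path.
classifier = pipeline("text-classification", model="job-offer-bert-pt")

tweet = "VAGA - Assistente Comercial - São Paulo; Interessados mandar currículo"
prediction = classifier(tweet)[0]
# Depending on the config, the positive class may be reported as "LABEL_1"
# (job offer) and the negative class as "LABEL_0".
print(prediction["label"], prediction["score"])
```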
## Resources
The dataset of Portuguese tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
Apoorva/k2t-test | [
"pytorch",
"t5",
"text2text-generation",
"en",
"transformers",
"keytotext",
"k2t",
"Keywords to Sentences",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 7 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8588964027959312
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1383
- F1: 0.8589
## Model description
More information needed
## Intended uses & limitations
More information needed
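As a sketch of typical use, a PAN-X.de fine-tune like this one can usually be run through the token-classification pipeline for German named-entity recognition. The checkpoint name below is a placeholder rather than the confirmed Hub id, and the example sentence is arbitrary.
```python
from transformers import pipeline

# Placeholder checkpoint path: substitute the model's real Hub id.
ner = pipeline(
    "token-classification",
    model="xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

for entity in ner("Jeff Dean arbeitet bei Google in Kalifornien."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```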
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2631 | 1.0 | 525 | 0.1596 | 0.8218 |
| 0.1296 | 2.0 | 1050 | 0.1353 | 0.8479 |
| 0.0821 | 3.0 | 1575 | 0.1383 | 0.8589 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|